modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
jssky/e2159e9b-9954-4b13-b5ab-336fc1891df9 | jssky | "2024-12-08T15:12:54Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T15:09:09Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2159e9b-9954-4b13-b5ab-336fc1891df9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: false
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 94b2438f87da807a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/94b2438f87da807a_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
devices:
- 0
- 1
- 2
- 3
- 4
- 5
- 6
- 7
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: jssky/e2159e9b-9954-4b13-b5ab-336fc1891df9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 1
mlflow_experiment_name: /tmp/94b2438f87da807a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
num_gpus: 8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 4056
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_batch_size: 32
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2159e9b-9954-4b13-b5ab-336fc1891df9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2159e9b-9954-4b13-b5ab-336fc1891df9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e2159e9b-9954-4b13-b5ab-336fc1891df9
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9925
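For reference, a minimal inference sketch with 🤗 PEFT is shown below (the base-model and adapter IDs are taken from the config above; the prompt and generation settings are illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-2-7b-chat"
adapter_id = "jssky/e2159e9b-9954-4b13-b5ab-336fc1891df9"

# Load the base model, then attach the LoRA adapter from this repository.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Explain LoRA fine-tuning in one sentence."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```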
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0081 | 0.0015 | 1 | 1.1038 |
| 1.402 | 0.0044 | 3 | 1.0944 |
| 0.9614 | 0.0087 | 6 | 1.0460 |
| 0.9996 | 0.0131 | 9 | 0.9925 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
deepnet/SN6-77S1 | deepnet | "2024-03-27T18:03:35Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-27T00:19:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PranavSharma25/finetuning-sentiment-model-3000-samples | PranavSharma25 | "2024-11-14T06:45:55Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-14T05:48:53Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6954
- Accuracy: 0.4733
- F1: 0.0920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-50
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
CodyKilpatrick/ppo-LunarLander-v2 | CodyKilpatrick | "2023-06-20T15:07:45Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T15:17:47Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.08 +/- 20.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is shown below; the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption, so check the repo's Files tab for the actual name.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed checkpoint filename for this repo (verify in the Files & versions tab).
checkpoint = load_from_hub("CodyKilpatrick/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
baesad/Llama3.2-BLChat-3B | baesad | "2025-02-02T06:10:11Z" | 17 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T15:17:16Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** baesad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/bancinxl-v20-sdxl | John6666 | "2024-12-23T06:50:48Z" | 69 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-23T02:43:47Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- pony
---
Original model is [here](https://civitai.com/models/875403/bancinxl?modelVersionId=1088540).
This model was created by [n_Arno](https://civitai.com/user/n_Arno).
|
Fetanos/ppo-Pyramids | Fetanos | "2024-05-15T12:50:50Z" | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-05-15T12:49:52Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fetanos/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DrNicefellow/Mistral-5-from-Mixtral-8x7B-v0.1 | DrNicefellow | "2024-04-12T16:23:37Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-11T12:07:48Z" | ---
license: apache-2.0
---
# Mixtral-8x7B--v0.1: Model 5
## Model Description
This model is the 5th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
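To make the construction concrete, here is a rough, hedged sketch of how one might copy the shared weights plus a single expert's MLP from Mixtral into a dense Mistral model. This is not the linked extractor tool; it ignores details such as rope and sliding-window settings, may need adjustment across transformers versions, and requires enough RAM to hold both models:
```python
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

mixtral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16, device_map="cpu"
)
cfg = mixtral.config
dense = MistralForCausalLM(MistralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
))

expert_idx = 0  # which expert to keep; illustrative choice
dense.model.embed_tokens.load_state_dict(mixtral.model.embed_tokens.state_dict())
dense.model.norm.load_state_dict(mixtral.model.norm.state_dict())
dense.lm_head.load_state_dict(mixtral.lm_head.state_dict())
for src, dst in zip(mixtral.model.layers, dense.model.layers):
    dst.self_attn.load_state_dict(src.self_attn.state_dict())
    dst.input_layernorm.load_state_dict(src.input_layernorm.state_dict())
    dst.post_attention_layernorm.load_state_dict(src.post_attention_layernorm.state_dict())
    expert = src.block_sparse_moe.experts[expert_idx]
    # Mixtral expert MLP maps onto Mistral's dense MLP as: w1 -> gate_proj, w3 -> up_proj, w2 -> down_proj
    dst.mlp.gate_proj.load_state_dict(expert.w1.state_dict())
    dst.mlp.up_proj.load_state_dict(expert.w3.state_dict())
    dst.mlp.down_proj.load_state_dict(expert.w2.state_dict())

dense.save_pretrained("./mistral-from-mixtral-expert0")
```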
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-5-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is available under the Apache 2.0 License. See the LICENSE file for more details.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
|
Helsinki-NLP/opus-mt-ur-en | Helsinki-NLP | "2023-08-16T12:08:24Z" | 10,804 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ur",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- ur
- en
tags:
- translation
license: apache-2.0
---
### urd-eng
* source group: Urdu
* target group: English
* OPUS readme: [urd-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md)
* model: transformer-align
* source language(s): urd
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.urd.eng | 23.2 | 0.435 |
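A minimal usage sketch with 🤗 Transformers is shown below (the Urdu example sentence and generation settings are illustrative):
```python
from transformers import pipeline

# Urdu -> English translation with the released checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ur-en")
result = translator("میں نے کتاب پڑھی۔", max_length=64)
print(result[0]["translation_text"])
```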
### System Info:
- hf_name: urd-eng
- source_languages: urd
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ur', 'en']
- src_constituents: {'urd'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt
- src_alpha3: urd
- tgt_alpha3: eng
- short_pair: ur-en
- chrF2_score: 0.435
- bleu: 23.2
- brevity_penalty: 0.975
- ref_len: 12029.0
- src_name: Urdu
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ur
- tgt_alpha2: en
- prefer_old: False
- long_pair: urd-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
muhtasham/small-mlm-glue-cola-custom-tokenizer-expand-vocab | muhtasham | "2023-01-31T22:23:23Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-01-31T21:45:35Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: small-mlm-glue-cola-custom-tokenizer-expand-vocab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-cola-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7408
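A minimal fill-mask sketch with 🤗 Transformers (assuming the expanded custom tokenizer keeps BERT's `[MASK]` token; the sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="muhtasham/small-mlm-glue-cola-custom-tokenizer-expand-vocab")
for prediction in fill_mask("The results of the study were [MASK] significant."):
    print(prediction["token_str"], round(prediction["score"], 4))
```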
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9517 | 0.47 | 500 | 4.2375 |
| 4.2066 | 0.94 | 1000 | 3.8797 |
| 3.7476 | 1.4 | 1500 | 3.7590 |
| 3.6681 | 1.87 | 2000 | 3.5806 |
| 3.4312 | 2.34 | 2500 | 3.3642 |
| 3.3021 | 2.81 | 3000 | 3.0777 |
| 3.143 | 3.27 | 3500 | 3.2374 |
| 2.9997 | 3.74 | 4000 | 2.9701 |
| 2.9106 | 4.21 | 4500 | 3.0228 |
| 2.7981 | 4.68 | 5000 | 2.7408 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
NCW/My-new-work | NCW | "2022-08-13T00:13:22Z" | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | "2022-08-13T00:13:22Z" | ---
license: afl-3.0
---
|
Gordon119/TAT-openai-whisper-large-v2-mix-tag-epoch5-total5epoch | Gordon119 | "2024-03-10T07:05:35Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-10T07:05:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
beston91/gpt2-xl_ft_mult_5k | beston91 | "2022-03-20T17:31:57Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-19T08:50:34Z" | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_5k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.3035 |
| No log | 1.99 | 54 | 1.2709 |
| No log | 2.99 | 81 | 0.7482 |
| No log | 3.99 | 108 | 0.6758 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 21.267963409423828
### Dataset Size
Size: 5000 |
Best000/2b707c33-8da2-4a21-b508-4b42124561ed | Best000 | "2025-02-04T06:16:12Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | "2025-02-04T06:09:23Z" | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b707c33-8da2-4a21-b508-4b42124561ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 2b707c33-8da2-4a21-b508-4b42124561ed
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KonstantinosVlachakis/llama2-13B-FT | KonstantinosVlachakis | "2024-01-24T15:54:34Z" | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | "2024-01-24T15:49:18Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
harshit-070/distilbert-base-uncased-finetuned-squad | harshit-070 | "2023-01-05T10:24:34Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-01-05T10:09:29Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
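A minimal question-answering sketch with 🤗 Transformers (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="harshit-070/distilbert-base-uncased-finetuned-squad")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(answer["answer"], answer["score"])
```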
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
metythorn/donut-base-khmerID | metythorn | "2024-06-03T16:59:32Z" | 47 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-06-01T18:20:34Z" | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-khmerID
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-khmerID
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fbaldassarri/meta-llama_Llama-3.2-1B-Instruct-auto_gptq-int8-gs128-asym | fbaldassarri | "2025-01-09T20:16:54Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autoround",
"auto-round",
"autogptq",
"gptq",
"auto-gptq",
"woq",
"meta",
"pytorch",
"llama-3",
"intel-autoround",
"intel",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | "2025-01-09T14:11:20Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.2
library_name: transformers
tags:
- autoround
- auto-round
- autogptq
- gptq
- auto-gptq
- woq
- meta
- pytorch
- llama
- llama-3
- intel-autoround
- intel
model_name: Llama 3.2 1B Instruct
base_model: meta-llama/Llama-3.2-1B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 128
- Asymmetrical Quantization
- Method AutoGPTQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round)
Note: this INT8 version of Llama-3.2-1B-Instruct has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/meta-llama_Llama-3.2-1B-Instruct-auto_gptq-int8-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
MiiiTiii/DeepSeek-R1-MQA | MiiiTiii | "2025-02-01T02:40:49Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-01T02:32:48Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MiiiTiii
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mahmoud8/sentiment_analysis_model | Mahmoud8 | "2024-04-17T15:12:26Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-17T15:02:59Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment_analysis_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_analysis_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7543
- Accuracy: 0.8483
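A minimal text-classification sketch with 🤗 Transformers (the input sentence is illustrative; label names depend on how the model was exported):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Mahmoud8/sentiment_analysis_model")
print(classifier("The product arrived quickly and works exactly as described."))
```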
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.4045 | 0.8317 |
| No log | 2.0 | 300 | 0.4403 | 0.83 |
| No log | 3.0 | 450 | 0.5234 | 0.8325 |
| 0.3116 | 4.0 | 600 | 0.5604 | 0.8367 |
| 0.3116 | 5.0 | 750 | 0.6089 | 0.8425 |
| 0.3116 | 6.0 | 900 | 0.6792 | 0.85 |
| 0.0814 | 7.0 | 1050 | 0.7147 | 0.8508 |
| 0.0814 | 8.0 | 1200 | 0.7421 | 0.8517 |
| 0.0814 | 9.0 | 1350 | 0.7794 | 0.845 |
| 0.0302 | 10.0 | 1500 | 0.7543 | 0.8483 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.13.3
|
LarryAIDraw/noa_bluearchive | LarryAIDraw | "2024-03-25T07:14:10Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-26T08:06:18Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/122000?modelVersionId=156935 |
Shadow-AI/Playboi_Carti_Deep_Voice_300_Epochs_RVC_V2 | Shadow-AI | "2023-09-02T14:09:32Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-09-02T14:06:52Z" | ---
license: openrail
---
|
dimasik1987/cd25eb61-ff07-4097-b643-809026dbde60 | dimasik1987 | "2025-01-14T04:29:52Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | "2025-01-14T04:19:04Z" | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd25eb61-ff07-4097-b643-809026dbde60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f6fce09fa93faa88_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f6fce09fa93faa88_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik1987/cd25eb61-ff07-4097-b643-809026dbde60
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f6fce09fa93faa88_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4626d0fc-ba2c-47a9-a030-a28b5b9c4d26
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4626d0fc-ba2c-47a9-a030-a28b5b9c4d26
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cd25eb61-ff07-4097-b643-809026dbde60
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 1.9343 |
| 1.9998 | 0.0039 | 5 | 1.8601 |
| 1.9455 | 0.0078 | 10 | 1.7355 |
| 1.7123 | 0.0117 | 15 | 1.7241 |
| 1.7362 | 0.0156 | 20 | 1.7186 |
| 1.7954 | 0.0195 | 25 | 1.7157 |
| 1.6821 | 0.0234 | 30 | 1.7151 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF | kagevazquez | "2025-01-23T01:50:19Z" | 1,037 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated",
"base_model:quantized:stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-23T01:48:49Z" | ---
language:
- en
base_model: stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated
tags:
- llama-cpp
- gguf-my-repo
---
# kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated`](https://huggingface.co/stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/stepenZEN/DeepSeek-R1-Distill-Qwen-32B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo kagevazquez/DeepSeek-R1-Distill-Qwen-32B-abliterated-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-abliterated-q4_k_m.gguf -c 2048
```
|
LHRuig/chrishmswrth5 | LHRuig | "2025-01-18T06:27:55Z" | 206 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-18T06:26:37Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: chrishmswrth5
---
# chrishmswrth5
<Gallery />
## Model description
chrishmswrth5 lora
## Trigger words
You should use `chrishmswrth5` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/chrishmswrth5/tree/main) them in the Files & versions tab.
|
slimaneMakh/BinarySuperClass_Cash_and_cash_equivalents_tableClassification_13may_paraphrase-mul | slimaneMakh | "2024-05-15T13:52:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-15T13:52:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vincegmz/dreamboost_lora_mnistm_zero_batch_size1_with_prior_preservation | vincegmz | "2023-10-28T02:44:21Z" | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-10-28T02:40:03Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of color zero
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - vincegmz/dreamboost_lora_mnistm_zero_batch_size1_with_prior_preservation
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of color zero" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
LoRA for the text encoder was enabled: False.
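A minimal loading sketch with `diffusers` (an assumption about the intended usage; `load_lora_weights` requires a reasonably recent diffusers release):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA adaptation weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("vincegmz/dreamboost_lora_mnistm_zero_batch_size1_with_prior_preservation")

# Generate with the instance prompt used during training.
image = pipe("a photo of color zero", num_inference_steps=30).images[0]
image.save("color_zero.png")
```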
|
CyberHarem/ohara_mari_lovelivesunshine | CyberHarem | "2023-09-25T12:55:48Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/ohara_mari_lovelivesunshine",
"license:mit",
"region:us"
] | text-to-image | "2023-08-15T00:06:34Z" | ---
license: mit
datasets:
- CyberHarem/ohara_mari_lovelivesunshine
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of ohara_mari_lovelivesunshine
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 4000, download `4000/ohara_mari_lovelivesunshine.pt` as the embedding and `4000/ohara_mari_lovelivesunshine.safetensors` as the LoRA. With both files in place, you can generate images of the character.
**The best step we recommend is 4000**, with a score of 0.956. The trigger words are:
1. `ohara_mari_lovelivesunshine`
2. `blonde_hair, yellow_eyes, braid, smile, hair_rings, crown_braid, blush, medium_hair`
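A minimal sketch for fetching the recommended step-4000 files with `huggingface_hub` (how the embedding and LoRA are then registered depends on your generation front end, e.g. a Web UI):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (pt) and the LoRA (safetensors) for step 4000.
embedding_path = hf_hub_download(
    "CyberHarem/ohara_mari_lovelivesunshine", "4000/ohara_mari_lovelivesunshine.pt"
)
lora_path = hf_hub_download(
    "CyberHarem/ohara_mari_lovelivesunshine", "4000/ohara_mari_lovelivesunshine.safetensors"
)
print(embedding_path, lora_path)
```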
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the smallest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images produced by the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.901 | [Download](7500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-7500](7500/previews/pattern_1.png) | ![pattern_2-7500](7500/previews/pattern_2.png) | ![pattern_3-7500](7500/previews/pattern_3.png) | ![bikini-7500](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) | ![free-7500](7500/previews/free.png) | ![maid-7500](7500/previews/maid.png) | ![miko-7500](7500/previews/miko.png) | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) | ![suit-7500](7500/previews/suit.png) | ![yukata-7500](7500/previews/yukata.png) |
| 7000 | 0.915 | [Download](7000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-7000](7000/previews/pattern_1.png) | ![pattern_2-7000](7000/previews/pattern_2.png) | ![pattern_3-7000](7000/previews/pattern_3.png) | ![bikini-7000](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) | ![free-7000](7000/previews/free.png) | ![maid-7000](7000/previews/maid.png) | ![miko-7000](7000/previews/miko.png) | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) | ![suit-7000](7000/previews/suit.png) | ![yukata-7000](7000/previews/yukata.png) |
| 6500 | 0.914 | [Download](6500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-6500](6500/previews/pattern_1.png) | ![pattern_2-6500](6500/previews/pattern_2.png) | ![pattern_3-6500](6500/previews/pattern_3.png) | ![bikini-6500](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) | ![free-6500](6500/previews/free.png) | ![maid-6500](6500/previews/maid.png) | ![miko-6500](6500/previews/miko.png) | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) | ![suit-6500](6500/previews/suit.png) | ![yukata-6500](6500/previews/yukata.png) |
| 6000 | 0.919 | [Download](6000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-6000](6000/previews/pattern_1.png) | ![pattern_2-6000](6000/previews/pattern_2.png) | ![pattern_3-6000](6000/previews/pattern_3.png) | ![bikini-6000](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) | ![free-6000](6000/previews/free.png) | ![maid-6000](6000/previews/maid.png) | ![miko-6000](6000/previews/miko.png) | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) | ![suit-6000](6000/previews/suit.png) | ![yukata-6000](6000/previews/yukata.png) |
| 5500 | 0.903 | [Download](5500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-5500](5500/previews/pattern_1.png) | ![pattern_2-5500](5500/previews/pattern_2.png) | ![pattern_3-5500](5500/previews/pattern_3.png) | ![bikini-5500](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) | ![free-5500](5500/previews/free.png) | ![maid-5500](5500/previews/maid.png) | ![miko-5500](5500/previews/miko.png) | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) | ![suit-5500](5500/previews/suit.png) | ![yukata-5500](5500/previews/yukata.png) |
| 5000 | 0.932 | [Download](5000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-5000](5000/previews/pattern_1.png) | ![pattern_2-5000](5000/previews/pattern_2.png) | ![pattern_3-5000](5000/previews/pattern_3.png) | ![bikini-5000](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) | ![free-5000](5000/previews/free.png) | ![maid-5000](5000/previews/maid.png) | ![miko-5000](5000/previews/miko.png) | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) | ![suit-5000](5000/previews/suit.png) | ![yukata-5000](5000/previews/yukata.png) |
| 4500 | 0.918 | [Download](4500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-4500](4500/previews/pattern_1.png) | ![pattern_2-4500](4500/previews/pattern_2.png) | ![pattern_3-4500](4500/previews/pattern_3.png) | ![bikini-4500](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) | ![free-4500](4500/previews/free.png) | ![maid-4500](4500/previews/maid.png) | ![miko-4500](4500/previews/miko.png) | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) | ![suit-4500](4500/previews/suit.png) | ![yukata-4500](4500/previews/yukata.png) |
| **4000** | **0.956** | [**Download**](4000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-4000](4000/previews/pattern_1.png) | ![pattern_2-4000](4000/previews/pattern_2.png) | ![pattern_3-4000](4000/previews/pattern_3.png) | ![bikini-4000](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) | ![free-4000](4000/previews/free.png) | ![maid-4000](4000/previews/maid.png) | ![miko-4000](4000/previews/miko.png) | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) | ![suit-4000](4000/previews/suit.png) | ![yukata-4000](4000/previews/yukata.png) |
| 3500 | 0.929 | [Download](3500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-3500](3500/previews/pattern_1.png) | ![pattern_2-3500](3500/previews/pattern_2.png) | ![pattern_3-3500](3500/previews/pattern_3.png) | ![bikini-3500](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) | ![free-3500](3500/previews/free.png) | ![maid-3500](3500/previews/maid.png) | ![miko-3500](3500/previews/miko.png) | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) | ![suit-3500](3500/previews/suit.png) | ![yukata-3500](3500/previews/yukata.png) |
| 3000 | 0.921 | [Download](3000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) |
| 2500 | 0.911 | [Download](2500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-2500](2500/previews/pattern_1.png) | ![pattern_2-2500](2500/previews/pattern_2.png) | ![pattern_3-2500](2500/previews/pattern_3.png) | ![bikini-2500](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) | ![free-2500](2500/previews/free.png) | ![maid-2500](2500/previews/maid.png) | ![miko-2500](2500/previews/miko.png) | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) | ![suit-2500](2500/previews/suit.png) | ![yukata-2500](2500/previews/yukata.png) |
| 2000 | 0.913 | [Download](2000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![pattern_3-2000](2000/previews/pattern_3.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | ![free-2000](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) |
| 1500 | 0.855 | [Download](1500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-1500](1500/previews/pattern_1.png) | ![pattern_2-1500](1500/previews/pattern_2.png) | ![pattern_3-1500](1500/previews/pattern_3.png) | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) | ![free-1500](1500/previews/free.png) | ![maid-1500](1500/previews/maid.png) | ![miko-1500](1500/previews/miko.png) | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) | ![suit-1500](1500/previews/suit.png) | ![yukata-1500](1500/previews/yukata.png) |
| 1000 | 0.807 | [Download](1000/ohara_mari_lovelivesunshine.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | ![free-1000](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) |
| 500 | 0.765 | [Download](500/ohara_mari_lovelivesunshine.zip) | ![pattern_1-500](500/previews/pattern_1.png) | ![pattern_2-500](500/previews/pattern_2.png) | ![pattern_3-500](500/previews/pattern_3.png) | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) | ![free-500](500/previews/free.png) | ![maid-500](500/previews/maid.png) | ![miko-500](500/previews/miko.png) | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) | ![suit-500](500/previews/suit.png) | ![yukata-500](500/previews/yukata.png) |
|
asad/Diffusion-small | asad | "2024-02-01T08:38:19Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-01T08:38:19Z" | ---
license: apache-2.0
---
|
YanJiangJerry/SA-roberta-e3-w1-5-b16-w0.01-data2 | YanJiangJerry | "2023-07-14T18:19:30Z" | 118 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-14T17:48:27Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-roberta-e3-w1-5-b16-w0.01-data2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-roberta-e3-w1-5-b16-w0.01-data2
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Accuracy: 0.9021
- F1: 0.8646
- Precision: 0.8921
- Recall: 0.8388
## Model description
More information needed
## Intended uses & limitations
More information needed
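A minimal usage sketch (assuming standard `transformers` text-classification usage; the label names and their meanings are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YanJiangJerry/SA-roberta-e3-w1-5-b16-w0.01-data2",
)
# Example input; replace with text from your own domain.
print(classifier("I felt much better after taking the medication."))
```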
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2612 | 1.0 | 581 | 0.4296 | 0.9021 | 0.8721 | 0.8499 | 0.8955 |
| 0.1252 | 2.0 | 1162 | 0.7605 | 0.8977 | 0.8571 | 0.8932 | 0.8239 |
| 0.0567 | 3.0 | 1743 | 0.7680 | 0.9021 | 0.8646 | 0.8921 | 0.8388 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BazookaCow19/class-recommendation-model | BazookaCow19 | "2024-11-29T11:24:55Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-16T18:25:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
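Since no snippet is provided, here is a minimal sketch assuming standard `transformers` sequence-classification usage (the class labels and the intended input format are not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "BazookaCow19/class-recommendation-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical example input; the expected text format is not documented.
inputs = tokenizer("I enjoy solving math problems and building robots.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```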
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
remzloev/bazartv_RVC | remzloev | "2024-05-28T15:38:00Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-05-28T13:20:02Z" | ---
license: openrail
---
|
cocktailpeanut/llama.30b.zip | cocktailpeanut | "2023-03-19T07:29:07Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-03-19T07:29:07Z" | ---
license: openrail
---
|
sebajoe/batchPrompting_7b_25 | sebajoe | "2024-04-20T06:49:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-20T06:49:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camenduru/evf-sam2 | camenduru | "2024-09-17T11:39:17Z" | 6 | 0 | null | [
"safetensors",
"evf",
"arxiv:2406.20076",
"license:apache-2.0",
"region:us"
] | null | "2024-09-17T11:37:30Z" | ---
license: apache-2.0
---
## EVF-SAM
[EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model](https://huggingface.co/papers/2406.20076)
## Usage:
This repository holds the checkpoint for [EVF-SAM](https://github.com/hustvl/EVF-SAM.git).
Please refer to `inference.py` and `inference_video.py` in the source repository for detailed usage.
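To fetch this checkpoint locally before running those scripts, one option is `huggingface_hub` (a sketch; the exact script arguments are documented in the source repository):
```python
from huggingface_hub import snapshot_download

# Download all checkpoint files from this repository into a local directory.
local_dir = snapshot_download("camenduru/evf-sam2")
print(local_dir)
```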
`AutoModel.from_pretrained(...)` is not supported yet; please import the model classes from the source code. |
Peppenapo/gemmaFinetuneTESTRUNOK | Peppenapo | "2024-04-29T15:26:53Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-29T15:22:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
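In the meantime, a minimal generation sketch (an assumption that this checkpoint follows standard Gemma causal-LM usage via `transformers`; an access token and a specific prompt format may be required):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a plain text-generation pipeline.
generator = pipeline("text-generation", model="Peppenapo/gemmaFinetuneTESTRUNOK")
print(generator("Write a short greeting.", max_new_tokens=64)[0]["generated_text"])
```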
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kamalkraj/bert-base-cased-ner-conll2003 | kamalkraj | "2023-12-09T13:24:22Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-04-24T14:45:57Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
base_model: bert-base-cased
model-index:
- name: bert-base-cased-ner-conll2003
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- type: precision
value: 0.9438052359513089
name: Precision
- type: recall
value: 0.9525412319084483
name: Recall
- type: f1
value: 0.9481531116508919
name: F1
- type: accuracy
value: 0.9910634321093416
name: Accuracy
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- type: accuracy
value: 0.9116307653519484
name: Accuracy
verified: true
- type: precision
value: 0.9366103911345081
name: Precision
verified: true
- type: recall
value: 0.9262526113340186
name: Recall
verified: true
- type: f1
value: 0.9314027058794109
name: F1
verified: true
- type: loss
value: 0.4366346299648285
name: loss
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner-conll2003
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0355
- Precision: 0.9438
- Recall: 0.9525
- F1: 0.9482
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
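A minimal usage sketch with the `transformers` token-classification pipeline (the PER/ORG/LOC/MISC entity types are assumed from the CoNLL-2003 dataset):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kamalkraj/bert-base-cased-ner-conll2003",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is a company based in New York City."))
```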
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nunuzak/ppo-LunarLander-v2 | nunuzak | "2023-03-08T01:57:35Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-08T01:57:12Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.28 +/- 26.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it (filename assumed).
checkpoint = load_from_hub("nunuzak/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
none1/ppo-LunarLander-v2 | none1 | "2022-05-06T01:50:17Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-06T01:49:44Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 278.81 +/- 19.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
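In the meantime, a minimal loading and evaluation sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the saved agent (filename assumed) and evaluate it on LunarLander-v2.
checkpoint = load_from_hub("none1/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```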
|
huggingartists/gunna | huggingartists | "2021-09-15T17:15:43Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/gunna",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- huggingartists/gunna
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18e3833ac527a4bf14ddf2acef834795.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gunna</div>
<a href="https://genius.com/artists/gunna">
<div style="text-align: center; font-size: 14px;">@gunna</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Gunna.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/gunna).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/gunna")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/vcyblers/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Gunna's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3c1xymw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3c1xymw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/gunna')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/gunna")
model = AutoModelWithLMHead.from_pretrained("huggingartists/gunna")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)
[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
|
damgomz/ft_2_11e6_base_x1 | damgomz | "2024-06-20T17:26:02Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T16:33:41Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 94501.51422262192 |
| Emissions (Co2eq in kg) | 0.0571843146301403 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.115640484968489 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0984380830054476 |
| Consumed energy (kWh) | 1.2140785679739348 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.18191541487854718 |
| Emissions (Co2eq in kg) | 0.037013093070526915 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_2_11e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 2 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.690976 | 0.406776 |
| 1 | 0.308481 | 0.251671 | 0.926780 |
| 2 | 0.211442 | 0.225041 | 0.921172 |
| 3 | 0.168826 | 0.215469 | 0.926522 |
| 4 | 0.119771 | 0.243876 | 0.923814 |
| 5 | 0.081189 | 0.266942 | 0.926301 |
| 6 | 0.048614 | 0.338743 | 0.920674 |
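## How to use
A minimal inference sketch (assuming standard `transformers` sequence-classification usage with the 400-token context configured above; the class labels are not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "damgomz/ft_2_11e6_base_x1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The widget example from this card.
text = "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
inputs = tokenizer(text, truncation=True, max_length=400, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```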
|
ygmrdgan/bert-finetuned-ner_lr2e-05_bs32 | ygmrdgan | "2023-11-07T18:49:06Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-11-07T18:12:58Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: ygmrdgan/bert-finetuned-ner_lr2e-05_bs32
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ygmrdgan/bert-finetuned-ner_lr2e-05_bs32
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3152
- Validation Loss: 0.4966
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
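A minimal usage sketch for this TensorFlow checkpoint (assuming standard `transformers` token-classification usage; the entity label set is not documented in this card):
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "ygmrdgan/bert-finetuned-ner_lr2e-05_bs32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Angela Merkel visited Paris in July."))
```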
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 639, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3142 | 0.5308 | 0 |
| 0.3152 | 0.4966 | 1 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
OwOOwO/bomb3 | OwOOwO | "2024-03-31T15:01:25Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T14:59:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
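Since no snippet is given, a minimal generation sketch (assuming standard causal-LM usage via `transformers`; a release with native StableLM support is required):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OwOOwO/bomb3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain continuation; the expected chat format, if any, is not documented.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```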
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fangzhaoz/mistralv1_spectral_r8_2e4_e3 | fangzhaoz | "2024-04-15T08:45:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T08:45:13Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_spectral_r8_2e4_e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_spectral_r8_2e4_e3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
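A minimal loading sketch for this LoRA adapter with `peft` (assuming standard usage on top of the Mistral-7B-v0.1 base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repository.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fangzhaoz/mistralv1_spectral_r8_2e4_e3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```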
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Ahs2000/segformer-b0-scene-parse-150 | Ahs2000 | "2024-11-03T15:24:57Z" | 49 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-11-03T12:24:16Z" | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3049
- Mean Iou: 0.0573
- Mean Accuracy: 0.0859
- Overall Accuracy: 0.4101
- Per Category Iou: [0.030010927318135348, 0.44726327746817224, 0.00125928200111358, 0.9390098229092976, 0.38234383192498567, 0.7785783214702916, 0.0, 0.0, 0.0, 0.0, 0.3425946024166124, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan]
- Per Category Accuracy: [0.06397920795118507, 0.8896496979508158, 0.1742260619150468, 0.972699587340297, 0.5473868702844434, 0.9668470205567394, 0.0, nan, 0.0, 0.0, 0.4206481846498948, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
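A minimal inference sketch (assuming the standard SegFormer semantic-segmentation workflow in `transformers`):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "Ahs2000/segformer-b0-scene-parse-150"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.jpg")  # any RGB scene image (path is a placeholder)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]      # per-pixel class indices
print(pred.shape)
```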
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.9227 | 4.0 | 20 | 4.0114 | 0.0393 | 0.0661 | 0.3227 | [0.06495002035888996, 0.3616824052477034, 0.0012751862654151842, 0.9383487415721895, 0.003642086330935252, 0.6238042624952752, 0.0, 0.0, 0.0, 0.0, 0.04837538868243426, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.19314162536793672, 0.7487100796609799, 0.1717062634989201, 0.9751683375280062, 0.0036764320802740043, 0.9665451793252272, 0.0, nan, 0.0, 0.0, 0.04958273876615048, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.5704 | 8.0 | 40 | 3.8278 | 0.0440 | 0.0697 | 0.3314 | [0.05867716018346553, 0.3732525545076808, 0.0016563196625038951, 0.940859590195372, 0.06871724092604459, 0.6723288671507391, 0.0, 0.0, 0.0, 0.0, 0.08217889152322527, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.1560358676485134, 0.7765117303839916, 0.24874010079193665, 0.9725133799052144, 0.07300842472042707, 0.9623464326421018, 0.0, nan, 0.0, 0.0, 0.08583421708688917, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.4495 | 12.0 | 60 | 3.6593 | 0.0513 | 0.0810 | 0.3724 | [0.04013217032326797, 0.37378386572223904, 0.002132418179570002, 0.9445812374687819, 0.25007496607970453, 0.7221795390214315, 0.0, 0.0, 0.0, 0.0, 0.23447140247510742, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.0822302337400107, 0.8900590972588998, 0.3538516918646508, 0.9678041338050588, 0.29407965253701746, 0.9655045028404612, 0.0, nan, 0.0, 0.0, 0.2529996724096767, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.7922 | 16.0 | 80 | 3.5772 | 0.0562 | 0.0861 | 0.4024 | [0.05052749951447491, 0.4096836982285473, 0.0020946539981145464, 0.9437682003494468, 0.3363278034572279, 0.7582318912588282, 0.0, 0.0, 0.0, 0.0, 0.30894883649841426, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.11003482395314967, 0.9123416702866732, 0.2847372210223182, 0.9733543167088137, 0.44507871335351357, 0.9624996058043618, 0.0, nan, 0.0, 0.0, 0.35915004192045663, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.0411 | 20.0 | 100 | 3.4623 | 0.0573 | 0.0884 | 0.4111 | [0.03856540801747222, 0.4218826987563737, 0.0022877904088786706, 0.9429566227457791, 0.3573942676941075, 0.7436237815621519, 0.0, 0.0, 0.0, 0.0, 0.3563730326521024, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.08046345531380782, 0.9365587331747822, 0.29013678905687545, 0.9654825475579796, 0.49064831592876174, 0.9716043987728127, 0.0, nan, 0.0, 0.0, 0.42117010821585427, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.8443 | 24.0 | 120 | 3.5053 | 0.0582 | 0.0870 | 0.3918 | [0.03664342772335264, 0.4283838963956857, 0.0022737335646281494, 0.9405355721043511, 0.31750602659336585, 0.7857576325981176, 0.0, 0.0, 0.0, 0.0, 0.3987358862297607, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.08017012916582819, 0.8485128804522769, 0.3826493880489561, 0.9649809888215473, 0.40650934470129424, 0.9624770803393236, 0.0, nan, 0.0, 0.0, 0.4427466505277536, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.7681 | 28.0 | 140 | 3.4122 | 0.0586 | 0.0878 | 0.4040 | [0.03312422685035546, 0.432634739953462, 0.001900827521034813, 0.9382290337556315, 0.3677854556624332, 0.7851506126960462, 0.0, 0.0, 0.0, 0.0, 0.37191689693992425, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06676239558782901, 0.8599666855219529, 0.31569474442044637, 0.9650800992305428, 0.5309075166102808, 0.9632474512436309, 0.0, nan, 0.0, 0.0, 0.42372420226203894, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.445 | 32.0 | 160 | 3.3460 | 0.0598 | 0.0888 | 0.4131 | [0.032548051964298, 0.43766520298234657, 0.001747089037591831, 0.9406653786124988, 0.36845328619107537, 0.7891234460485762, 0.0, 0.0, 0.0, 0.0, 0.42112342504840067, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.07099516011855833, 0.8890027845403321, 0.2458603311735061, 0.9738438620623374, 0.49991446098198794, 0.9642926328214046, 0.0, nan, 0.0, 0.0, 0.5302021620961339, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.5309 | 36.0 | 180 | 3.3812 | 0.0577 | 0.0881 | 0.4050 | [0.03263227841514087, 0.42862312848082507, 0.001677291655955992, 0.9421507343303059, 0.3708547174798877, 0.7915861057403827, 0.0, 0.0, 0.0, 0.0, 0.43330380041967825, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06951147553284741, 0.8180048190361303, 0.2818574514038877, 0.9714201620605354, 0.5192113651678136, 0.962801447035874, 0.0, nan, 0.0, 0.0, 0.5159381020860285, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.4341 | 40.0 | 200 | 3.2711 | 0.0599 | 0.0898 | 0.4169 | [0.0288167612906008, 0.4413618354516986, 0.0018055928611931836, 0.9411833342570688, 0.38248812801419096, 0.7946395385141464, 0.0, 0.0, 0.0, 0.0, 0.4024118202813353, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.059258703430210544, 0.8769676949568881, 0.2613390928725702, 0.9690234921702777, 0.578170442603319, 0.963157349383478, 0.0, nan, 0.0, 0.0, 0.5108243616152979, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.5026 | 44.0 | 220 | 3.3146 | 0.0584 | 0.0870 | 0.4050 | [0.029417411453161204, 0.44352154176953096, 0.0017935972748444507, 0.9412199597905367, 0.3778803290010674, 0.7877165979112559, 0.0, 0.0, 0.0, 0.0, 0.33971275980155, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.060623011095232084, 0.870254796378535, 0.2786177105831533, 0.967356635291715, 0.5463359623488665, 0.966040608908371, 0.0, nan, 0.0, 0.0, 0.3976779953693165, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.3054 | 48.0 | 240 | 3.3049 | 0.0573 | 0.0859 | 0.4101 | [0.030010927318135348, 0.44726327746817224, 0.00125928200111358, 0.9390098229092976, 0.38234383192498567, 0.7785783214702916, 0.0, 0.0, 0.0, 0.0, 0.3425946024166124, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06397920795118507, 0.8896496979508158, 0.1742260619150468, 0.972699587340297, 0.5473868702844434, 0.9668470205567394, 0.0, nan, 0.0, 0.0, 0.4206481846498948, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Xu-Ouyang/pythia-160m-deduped-int2-step2000-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-13T11:16:20Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-13T11:16:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
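Since the card itself leaves this section blank, the following is only a generic, unverified sketch of loading this 2-bit GPTQ checkpoint with 🤗 Transformers (it assumes a GPTQ backend such as `auto-gptq`/`optimum` is installed; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-160m-deduped-int2-step2000-GPTQ-wikitext2-uva"

# Loading a GPTQ-quantized checkpoint requires a GPTQ backend (e.g. auto-gptq via optimum).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```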
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
flammenai/flammen4-mistral-7B | flammenai | "2024-03-09T22:41:34Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gille/StrangeMerges_30-7B-slerp",
"base_model:merge:Gille/StrangeMerges_30-7B-slerp",
"base_model:nbeerbower/Flammen-Trismegistus-7B",
"base_model:merge:nbeerbower/Flammen-Trismegistus-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-09T22:35:13Z" | ---
license: apache-2.0
base_model:
- nbeerbower/Flammen-Trismegistus-7B
- Gille/StrangeMerges_30-7B-slerp
library_name: transformers
tags:
- mergekit
- merge
---
# flammen4-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
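SLERP interpolates between the two parent checkpoints along the arc between their weight tensors rather than averaging them linearly. A simplified, illustrative sketch of the operation on a single tensor (this is not mergekit's exact implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between directions
    if omega.abs() < eps:                # nearly parallel: fall back to linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```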
### Models Merged
The following models were included in the merge:
* [nbeerbower/Flammen-Trismegistus-7B](https://huggingface.co/nbeerbower/Flammen-Trismegistus-7B)
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Flammen-Trismegistus-7B
layer_range: [0, 32]
- model: Gille/StrangeMerges_30-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/Flammen-Trismegistus-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
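The `t` schedule above varies the interpolation weight with layer depth and treats the self-attention and MLP sub-blocks differently, defaulting to an even 0.5 blend elsewhere. The resulting checkpoint is a standard Mistral-architecture model, so it can be loaded like any other 🤗 Transformers causal LM; a minimal sketch (prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/flammen4-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Tell me about Hermes Trismegistus.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```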
|
MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k16_task1_organization | MayBashendy | "2024-12-08T23:40:31Z" | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-08T23:24:44Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k16_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k16_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9719
- Qwk: 0.5938
- Mse: 0.9719
- Rmse: 0.9858
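Qwk is presumably Cohen's kappa with quadratic weights on the ordinal scores, and Rmse is the square root of Mse; the card does not include its evaluation code, so the following is only a sketch of how such numbers are typically computed, with placeholder labels:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Placeholder gold / predicted ordinal scores standing in for the real evaluation split.
y_true = np.array([3, 2, 4, 1, 3, 2])
y_pred = np.array([3, 3, 4, 2, 2, 2])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
```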
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0270 | 2 | 5.2625 | -0.0098 | 5.2625 | 2.2940 |
| No log | 0.0541 | 4 | 3.1414 | 0.0781 | 3.1414 | 1.7724 |
| No log | 0.0811 | 6 | 2.0213 | 0.1164 | 2.0213 | 1.4217 |
| No log | 0.1081 | 8 | 1.6582 | 0.1040 | 1.6582 | 1.2877 |
| No log | 0.1351 | 10 | 1.5226 | 0.0839 | 1.5226 | 1.2339 |
| No log | 0.1622 | 12 | 1.3814 | 0.1737 | 1.3814 | 1.1753 |
| No log | 0.1892 | 14 | 1.5867 | 0.0398 | 1.5867 | 1.2596 |
| No log | 0.2162 | 16 | 1.6973 | 0.0271 | 1.6973 | 1.3028 |
| No log | 0.2432 | 18 | 1.5629 | 0.0757 | 1.5629 | 1.2502 |
| No log | 0.2703 | 20 | 1.3762 | 0.1693 | 1.3762 | 1.1731 |
| No log | 0.2973 | 22 | 1.2822 | 0.2629 | 1.2822 | 1.1324 |
| No log | 0.3243 | 24 | 1.1827 | 0.2335 | 1.1827 | 1.0875 |
| No log | 0.3514 | 26 | 1.1192 | 0.3408 | 1.1192 | 1.0579 |
| No log | 0.3784 | 28 | 1.1288 | 0.3275 | 1.1288 | 1.0625 |
| No log | 0.4054 | 30 | 1.0919 | 0.3000 | 1.0919 | 1.0450 |
| No log | 0.4324 | 32 | 1.2183 | 0.3367 | 1.2183 | 1.1038 |
| No log | 0.4595 | 34 | 1.2235 | 0.3341 | 1.2235 | 1.1061 |
| No log | 0.4865 | 36 | 1.0933 | 0.3683 | 1.0933 | 1.0456 |
| No log | 0.5135 | 38 | 1.0721 | 0.3689 | 1.0721 | 1.0354 |
| No log | 0.5405 | 40 | 1.0867 | 0.3513 | 1.0867 | 1.0424 |
| No log | 0.5676 | 42 | 1.0980 | 0.3788 | 1.0980 | 1.0478 |
| No log | 0.5946 | 44 | 1.1128 | 0.3353 | 1.1128 | 1.0549 |
| No log | 0.6216 | 46 | 1.1470 | 0.1629 | 1.1470 | 1.0710 |
| No log | 0.6486 | 48 | 1.1757 | 0.1334 | 1.1757 | 1.0843 |
| No log | 0.6757 | 50 | 1.1721 | 0.1334 | 1.1721 | 1.0826 |
| No log | 0.7027 | 52 | 1.0801 | 0.3083 | 1.0801 | 1.0393 |
| No log | 0.7297 | 54 | 1.0158 | 0.4266 | 1.0158 | 1.0079 |
| No log | 0.7568 | 56 | 1.0456 | 0.4664 | 1.0456 | 1.0225 |
| No log | 0.7838 | 58 | 1.0692 | 0.4113 | 1.0692 | 1.0340 |
| No log | 0.8108 | 60 | 0.9997 | 0.4574 | 0.9997 | 0.9999 |
| No log | 0.8378 | 62 | 0.9798 | 0.4411 | 0.9798 | 0.9899 |
| No log | 0.8649 | 64 | 0.9569 | 0.4030 | 0.9569 | 0.9782 |
| No log | 0.8919 | 66 | 0.9134 | 0.4555 | 0.9134 | 0.9557 |
| No log | 0.9189 | 68 | 0.9138 | 0.4659 | 0.9138 | 0.9559 |
| No log | 0.9459 | 70 | 1.0369 | 0.4453 | 1.0369 | 1.0183 |
| No log | 0.9730 | 72 | 1.0050 | 0.4451 | 1.0050 | 1.0025 |
| No log | 1.0 | 74 | 0.8375 | 0.4842 | 0.8375 | 0.9151 |
| No log | 1.0270 | 76 | 0.8823 | 0.4761 | 0.8823 | 0.9393 |
| No log | 1.0541 | 78 | 0.9597 | 0.5131 | 0.9597 | 0.9797 |
| No log | 1.0811 | 80 | 0.9042 | 0.5595 | 0.9042 | 0.9509 |
| No log | 1.1081 | 82 | 0.8496 | 0.6109 | 0.8496 | 0.9217 |
| No log | 1.1351 | 84 | 0.8806 | 0.5784 | 0.8806 | 0.9384 |
| No log | 1.1622 | 86 | 0.8994 | 0.6073 | 0.8994 | 0.9484 |
| No log | 1.1892 | 88 | 0.9758 | 0.5693 | 0.9758 | 0.9878 |
| No log | 1.2162 | 90 | 1.0179 | 0.5154 | 1.0179 | 1.0089 |
| No log | 1.2432 | 92 | 0.9529 | 0.5348 | 0.9529 | 0.9761 |
| No log | 1.2703 | 94 | 0.8386 | 0.5595 | 0.8386 | 0.9158 |
| No log | 1.2973 | 96 | 0.8038 | 0.5649 | 0.8038 | 0.8966 |
| No log | 1.3243 | 98 | 0.8687 | 0.5384 | 0.8687 | 0.9320 |
| No log | 1.3514 | 100 | 0.7965 | 0.5801 | 0.7965 | 0.8925 |
| No log | 1.3784 | 102 | 0.7695 | 0.6263 | 0.7695 | 0.8772 |
| No log | 1.4054 | 104 | 0.8306 | 0.6033 | 0.8306 | 0.9114 |
| No log | 1.4324 | 106 | 0.8712 | 0.6062 | 0.8712 | 0.9334 |
| No log | 1.4595 | 108 | 0.8975 | 0.6229 | 0.8975 | 0.9474 |
| No log | 1.4865 | 110 | 0.8995 | 0.6320 | 0.8995 | 0.9484 |
| No log | 1.5135 | 112 | 0.8404 | 0.6480 | 0.8404 | 0.9167 |
| No log | 1.5405 | 114 | 0.8390 | 0.6552 | 0.8390 | 0.9160 |
| No log | 1.5676 | 116 | 0.7836 | 0.6655 | 0.7836 | 0.8852 |
| No log | 1.5946 | 118 | 0.7839 | 0.6991 | 0.7839 | 0.8854 |
| No log | 1.6216 | 120 | 1.0192 | 0.5668 | 1.0192 | 1.0095 |
| No log | 1.6486 | 122 | 1.2261 | 0.5312 | 1.2261 | 1.1073 |
| No log | 1.6757 | 124 | 1.2190 | 0.5234 | 1.2190 | 1.1041 |
| No log | 1.7027 | 126 | 0.9818 | 0.6320 | 0.9818 | 0.9908 |
| No log | 1.7297 | 128 | 0.9525 | 0.6401 | 0.9525 | 0.9759 |
| No log | 1.7568 | 130 | 1.1336 | 0.5127 | 1.1336 | 1.0647 |
| No log | 1.7838 | 132 | 1.3577 | 0.4011 | 1.3577 | 1.1652 |
| No log | 1.8108 | 134 | 1.3193 | 0.4090 | 1.3193 | 1.1486 |
| No log | 1.8378 | 136 | 1.0441 | 0.5615 | 1.0441 | 1.0218 |
| No log | 1.8649 | 138 | 0.9320 | 0.6487 | 0.9320 | 0.9654 |
| No log | 1.8919 | 140 | 1.0070 | 0.5975 | 1.0070 | 1.0035 |
| No log | 1.9189 | 142 | 1.2195 | 0.4796 | 1.2195 | 1.1043 |
| No log | 1.9459 | 144 | 1.2984 | 0.4031 | 1.2984 | 1.1395 |
| No log | 1.9730 | 146 | 1.0953 | 0.5339 | 1.0953 | 1.0466 |
| No log | 2.0 | 148 | 0.9263 | 0.6280 | 0.9263 | 0.9624 |
| No log | 2.0270 | 150 | 0.9394 | 0.6280 | 0.9394 | 0.9692 |
| No log | 2.0541 | 152 | 1.2203 | 0.4734 | 1.2203 | 1.1047 |
| No log | 2.0811 | 154 | 1.4484 | 0.4627 | 1.4484 | 1.2035 |
| No log | 2.1081 | 156 | 1.3119 | 0.4760 | 1.3119 | 1.1454 |
| No log | 2.1351 | 158 | 1.2366 | 0.5134 | 1.2366 | 1.1120 |
| No log | 2.1622 | 160 | 1.2309 | 0.5150 | 1.2309 | 1.1095 |
| No log | 2.1892 | 162 | 1.3679 | 0.5026 | 1.3679 | 1.1696 |
| No log | 2.2162 | 164 | 1.5282 | 0.4815 | 1.5282 | 1.2362 |
| No log | 2.2432 | 166 | 1.5263 | 0.4815 | 1.5263 | 1.2354 |
| No log | 2.2703 | 168 | 1.3866 | 0.4933 | 1.3866 | 1.1776 |
| No log | 2.2973 | 170 | 1.1684 | 0.5198 | 1.1684 | 1.0809 |
| No log | 2.3243 | 172 | 1.1582 | 0.4999 | 1.1582 | 1.0762 |
| No log | 2.3514 | 174 | 1.2508 | 0.4641 | 1.2508 | 1.1184 |
| No log | 2.3784 | 176 | 1.0980 | 0.5310 | 1.0980 | 1.0479 |
| No log | 2.4054 | 178 | 0.8573 | 0.5712 | 0.8573 | 0.9259 |
| No log | 2.4324 | 180 | 0.7984 | 0.5828 | 0.7984 | 0.8936 |
| No log | 2.4595 | 182 | 0.8931 | 0.5827 | 0.8931 | 0.9450 |
| No log | 2.4865 | 184 | 1.0443 | 0.5232 | 1.0443 | 1.0219 |
| No log | 2.5135 | 186 | 1.3070 | 0.4361 | 1.3070 | 1.1433 |
| No log | 2.5405 | 188 | 1.3619 | 0.4391 | 1.3619 | 1.1670 |
| No log | 2.5676 | 190 | 1.1913 | 0.4966 | 1.1913 | 1.0914 |
| No log | 2.5946 | 192 | 0.9667 | 0.5891 | 0.9667 | 0.9832 |
| No log | 2.6216 | 194 | 0.9073 | 0.6575 | 0.9073 | 0.9525 |
| No log | 2.6486 | 196 | 0.9993 | 0.5704 | 0.9993 | 0.9996 |
| No log | 2.6757 | 198 | 1.2759 | 0.4617 | 1.2759 | 1.1296 |
| No log | 2.7027 | 200 | 1.4216 | 0.4315 | 1.4216 | 1.1923 |
| No log | 2.7297 | 202 | 1.3947 | 0.4220 | 1.3947 | 1.1810 |
| No log | 2.7568 | 204 | 1.2783 | 0.4573 | 1.2783 | 1.1306 |
| No log | 2.7838 | 206 | 1.1896 | 0.5081 | 1.1896 | 1.0907 |
| No log | 2.8108 | 208 | 1.1863 | 0.5198 | 1.1863 | 1.0892 |
| No log | 2.8378 | 210 | 1.2174 | 0.5150 | 1.2174 | 1.1034 |
| No log | 2.8649 | 212 | 1.2754 | 0.4674 | 1.2754 | 1.1294 |
| No log | 2.8919 | 214 | 1.2375 | 0.4999 | 1.2375 | 1.1124 |
| No log | 2.9189 | 216 | 1.1681 | 0.5324 | 1.1681 | 1.0808 |
| No log | 2.9459 | 218 | 1.0497 | 0.5607 | 1.0497 | 1.0246 |
| No log | 2.9730 | 220 | 1.0573 | 0.5390 | 1.0573 | 1.0283 |
| No log | 3.0 | 222 | 1.2281 | 0.5195 | 1.2281 | 1.1082 |
| No log | 3.0270 | 224 | 1.2030 | 0.5191 | 1.2030 | 1.0968 |
| No log | 3.0541 | 226 | 0.9654 | 0.5490 | 0.9654 | 0.9825 |
| No log | 3.0811 | 228 | 0.7465 | 0.6407 | 0.7465 | 0.8640 |
| No log | 3.1081 | 230 | 0.6862 | 0.6551 | 0.6862 | 0.8284 |
| No log | 3.1351 | 232 | 0.6838 | 0.6786 | 0.6838 | 0.8269 |
| No log | 3.1622 | 234 | 0.7781 | 0.6640 | 0.7781 | 0.8821 |
| No log | 3.1892 | 236 | 0.9719 | 0.6023 | 0.9719 | 0.9858 |
| No log | 3.2162 | 238 | 1.1987 | 0.5577 | 1.1987 | 1.0948 |
| No log | 3.2432 | 240 | 1.3295 | 0.5177 | 1.3295 | 1.1530 |
| No log | 3.2703 | 242 | 1.1957 | 0.5286 | 1.1957 | 1.0935 |
| No log | 3.2973 | 244 | 1.0864 | 0.5579 | 1.0864 | 1.0423 |
| No log | 3.3243 | 246 | 1.1123 | 0.5424 | 1.1123 | 1.0547 |
| No log | 3.3514 | 248 | 1.1538 | 0.5252 | 1.1538 | 1.0742 |
| No log | 3.3784 | 250 | 1.2171 | 0.5142 | 1.2171 | 1.1032 |
| No log | 3.4054 | 252 | 1.1804 | 0.5317 | 1.1804 | 1.0864 |
| No log | 3.4324 | 254 | 1.0069 | 0.5607 | 1.0069 | 1.0034 |
| No log | 3.4595 | 256 | 0.8726 | 0.6018 | 0.8726 | 0.9341 |
| No log | 3.4865 | 258 | 0.9053 | 0.5978 | 0.9053 | 0.9515 |
| No log | 3.5135 | 260 | 0.9145 | 0.5974 | 0.9145 | 0.9563 |
| No log | 3.5405 | 262 | 1.0277 | 0.5655 | 1.0277 | 1.0137 |
| No log | 3.5676 | 264 | 1.2811 | 0.5231 | 1.2811 | 1.1319 |
| No log | 3.5946 | 266 | 1.3944 | 0.5064 | 1.3944 | 1.1808 |
| No log | 3.6216 | 268 | 1.2908 | 0.5259 | 1.2908 | 1.1361 |
| No log | 3.6486 | 270 | 0.9969 | 0.5790 | 0.9969 | 0.9985 |
| No log | 3.6757 | 272 | 0.7440 | 0.6930 | 0.7440 | 0.8625 |
| No log | 3.7027 | 274 | 0.6873 | 0.7428 | 0.6873 | 0.8290 |
| No log | 3.7297 | 276 | 0.6817 | 0.7402 | 0.6817 | 0.8256 |
| No log | 3.7568 | 278 | 0.7667 | 0.6528 | 0.7667 | 0.8756 |
| No log | 3.7838 | 280 | 0.9435 | 0.5853 | 0.9435 | 0.9713 |
| No log | 3.8108 | 282 | 1.1044 | 0.5493 | 1.1044 | 1.0509 |
| No log | 3.8378 | 284 | 1.0567 | 0.5586 | 1.0567 | 1.0280 |
| No log | 3.8649 | 286 | 0.8623 | 0.6223 | 0.8623 | 0.9286 |
| No log | 3.8919 | 288 | 0.8101 | 0.6605 | 0.8101 | 0.9001 |
| No log | 3.9189 | 290 | 0.7880 | 0.6543 | 0.7880 | 0.8877 |
| No log | 3.9459 | 292 | 0.8041 | 0.6523 | 0.8041 | 0.8967 |
| No log | 3.9730 | 294 | 0.8114 | 0.6523 | 0.8114 | 0.9008 |
| No log | 4.0 | 296 | 0.7456 | 0.6838 | 0.7456 | 0.8635 |
| No log | 4.0270 | 298 | 0.7581 | 0.6625 | 0.7581 | 0.8707 |
| No log | 4.0541 | 300 | 0.8289 | 0.6297 | 0.8289 | 0.9104 |
| No log | 4.0811 | 302 | 0.9533 | 0.6149 | 0.9533 | 0.9764 |
| No log | 4.1081 | 304 | 1.0496 | 0.6149 | 1.0496 | 1.0245 |
| No log | 4.1351 | 306 | 1.0085 | 0.6164 | 1.0085 | 1.0042 |
| No log | 4.1622 | 308 | 0.9166 | 0.6286 | 0.9166 | 0.9574 |
| No log | 4.1892 | 310 | 0.8987 | 0.6251 | 0.8987 | 0.9480 |
| No log | 4.2162 | 312 | 0.9373 | 0.6141 | 0.9373 | 0.9681 |
| No log | 4.2432 | 314 | 0.9679 | 0.6000 | 0.9679 | 0.9838 |
| No log | 4.2703 | 316 | 0.9875 | 0.5941 | 0.9875 | 0.9937 |
| No log | 4.2973 | 318 | 0.9111 | 0.6160 | 0.9111 | 0.9545 |
| No log | 4.3243 | 320 | 0.7843 | 0.6729 | 0.7843 | 0.8856 |
| No log | 4.3514 | 322 | 0.6740 | 0.7070 | 0.6740 | 0.8210 |
| No log | 4.3784 | 324 | 0.6754 | 0.7116 | 0.6754 | 0.8218 |
| No log | 4.4054 | 326 | 0.7752 | 0.6797 | 0.7752 | 0.8805 |
| No log | 4.4324 | 328 | 1.0256 | 0.6254 | 1.0256 | 1.0127 |
| No log | 4.4595 | 330 | 1.1731 | 0.5975 | 1.1731 | 1.0831 |
| No log | 4.4865 | 332 | 1.2595 | 0.5914 | 1.2595 | 1.1223 |
| No log | 4.5135 | 334 | 1.2075 | 0.5914 | 1.2075 | 1.0989 |
| No log | 4.5405 | 336 | 1.0271 | 0.6263 | 1.0271 | 1.0135 |
| No log | 4.5676 | 338 | 0.8912 | 0.6449 | 0.8912 | 0.9441 |
| No log | 4.5946 | 340 | 0.8133 | 0.6716 | 0.8133 | 0.9018 |
| No log | 4.6216 | 342 | 0.8276 | 0.6609 | 0.8276 | 0.9097 |
| No log | 4.6486 | 344 | 0.9569 | 0.6131 | 0.9569 | 0.9782 |
| No log | 4.6757 | 346 | 1.0398 | 0.5859 | 1.0398 | 1.0197 |
| No log | 4.7027 | 348 | 1.0992 | 0.5774 | 1.0992 | 1.0484 |
| No log | 4.7297 | 350 | 1.1006 | 0.5688 | 1.1006 | 1.0491 |
| No log | 4.7568 | 352 | 1.0154 | 0.5870 | 1.0154 | 1.0077 |
| No log | 4.7838 | 354 | 0.9228 | 0.6300 | 0.9228 | 0.9606 |
| No log | 4.8108 | 356 | 0.8612 | 0.6355 | 0.8612 | 0.9280 |
| No log | 4.8378 | 358 | 0.8156 | 0.6784 | 0.8156 | 0.9031 |
| No log | 4.8649 | 360 | 0.8143 | 0.6743 | 0.8143 | 0.9024 |
| No log | 4.8919 | 362 | 0.8630 | 0.6495 | 0.8630 | 0.9290 |
| No log | 4.9189 | 364 | 1.0263 | 0.5823 | 1.0263 | 1.0131 |
| No log | 4.9459 | 366 | 1.1079 | 0.5310 | 1.1079 | 1.0525 |
| No log | 4.9730 | 368 | 1.0855 | 0.5401 | 1.0855 | 1.0419 |
| No log | 5.0 | 370 | 0.9488 | 0.5987 | 0.9488 | 0.9741 |
| No log | 5.0270 | 372 | 0.8537 | 0.6111 | 0.8537 | 0.9240 |
| No log | 5.0541 | 374 | 0.8181 | 0.6174 | 0.8181 | 0.9045 |
| No log | 5.0811 | 376 | 0.7635 | 0.6877 | 0.7635 | 0.8738 |
| No log | 5.1081 | 378 | 0.7422 | 0.7081 | 0.7422 | 0.8615 |
| No log | 5.1351 | 380 | 0.8021 | 0.6432 | 0.8021 | 0.8956 |
| No log | 5.1622 | 382 | 0.9247 | 0.6010 | 0.9247 | 0.9616 |
| No log | 5.1892 | 384 | 1.0773 | 0.5774 | 1.0773 | 1.0379 |
| No log | 5.2162 | 386 | 1.1278 | 0.5462 | 1.1278 | 1.0620 |
| No log | 5.2432 | 388 | 1.0400 | 0.5761 | 1.0400 | 1.0198 |
| No log | 5.2703 | 390 | 0.9920 | 0.5935 | 0.9920 | 0.9960 |
| No log | 5.2973 | 392 | 1.0073 | 0.5958 | 1.0073 | 1.0036 |
| No log | 5.3243 | 394 | 1.0232 | 0.5827 | 1.0232 | 1.0115 |
| No log | 5.3514 | 396 | 1.1034 | 0.5486 | 1.1034 | 1.0504 |
| No log | 5.3784 | 398 | 1.1537 | 0.5377 | 1.1537 | 1.0741 |
| No log | 5.4054 | 400 | 1.0823 | 0.5698 | 1.0823 | 1.0403 |
| No log | 5.4324 | 402 | 0.9770 | 0.5892 | 0.9770 | 0.9885 |
| No log | 5.4595 | 404 | 0.9580 | 0.5935 | 0.9580 | 0.9788 |
| No log | 5.4865 | 406 | 0.9537 | 0.5857 | 0.9537 | 0.9766 |
| No log | 5.5135 | 408 | 0.8943 | 0.6088 | 0.8943 | 0.9457 |
| No log | 5.5405 | 410 | 0.7948 | 0.6433 | 0.7948 | 0.8915 |
| No log | 5.5676 | 412 | 0.7271 | 0.7230 | 0.7271 | 0.8527 |
| No log | 5.5946 | 414 | 0.6734 | 0.7427 | 0.6734 | 0.8206 |
| No log | 5.6216 | 416 | 0.6704 | 0.7240 | 0.6704 | 0.8188 |
| No log | 5.6486 | 418 | 0.7174 | 0.7219 | 0.7174 | 0.8470 |
| No log | 5.6757 | 420 | 0.8773 | 0.6454 | 0.8773 | 0.9367 |
| No log | 5.7027 | 422 | 1.0723 | 0.5712 | 1.0723 | 1.0355 |
| No log | 5.7297 | 424 | 1.1006 | 0.5694 | 1.1006 | 1.0491 |
| No log | 5.7568 | 426 | 0.9977 | 0.5660 | 0.9977 | 0.9988 |
| No log | 5.7838 | 428 | 0.8426 | 0.6359 | 0.8426 | 0.9179 |
| No log | 5.8108 | 430 | 0.7725 | 0.6594 | 0.7725 | 0.8789 |
| No log | 5.8378 | 432 | 0.7437 | 0.6909 | 0.7437 | 0.8624 |
| No log | 5.8649 | 434 | 0.7824 | 0.6565 | 0.7824 | 0.8845 |
| No log | 5.8919 | 436 | 0.8490 | 0.6375 | 0.8490 | 0.9214 |
| No log | 5.9189 | 438 | 0.8537 | 0.6127 | 0.8537 | 0.9239 |
| No log | 5.9459 | 440 | 0.8060 | 0.6548 | 0.8060 | 0.8977 |
| No log | 5.9730 | 442 | 0.7445 | 0.7199 | 0.7445 | 0.8629 |
| No log | 6.0 | 444 | 0.7422 | 0.7134 | 0.7422 | 0.8615 |
| No log | 6.0270 | 446 | 0.7960 | 0.6612 | 0.7960 | 0.8922 |
| No log | 6.0541 | 448 | 0.8989 | 0.6032 | 0.8989 | 0.9481 |
| No log | 6.0811 | 450 | 0.9832 | 0.5714 | 0.9832 | 0.9916 |
| No log | 6.1081 | 452 | 1.0664 | 0.5650 | 1.0664 | 1.0327 |
| No log | 6.1351 | 454 | 1.0988 | 0.5753 | 1.0988 | 1.0482 |
| No log | 6.1622 | 456 | 1.1171 | 0.5753 | 1.1171 | 1.0569 |
| No log | 6.1892 | 458 | 1.0639 | 0.6012 | 1.0639 | 1.0315 |
| No log | 6.2162 | 460 | 0.9678 | 0.5883 | 0.9678 | 0.9838 |
| No log | 6.2432 | 462 | 0.8216 | 0.6703 | 0.8216 | 0.9064 |
| No log | 6.2703 | 464 | 0.7327 | 0.7375 | 0.7327 | 0.8560 |
| No log | 6.2973 | 466 | 0.7059 | 0.7254 | 0.7059 | 0.8402 |
| No log | 6.3243 | 468 | 0.7113 | 0.7370 | 0.7113 | 0.8434 |
| No log | 6.3514 | 470 | 0.7680 | 0.6911 | 0.7680 | 0.8764 |
| No log | 6.3784 | 472 | 0.8547 | 0.6212 | 0.8547 | 0.9245 |
| No log | 6.4054 | 474 | 0.9265 | 0.5865 | 0.9265 | 0.9625 |
| No log | 6.4324 | 476 | 0.9803 | 0.5690 | 0.9803 | 0.9901 |
| No log | 6.4595 | 478 | 0.9649 | 0.5690 | 0.9649 | 0.9823 |
| No log | 6.4865 | 480 | 0.9280 | 0.5909 | 0.9280 | 0.9633 |
| No log | 6.5135 | 482 | 0.9098 | 0.5984 | 0.9098 | 0.9538 |
| No log | 6.5405 | 484 | 0.8872 | 0.6174 | 0.8872 | 0.9419 |
| No log | 6.5676 | 486 | 0.9075 | 0.6058 | 0.9075 | 0.9526 |
| No log | 6.5946 | 488 | 0.8781 | 0.6174 | 0.8781 | 0.9371 |
| No log | 6.6216 | 490 | 0.8601 | 0.6261 | 0.8601 | 0.9274 |
| No log | 6.6486 | 492 | 0.8638 | 0.6174 | 0.8638 | 0.9294 |
| No log | 6.6757 | 494 | 0.8446 | 0.6457 | 0.8446 | 0.9190 |
| No log | 6.7027 | 496 | 0.8596 | 0.6012 | 0.8596 | 0.9271 |
| No log | 6.7297 | 498 | 0.8866 | 0.6012 | 0.8866 | 0.9416 |
| 0.4931 | 6.7568 | 500 | 0.9294 | 0.5745 | 0.9294 | 0.9640 |
| 0.4931 | 6.7838 | 502 | 0.9266 | 0.5836 | 0.9266 | 0.9626 |
| 0.4931 | 6.8108 | 504 | 0.9103 | 0.5914 | 0.9103 | 0.9541 |
| 0.4931 | 6.8378 | 506 | 0.8644 | 0.5967 | 0.8644 | 0.9297 |
| 0.4931 | 6.8649 | 508 | 0.8366 | 0.6294 | 0.8366 | 0.9147 |
| 0.4931 | 6.8919 | 510 | 0.8090 | 0.6525 | 0.8090 | 0.8994 |
| 0.4931 | 6.9189 | 512 | 0.8221 | 0.6493 | 0.8221 | 0.9067 |
| 0.4931 | 6.9459 | 514 | 0.8440 | 0.6212 | 0.8440 | 0.9187 |
| 0.4931 | 6.9730 | 516 | 0.8342 | 0.6304 | 0.8342 | 0.9134 |
| 0.4931 | 7.0 | 518 | 0.8554 | 0.6212 | 0.8554 | 0.9249 |
| 0.4931 | 7.0270 | 520 | 0.8609 | 0.6121 | 0.8609 | 0.9279 |
| 0.4931 | 7.0541 | 522 | 0.8948 | 0.6312 | 0.8948 | 0.9460 |
| 0.4931 | 7.0811 | 524 | 0.9365 | 0.6338 | 0.9365 | 0.9677 |
| 0.4931 | 7.1081 | 526 | 0.9131 | 0.6183 | 0.9131 | 0.9556 |
| 0.4931 | 7.1351 | 528 | 0.8497 | 0.6157 | 0.8497 | 0.9218 |
| 0.4931 | 7.1622 | 530 | 0.7939 | 0.6759 | 0.7939 | 0.8910 |
| 0.4931 | 7.1892 | 532 | 0.7407 | 0.6817 | 0.7407 | 0.8606 |
| 0.4931 | 7.2162 | 534 | 0.7286 | 0.6849 | 0.7286 | 0.8536 |
| 0.4931 | 7.2432 | 536 | 0.7315 | 0.6849 | 0.7315 | 0.8553 |
| 0.4931 | 7.2703 | 538 | 0.7501 | 0.6639 | 0.7501 | 0.8661 |
| 0.4931 | 7.2973 | 540 | 0.7694 | 0.6530 | 0.7694 | 0.8772 |
| 0.4931 | 7.3243 | 542 | 0.8046 | 0.6403 | 0.8046 | 0.8970 |
| 0.4931 | 7.3514 | 544 | 0.8281 | 0.6509 | 0.8281 | 0.9100 |
| 0.4931 | 7.3784 | 546 | 0.8491 | 0.6330 | 0.8491 | 0.9215 |
| 0.4931 | 7.4054 | 548 | 0.8496 | 0.6330 | 0.8496 | 0.9217 |
| 0.4931 | 7.4324 | 550 | 0.8160 | 0.6487 | 0.8160 | 0.9033 |
| 0.4931 | 7.4595 | 552 | 0.7767 | 0.6420 | 0.7767 | 0.8813 |
| 0.4931 | 7.4865 | 554 | 0.7347 | 0.6647 | 0.7347 | 0.8571 |
| 0.4931 | 7.5135 | 556 | 0.7250 | 0.6926 | 0.7250 | 0.8515 |
| 0.4931 | 7.5405 | 558 | 0.7395 | 0.6864 | 0.7395 | 0.8599 |
| 0.4931 | 7.5676 | 560 | 0.7825 | 0.6713 | 0.7825 | 0.8846 |
| 0.4931 | 7.5946 | 562 | 0.8176 | 0.6424 | 0.8176 | 0.9042 |
| 0.4931 | 7.6216 | 564 | 0.8398 | 0.6113 | 0.8398 | 0.9164 |
| 0.4931 | 7.6486 | 566 | 0.8513 | 0.6034 | 0.8513 | 0.9227 |
| 0.4931 | 7.6757 | 568 | 0.8590 | 0.6021 | 0.8590 | 0.9268 |
| 0.4931 | 7.7027 | 570 | 0.8748 | 0.6021 | 0.8748 | 0.9353 |
| 0.4931 | 7.7297 | 572 | 0.9100 | 0.5935 | 0.9100 | 0.9539 |
| 0.4931 | 7.7568 | 574 | 0.9115 | 0.5935 | 0.9115 | 0.9547 |
| 0.4931 | 7.7838 | 576 | 0.9131 | 0.5935 | 0.9131 | 0.9555 |
| 0.4931 | 7.8108 | 578 | 0.8859 | 0.6077 | 0.8859 | 0.9412 |
| 0.4931 | 7.8378 | 580 | 0.8648 | 0.6091 | 0.8648 | 0.9299 |
| 0.4931 | 7.8649 | 582 | 0.8698 | 0.6091 | 0.8698 | 0.9326 |
| 0.4931 | 7.8919 | 584 | 0.8977 | 0.6077 | 0.8977 | 0.9475 |
| 0.4931 | 7.9189 | 586 | 0.8970 | 0.6077 | 0.8970 | 0.9471 |
| 0.4931 | 7.9459 | 588 | 0.9159 | 0.5994 | 0.9159 | 0.9570 |
| 0.4931 | 7.9730 | 590 | 0.9411 | 0.5958 | 0.9411 | 0.9701 |
| 0.4931 | 8.0 | 592 | 0.9642 | 0.5825 | 0.9642 | 0.9819 |
| 0.4931 | 8.0270 | 594 | 0.9635 | 0.5950 | 0.9635 | 0.9816 |
| 0.4931 | 8.0541 | 596 | 0.9401 | 0.5958 | 0.9401 | 0.9696 |
| 0.4931 | 8.0811 | 598 | 0.9078 | 0.6039 | 0.9078 | 0.9528 |
| 0.4931 | 8.1081 | 600 | 0.8843 | 0.6122 | 0.8843 | 0.9404 |
| 0.4931 | 8.1351 | 602 | 0.9060 | 0.6070 | 0.9060 | 0.9518 |
| 0.4931 | 8.1622 | 604 | 0.9271 | 0.6136 | 0.9271 | 0.9629 |
| 0.4931 | 8.1892 | 606 | 0.9466 | 0.6093 | 0.9466 | 0.9729 |
| 0.4931 | 8.2162 | 608 | 0.9800 | 0.6024 | 0.9800 | 0.9899 |
| 0.4931 | 8.2432 | 610 | 1.0231 | 0.5881 | 1.0231 | 1.0115 |
| 0.4931 | 8.2703 | 612 | 1.0388 | 0.5848 | 1.0388 | 1.0192 |
| 0.4931 | 8.2973 | 614 | 1.0192 | 0.5881 | 1.0192 | 1.0096 |
| 0.4931 | 8.3243 | 616 | 0.9653 | 0.6037 | 0.9653 | 0.9825 |
| 0.4931 | 8.3514 | 618 | 0.9046 | 0.6132 | 0.9046 | 0.9511 |
| 0.4931 | 8.3784 | 620 | 0.8712 | 0.6091 | 0.8712 | 0.9334 |
| 0.4931 | 8.4054 | 622 | 0.8721 | 0.6091 | 0.8721 | 0.9339 |
| 0.4931 | 8.4324 | 624 | 0.8739 | 0.6091 | 0.8739 | 0.9348 |
| 0.4931 | 8.4595 | 626 | 0.8641 | 0.6091 | 0.8641 | 0.9296 |
| 0.4931 | 8.4865 | 628 | 0.8658 | 0.6091 | 0.8658 | 0.9305 |
| 0.4931 | 8.5135 | 630 | 0.8756 | 0.6091 | 0.8756 | 0.9358 |
| 0.4931 | 8.5405 | 632 | 0.9112 | 0.6074 | 0.9112 | 0.9546 |
| 0.4931 | 8.5676 | 634 | 0.9381 | 0.6049 | 0.9381 | 0.9686 |
| 0.4931 | 8.5946 | 636 | 0.9891 | 0.5857 | 0.9891 | 0.9945 |
| 0.4931 | 8.6216 | 638 | 1.0206 | 0.5727 | 1.0206 | 1.0102 |
| 0.4931 | 8.6486 | 640 | 1.0637 | 0.5879 | 1.0637 | 1.0314 |
| 0.4931 | 8.6757 | 642 | 1.0891 | 0.5868 | 1.0891 | 1.0436 |
| 0.4931 | 8.7027 | 644 | 1.0939 | 0.5740 | 1.0939 | 1.0459 |
| 0.4931 | 8.7297 | 646 | 1.1069 | 0.5689 | 1.1069 | 1.0521 |
| 0.4931 | 8.7568 | 648 | 1.1055 | 0.5689 | 1.1055 | 1.0514 |
| 0.4931 | 8.7838 | 650 | 1.0832 | 0.5699 | 1.0832 | 1.0408 |
| 0.4931 | 8.8108 | 652 | 1.0454 | 0.5607 | 1.0454 | 1.0224 |
| 0.4931 | 8.8378 | 654 | 1.0261 | 0.5684 | 1.0261 | 1.0130 |
| 0.4931 | 8.8649 | 656 | 1.0254 | 0.5684 | 1.0254 | 1.0126 |
| 0.4931 | 8.8919 | 658 | 1.0350 | 0.5684 | 1.0350 | 1.0173 |
| 0.4931 | 8.9189 | 660 | 1.0296 | 0.5684 | 1.0296 | 1.0147 |
| 0.4931 | 8.9459 | 662 | 1.0316 | 0.5727 | 1.0316 | 1.0157 |
| 0.4931 | 8.9730 | 664 | 1.0500 | 0.5717 | 1.0500 | 1.0247 |
| 0.4931 | 9.0 | 666 | 1.0576 | 0.5674 | 1.0576 | 1.0284 |
| 0.4931 | 9.0270 | 668 | 1.0430 | 0.5717 | 1.0430 | 1.0213 |
| 0.4931 | 9.0541 | 670 | 1.0438 | 0.5717 | 1.0438 | 1.0217 |
| 0.4931 | 9.0811 | 672 | 1.0523 | 0.5717 | 1.0523 | 1.0258 |
| 0.4931 | 9.1081 | 674 | 1.0448 | 0.5717 | 1.0448 | 1.0221 |
| 0.4931 | 9.1351 | 676 | 1.0289 | 0.5717 | 1.0289 | 1.0143 |
| 0.4931 | 9.1622 | 678 | 1.0263 | 0.5727 | 1.0263 | 1.0131 |
| 0.4931 | 9.1892 | 680 | 1.0340 | 0.5717 | 1.0340 | 1.0168 |
| 0.4931 | 9.2162 | 682 | 1.0568 | 0.5717 | 1.0568 | 1.0280 |
| 0.4931 | 9.2432 | 684 | 1.0742 | 0.5674 | 1.0742 | 1.0364 |
| 0.4931 | 9.2703 | 686 | 1.0755 | 0.5674 | 1.0755 | 1.0370 |
| 0.4931 | 9.2973 | 688 | 1.0626 | 0.5717 | 1.0626 | 1.0308 |
| 0.4931 | 9.3243 | 690 | 1.0405 | 0.5717 | 1.0405 | 1.0201 |
| 0.4931 | 9.3514 | 692 | 1.0126 | 0.5727 | 1.0126 | 1.0063 |
| 0.4931 | 9.3784 | 694 | 0.9967 | 0.5932 | 0.9967 | 0.9983 |
| 0.4931 | 9.4054 | 696 | 0.9948 | 0.5857 | 0.9948 | 0.9974 |
| 0.4931 | 9.4324 | 698 | 0.9886 | 0.5857 | 0.9886 | 0.9943 |
| 0.4931 | 9.4595 | 700 | 0.9801 | 0.5958 | 0.9801 | 0.9900 |
| 0.4931 | 9.4865 | 702 | 0.9812 | 0.5958 | 0.9812 | 0.9906 |
| 0.4931 | 9.5135 | 704 | 0.9825 | 0.5958 | 0.9825 | 0.9912 |
| 0.4931 | 9.5405 | 706 | 0.9931 | 0.5814 | 0.9931 | 0.9965 |
| 0.4931 | 9.5676 | 708 | 1.0057 | 0.5727 | 1.0057 | 1.0028 |
| 0.4931 | 9.5946 | 710 | 1.0127 | 0.5727 | 1.0127 | 1.0063 |
| 0.4931 | 9.6216 | 712 | 1.0119 | 0.5727 | 1.0119 | 1.0059 |
| 0.4931 | 9.6486 | 714 | 1.0074 | 0.5727 | 1.0074 | 1.0037 |
| 0.4931 | 9.6757 | 716 | 1.0074 | 0.5727 | 1.0074 | 1.0037 |
| 0.4931 | 9.7027 | 718 | 1.0069 | 0.5727 | 1.0069 | 1.0034 |
| 0.4931 | 9.7297 | 720 | 1.0015 | 0.5814 | 1.0015 | 1.0008 |
| 0.4931 | 9.7568 | 722 | 0.9956 | 0.5814 | 0.9956 | 0.9978 |
| 0.4931 | 9.7838 | 724 | 0.9893 | 0.5825 | 0.9893 | 0.9946 |
| 0.4931 | 9.8108 | 726 | 0.9862 | 0.5825 | 0.9862 | 0.9931 |
| 0.4931 | 9.8378 | 728 | 0.9824 | 0.5825 | 0.9824 | 0.9912 |
| 0.4931 | 9.8649 | 730 | 0.9801 | 0.5915 | 0.9801 | 0.9900 |
| 0.4931 | 9.8919 | 732 | 0.9787 | 0.5915 | 0.9787 | 0.9893 |
| 0.4931 | 9.9189 | 734 | 0.9763 | 0.5847 | 0.9763 | 0.9881 |
| 0.4931 | 9.9459 | 736 | 0.9741 | 0.5847 | 0.9741 | 0.9870 |
| 0.4931 | 9.9730 | 738 | 0.9727 | 0.5847 | 0.9727 | 0.9862 |
| 0.4931 | 10.0 | 740 | 0.9719 | 0.5938 | 0.9719 | 0.9858 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Jobiniah/bible-mistral-7b | Jobiniah | "2024-01-20T07:15:29Z" | 31 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | text-generation | "2024-01-04T04:04:43Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
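The card leaves this section blank; since the repository is a PEFT adapter for `mistralai/Mistral-7B-v0.1`, a minimal, unverified sketch would load the base model first and attach the adapter on top (prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"     # base model named in this card
adapter_id = "Jobiniah/bible-mistral-7b"  # this repository (PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("In the beginning", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```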
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
QuantFactory/Mistral-NeMo-Minitron-8B-Base-GGUF | QuantFactory | "2024-08-21T18:36:13Z" | 340 | 5 | transformers | [
"transformers",
"gguf",
"arxiv:2009.03300",
"arxiv:2407.14679",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-08-21T17:51:31Z" |
---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/Mistral-NeMo-Minitron-8B-Base-GGUF
This is a quantized version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) created using llama.cpp.
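Since the files in this repo are GGUF quantizations, they are meant to be run with llama.cpp or a compatible runtime rather than with the `transformers` snippet reproduced from the original card below. A minimal sketch using the `llama-cpp-python` bindings follows; the quant filename is an assumption and should be replaced with whichever file you actually download from this repo.
```python
# Hypothetical sketch, not documented usage: fetch one GGUF quant and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "QuantFactory/Mistral-NeMo-Minitron-8B-Base-GGUF",
    "Mistral-NeMo-Minitron-8B-Base.Q4_K_M.gguf",  # assumed filename; pick the quant you want
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("The capital of France is", max_tokens=32)["choices"][0]["text"])
```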
# Original Model Card
# Mistral-NeMo-Minitron-8B-Base
## Model Overview
Mistral-NeMo-Minitron-8B-Base is a base text-to-text model that can be adopted for a variety of natural language generation tasks. It is a large language model (LLM) obtained by pruning and distilling the Mistral-NeMo 12B; specifically, we prune the embedding dimension and MLP intermediate dimension in the model. Following pruning, we perform continued training with distillation using 380 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
**Model Developer:** NVIDIA
**Model Dates:** Mistral-NeMo-Minitron-8B-Base was trained between July 24, 2024 and August 10, 2024.
## License
This model is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Model Architecture
Mistral-NeMo-Minitron-8B-Base uses a model embedding size of 4096, 32 attention heads, MLP intermediate dimension of 11520, with 40 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
**Architecture Type:** Transformer Decoder (Auto-Regressive Language Model)
**Network Architecture:** Mistral-NeMo
**Input Type(s):** Text
**Input Format(s):** String
**Input Parameters:** One Dimensional (1D)
**Other Properties Related to Input:** Works well with inputs of 8k characters or fewer.
**Output Type(s):** Text
**Output Format:** String
**Output Parameters:** 1D
**Other Properties Related to Output:** None
## Usage
Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:
```
pip install git+https://github.com/huggingface/transformers
```
We can now run inference on this model:
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Load the tokenizer and model
model_path = "nvidia/Mistral-NeMo-Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = 'cuda'
dtype = torch.bfloat16
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)
# Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)
# Generate the output
outputs = model.generate(inputs, max_length=20)
# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
## Software Integration
**Runtime Engine(s):**
* NeMo 24.05
**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Hopper
* NVIDIA Lovelace
**Operating System(s):** <br>
* Linux
## Dataset & Training
**Data Collection Method by Dataset:** Automated
**Labeling Method by Dataset:** Not Applicable
**Properties:**
The training corpus for Mistral-NeMo-Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
**Data Freshness:**
Training was done in 2024; the pretraining data has a cutoff of June 2023.
## Evaluation Results
_5-shot performance._ Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):
| Average |
| :---- |
| 69.5 |
_Zero-shot performance._ Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
| HellaSwag | Winogrande | GSM8K | ARC-Challenge | XLSum |
| :---- | :---- | :---- | :---- | :---- |
| 83.0 | 80.4 | 58.5 | 64.4 | 32.0 |
_Code generation performance._ Evaluated using [MBPP](https://github.com/google-research/google-research/tree/master/mbpp):
| Score |
| :---- |
| 43.77 |
## Inference
**Engine:** TensorRT-LLM
**Test Hardware:** NVIDIA A100
**DType:** BFloat16
## Limitations
The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## References
* [Minitron: Compact Language Models via Pruning and Knowledge Distillation](https://arxiv.org/abs/2407.14679)
* [LLM Pruning and Distillation in Practice: The Minitron Approach](https://research.nvidia.com/publication/_llm-pruning-and-distillation-practice-minitron-approach)
|
Mandur/distilbert-base-uncased-finetuned-ner | Mandur | "2023-06-02T18:48:09Z" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-06-01T21:09:52Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9284131205673759
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.932806324110672
- name: Accuracy
type: accuracy
value: 0.9839388692074285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9284
- Recall: 0.9372
- F1: 0.9328
- Accuracy: 0.9839
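The card does not include a usage snippet; a minimal sketch for trying the tagger is shown below (since it was fine-tuned on CoNLL-2003, the entity types are PER, ORG, LOC, and MISC).
```python
# Minimal sketch: run the fine-tuned NER model through a token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Mandur/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face was founded in New York City."))
```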
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2442 | 1.0 | 878 | 0.0704 | 0.9151 | 0.9211 | 0.9181 | 0.9812 |
| 0.054 | 2.0 | 1756 | 0.0621 | 0.9239 | 0.9346 | 0.9292 | 0.9830 |
| 0.0297 | 3.0 | 2634 | 0.0616 | 0.9284 | 0.9372 | 0.9328 | 0.9839 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lmqg/mt5-small-itquad-qg | lmqg | "2023-01-18T13:47:12Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"it",
"dataset:lmqg/qg_itquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-05T23:19:44Z" |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: it
datasets:
- lmqg/qg_itquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento."
example_title: "Question Generation Example 1"
- text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa."
example_title: "Question Generation Example 2"
- text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo."
example_title: "Question Generation Example 3"
model-index:
- name: lmqg/mt5-small-itquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_itquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 7.37
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 21.93
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 17.57
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 80.8
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 56.79
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer
value: 87.66
- name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer
value: 87.57
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer
value: 87.76
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer
value: 61.6
- name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer
value: 61.48
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer]
type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer
value: 61.73
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
value: 81.63
- name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
value: 82.28
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer
value: 81.04
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
value: 55.85
- name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer
value: 56.14
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer]
type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer
value: 55.6
---
# Model Card of `lmqg/mt5-small-itquad-qg`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** it
- **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="it", model="lmqg/mt5-small-itquad-qg")
# model prediction
questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg")
output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 80.8 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 22.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 14.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 10.34 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 7.37 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 17.57 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 56.79 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 21.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 87.66 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedF1Score (MoverScore) | 61.6 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (BERTScore) | 87.76 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (MoverScore) | 61.73 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (BERTScore) | 87.57 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (MoverScore) | 61.48 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-small-itquad-ae`](https://huggingface.co/lmqg/mt5-small-itquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.lmqg_mt5-small-itquad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 81.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedF1Score (MoverScore) | 55.85 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (BERTScore) | 81.04 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (MoverScore) | 55.6 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (BERTScore) | 82.28 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (MoverScore) | 56.14 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_itquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Ghali20/Zephyr_beta_5M | Ghali20 | "2023-12-16T00:17:38Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"region:us"
] | null | "2023-12-16T00:17:06Z" | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-alpha
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
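The quick-start section is empty; going only by the metadata (a PEFT adapter whose base model is `HuggingFaceH4/zephyr-7b-alpha`), a compact, hypothetical way to load it would be:
```python
# Hypothetical sketch: AutoPeftModelForCausalLM resolves the base model from the adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Ghali20/Zephyr_beta_5M", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
```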
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
ByunByun/lora_0301 | ByunByun | "2024-03-01T10:00:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-01T10:00:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/SoMix-xb-GGUF | mradermacher | "2024-06-09T19:13:33Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1",
"argilla/notus-7b-v1",
"en",
"endpoints_compatible",
"region:us"
] | null | "2024-06-09T18:34:17Z" | ---
base_model: powermove72/SoMix-xb
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1
- argilla/notus-7b-v1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/SoMix-xb
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/Experiment28M7_Inex12Yam | MaziyarPanahi | "2024-04-08T18:02:29Z" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment28M7-7B",
"base_model:merge:automerger/Experiment28M7-7B",
"base_model:automerger/Inex12Yam-7B",
"base_model:merge:automerger/Inex12Yam-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T17:49:19Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: Experiment28M7_Inex12Yam
base_model:
- automerger/Experiment28M7-7B
- automerger/Inex12Yam-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Experiment28M7_Inex12Yam
Experiment28M7_Inex12Yam is a merge of the following models:
* [automerger/Experiment28M7-7B](https://huggingface.co/automerger/Experiment28M7-7B)
* [automerger/Inex12Yam-7B](https://huggingface.co/automerger/Inex12Yam-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Experiment28M7_Inex12Yam"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ZhiyuanQiu/camembert-base-finetuned-Train_RAW_157080-dd | ZhiyuanQiu | "2022-08-13T19:41:50Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-08-13T18:09:10Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: camembert-base-finetuned-Train_RAW_157080-dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-Train_RAW_157080-dd
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2610
- Precision: 0.8933
- Recall: 0.9183
- F1: 0.9056
- Accuracy: 0.9375
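No usage example is provided; since the base model is CamemBERT (French), a minimal sketch would be the following. The label set of this fine-tune is not documented, so inspect the returned entity groups.
```python
# Minimal sketch: token classification with the fine-tuned CamemBERT model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ZhiyuanQiu/camembert-base-finetuned-Train_RAW_157080-dd",
    aggregation_strategy="simple",
)
print(tagger("Emmanuel Macron s'est rendu à Marseille mardi."))
```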
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1991 | 1.0 | 5128 | 0.1842 | 0.8684 | 0.9101 | 0.8888 | 0.9358 |
| 0.142 | 2.0 | 10256 | 0.2028 | 0.8856 | 0.9176 | 0.9013 | 0.9394 |
| 0.1187 | 3.0 | 15384 | 0.2475 | 0.8876 | 0.9160 | 0.9016 | 0.9317 |
| 0.082 | 4.0 | 20512 | 0.2610 | 0.8933 | 0.9183 | 0.9056 | 0.9375 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Xu-Ouyang/pythia-12b-deduped-int2-step86000-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-20T01:33:59Z" | 60 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-20T01:32:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/illust-possible-v25-sdxl | John6666 | "2024-12-23T06:47:17Z" | 208 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"illustrious",
"en",
"base_model:Laxhar/noobai-xl-EarlyAccess",
"base_model:finetune:Laxhar/noobai-xl-EarlyAccess",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-13T03:24:05Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- illustrious
base_model: Laxhar/sdxl_noob
---
Original model is [here](https://civitai.com/models/880866/illust-possible?modelVersionId=1054197).
This model was created by [OZn_](https://civitai.com/user/OZn_).
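The card gives no usage snippet; since the repo is tagged as a `StableDiffusionXLPipeline` checkpoint for `diffusers`, a minimal sketch might be the following (assumes a CUDA GPU; the prompt style simply follows the anime/illustrious tags above).
```python
# Hypothetical sketch: load the checkpoint with the standard SDXL pipeline and render one image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/illust-possible-v25-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "1girl, watercolor illustration, masterpiece, best quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```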
|
stvhuang/rcr-codeserver-66016878-e60b-4231-bcf6-0ca444c52f42-65464d9fc76scht_20240318T042457-ep00 | stvhuang | "2024-03-18T10:59:03Z" | 60 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-03-18T10:57:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sr5434/AlphaZero-Kuhn-Poker | sr5434 | "2024-02-27T23:09:29Z" | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | "2024-02-27T23:07:14Z" | ---
license: mit
---
I used PGX and MCTX to train AlphaZero on Kuhn Poker. It ran on a TPU v2-8 (courtesy of the TPU Research Cloud Program) for ~3.5 days.
Code can be found [here](https://github.com/sr5434/MuZero). |
SultanR/SmolTulu-1.7b-Instruct | SultanR | "2024-12-17T00:09:34Z" | 252 | 13 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Tulu3",
"Smollm",
"SLMs",
"Small",
"Huggingface",
"Allenai",
"SFT",
"DPO",
"GGUF",
"conversational",
"en",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:allenai/llama-3.1-tulu-3-8b-preference-mixture",
"arxiv:2411.15124",
"arxiv:2412.08347",
"base_model:HuggingFaceTB/SmolLM2-1.7B",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-01T16:40:35Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Tulu3
- Smollm
- SLMs
- Small
- Huggingface
- Allenai
- SFT
- DPO
- GGUF
base_model:
- HuggingFaceTB/SmolLM2-1.7B
datasets:
- allenai/tulu-3-sft-mixture
- allenai/llama-3.1-tulu-3-8b-preference-mixture
pipeline_tag: text-generation
model-index:
- name: SmolTulu-1.7b-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 65.41
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 12.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.64
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.57
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 7.89
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=SultanR/SmolTulu-1.7b-Instruct
name: Open LLM Leaderboard
---
# SmolLM2 1.7b Instruction Tuned & DPO Aligned through Tulu 3!
![SmolTulu Banner](smoltulubanner.png)
SmolTulu-1.7b-Instruct is the first model in a series of models meant to leverage [AllenAI's Tulu 3 post-training pipeline](https://arxiv.org/abs/2411.15124) to tune the [base version of Huggingface's SmolLM2-1.7b](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B)! The post-training pipeline AllenAI came up with seemed like a perfect fit to apply here.
This model achieves the highest current scores on both IFEval and GSM8K (after SmolTulu-1.7b-Reinforced) while maintaining the extremely low contamination levels of Tulu 3 and SmolLM2! I've listed the datasets used for both the SFT (supervised finetuning) and DPO (direct preference optimization) stages.
Something important to note: this model has only undergone SFT and DPO! The RLVR version is available here: [SmolTulu-1.7b-Reinforced](https://huggingface.co/SultanR/SmolTulu-1.7b-Reinforced)
## Evaluation
I ran these evaluations using [SmolLM2's evaluation code](https://github.com/huggingface/smollm/tree/main/evaluation) for a more fair comparison.
| Metric | SmolTulu-1.7b-Instruct | SmolTulu-1.7b-Reinforced | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:----------------------------|:---------------------:|:---------------------:|:---------------------:|:---------------------:|:---------------------:|:---------------------:|
| ARC (Average) | 51.5 | 51.1 | **51.7** | 41.6 | 46.2 | 43.7 |
| BBH (3-shot) | 33.8 | 33.4 | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | 51.6 | **61.0** | 48.2 | 26.8 | 42.8 | 4.6 |
| HellaSwag | 61.1 | 60.4 | **66.1** | 56.1 | 60.9 | 55.5 |
| IFEval (Average prompt/inst) | 67.7 | **69.3** | 56.7 | 53.5 | 47.4 | 23.1 |
| MMLU-Pro (MCF) | 17.4 | 17.3 | 19.3 | 12.7 | **24.2** | 11.7 |
| PIQA | 72.2 | 72.1 | **74.4** | 72.3 | 73.2 | 71.6 |
## Training Details
The model was trained using Direct Preference Optimization (DPO) with the following configuration:
- Base model: SmolLM2-1.7B with AllenAI's SFT pipeline ran
- Mixed precision: bfloat16
- Learning rate: 8e-7 with linear scheduler
- Warmup ratio: 0.1
- Training epochs: 1
- Effective batch size: 12
- Sequence length: 4096 tokens
- DPO loss: Length-normalized DPO
- DPO beta: 5.0
- Gradient checkpointing enabled
- DeepSpeed Stage 3 for memory optimization
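For readers unfamiliar with the length-normalized DPO variant listed above, the following is a rough sketch of what that loss computes; it is an illustration of the idea with beta taken from the configuration, not the actual training code.
```python
# Rough sketch of length-normalized DPO: summed sequence log-probs are divided by sequence
# length (per-token averages) before the usual comparison against the reference model.
import torch.nn.functional as F

def length_normalized_dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                               len_chosen, len_rejected, beta=5.0):
    pi_margin = pi_chosen / len_chosen - pi_rejected / len_rejected
    ref_margin = ref_chosen / len_chosen - ref_rejected / len_rejected
    return -F.logsigmoid(beta * (pi_margin - ref_margin)).mean()
```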
## Usage
Just like any Huggingface model, just run it using the transformers library:
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "SultanR/SmolTulu-1.7b-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
You can also use the model in llama.cpp through the [gguf version](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct-GGUF)!
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SultanR__SmolTulu-1.7b-Instruct)
To give a more holistic overview, I also added the Open LLM Leaderboard results, which differ a lot from the numbers produced by the script used to benchmark SmolLM2-Instruct.
As of writing this, it is the number 1 ranked model on IFEval among models under 2 billion parameters :)
| Metric |Value|
|-------------------|----:|
|Avg. |15.45|
|IFEval (0-Shot) |65.41|
|BBH (3-Shot) |12.26|
|MATH Lvl 5 (4-Shot)| 2.64|
|GPQA (0-shot) | 2.57|
|MuSR (0-shot) | 1.92|
|MMLU-PRO (5-shot) | 7.89|
## Citation
```
@misc{alrashed2024smoltuluhigherlearningrate,
title={SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs},
author={Sultan Alrashed},
year={2024},
eprint={2412.08347},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.08347},
}
```
The training methodology follows the Tulu 3 paper:
```
@article{lambert2024tulu3,
title={TÜLU 3: Pushing Frontiers in Open Language Model Post-Training},
author={Lambert, Nathan and Morrison, Jacob and Pyatkin, Valentina and others},
year={2024},
journal={arXiv preprint arXiv:2411.15124}
}
``` |
ivaan01/TFG-Mauri | ivaan01 | "2023-05-19T00:09:00Z" | 0 | 0 | null | [
"conversational",
"dataset:samhog/psychology-10k",
"region:us"
] | text-generation | "2023-05-18T23:07:24Z" | ---
datasets:
- samhog/psychology-10k
pipeline_tag: conversational
--- |
m-biriuchinskii/Creole-classifier-v1-balanced | m-biriuchinskii | "2024-04-17T07:03:56Z" | 1 | 0 | fasttext | [
"fasttext",
"language",
"text-classification",
"fr",
"region:us"
] | text-classification | "2024-04-17T06:50:48Z" | ---
language:
- fr
metrics:
- accuracy
library_name: fasttext
pipeline_tag: text-classification
tags:
- language
---
## Results
- **Number of samples:** 11853
- **Precision:** 0.669
- **Recall:** 0.669
|
TakedaAIML/section_classifier | TakedaAIML | "2024-09-17T07:38:43Z" | 53 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"text-classification",
"fr",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-09-10T06:53:05Z" | ---
license: apache-2.0
language:
- fr
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
library_name: sentence-transformers
---
# Takeda Section Classifier
Pretrained model (a fine-tuned version of [BERT Multilingual Uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased)) trained on French and English documents with supervised learning for section classification.
This work was done by the Digital Innovation Team from Belgium 🇧🇪 (LE).
## Model Description
The model classifies text into classes representing parts of reports:
* Description
* Immediate Correction
* Root Cause
* Action Plan
* Impacted Elements
## Intended uses & limitations
The model can be used for Takeda documentation; the team does not guarantee results for out-of-scope documentation.
## How to Use
You can use this model directly with a pipeline for text classification:
```python
from transformers import (
TextClassificationPipeline,
AutoTokenizer,
AutoModelForSequenceClassification,
)
tokenizer = AutoTokenizer.from_pretrained("TakedaAIML/section_classifier")
model = AutoModelForSequenceClassification.from_pretrained(
"TakedaAIML/section_classifier"
)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
prediction = pipe('this is a piece of text representing the Description section. An event occur on june 24 and ...')
``` |
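If you need the scores for all five section classes rather than just the top label, recent transformers releases let you pass `top_k=None` to the pipeline call (parameter name assumed from current versions; older releases used `return_all_scores=True`). Continuing from the snippet above:
```python
# Return the score for every section class, sorted by confidence.
all_scores = pipe(
    "this is a piece of text representing the Description section. An event occur on june 24 and ...",
    top_k=None,
)
print(all_scores)
```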
C0ttontheBunny/Catnap | C0ttontheBunny | "2024-02-01T02:53:37Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-31T18:43:52Z" | ---
license: openrail
---
|
daniel40/e377f248-fc22-49f5-a894-a420a75da0c4 | daniel40 | "2025-01-28T21:39:00Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:adapter:lmsys/vicuna-7b-v1.3",
"region:us"
] | null | "2025-01-28T21:24:09Z" | ---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e377f248-fc22-49f5-a894-a420a75da0c4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7f0c587cec1971bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f0c587cec1971bb_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/e377f248-fc22-49f5-a894-a420a75da0c4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7f0c587cec1971bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1575562f-a79f-4a26-8bf7-62d290bbfa3d
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1575562f-a79f-4a26-8bf7-62d290bbfa3d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e377f248-fc22-49f5-a894-a420a75da0c4
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9941 |
| 1.8674 | 0.0008 | 13 | 1.7657 |
| 1.7289 | 0.0015 | 26 | 1.5677 |
| 1.5153 | 0.0023 | 39 | 1.5160 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Neko-Institute-of-Science/LLaMA-7B-4bit-128g | Neko-Institute-of-Science | "2023-04-15T19:30:55Z" | 15 | 7 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-07T04:41:38Z" | ```
7B (act-order true-sequential groupsize)
wikitext2 5.677095890045166 (stock 16bit)
wikitext2 5.768329620361328 (32g)
wikitext2 5.833956718444824 (128g)
ptb-new 10.10704231262207 (stock 16bit)
ptb-new 10.273148536682129 (32g)
ptb-new 10.347890853881836 (128g)
c4-new 7.343583106994629 (stock 16bit)
c4-new 7.443920612335205 (32g)
c4-new 7.5146918296813965 (128g)
``` |
tensorblock/Teleut-7b-GGUF | tensorblock | "2024-12-03T17:29:01Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"dataset:allenai/tulu-3-sft-mixture",
"base_model:allura-org/Teleut-7b",
"base_model:quantized:allura-org/Teleut-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-03T16:42:57Z" | ---
library_name: transformers
license: apache-2.0
base_model: allura-org/Teleut-7b
datasets:
- allenai/tulu-3-sft-mixture
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## allura-org/Teleut-7b - GGUF
This repo contains GGUF format model files for [allura-org/Teleut-7b](https://huggingface.co/allura-org/Teleut-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Teleut-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [Teleut-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [Teleut-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [Teleut-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [Teleut-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Teleut-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [Teleut-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [Teleut-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Teleut-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [Teleut-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [Teleut-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [Teleut-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Teleut-7b-GGUF/blob/main/Teleut-7b-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Teleut-7b-GGUF --include "Teleut-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Teleut-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
emilykang/medner-cardiovascular_pulmonary_lora | emilykang | "2024-05-15T16:00:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-05-15T12:58:16Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medner-cardiovascular_pulmonary_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medner-cardiovascular_pulmonary_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
BroAlanTaps/GPT2-large-256-17250steps-1.2Btokens | BroAlanTaps | "2024-10-11T14:35:49Z" | 119 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-11T14:34:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Geneva-12B-GCv2-50k-GGUF | mradermacher | "2025-02-04T08:54:05Z" | 296 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"trl",
"gammacorpus",
"geneva",
"chat",
"mistral",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:rubenroy/GammaCorpus-v2-50k",
"base_model:rubenroy/Geneva-12B-GCv2-50k",
"base_model:quantized:rubenroy/Geneva-12B-GCv2-50k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-04T08:08:32Z" | ---
base_model: rubenroy/Geneva-12B-GCv2-50k
datasets:
- rubenroy/GammaCorpus-v2-50k
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- gammacorpus
- geneva
- chat
- mistral
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rubenroy/Geneva-12B-GCv2-50k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
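As a concrete starting point, a single quant from the table below can also be fetched with the `huggingface_hub` Python client (the filename is taken from the Provided Quants table; pick whichever quant fits your hardware):
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/Geneva-12B-GCv2-50k-GGUF",
    filename="Geneva-12B-GCv2-50k.Q4_K_M.gguf",
)
print(path)
```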
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Geneva-12B-GCv2-50k-GGUF/resolve/main/Geneva-12B-GCv2-50k.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.0_Seed103 | behzadnet | "2023-12-17T21:36:36Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2023-12-17T21:36:33Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
DerekTrayn/Ale | DerekTrayn | "2023-08-20T15:03:27Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-08-20T15:02:32Z" | ---
license: openrail
---
|
gf2rl/david1 | gf2rl | "2023-03-29T23:51:19Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-29T23:51:12Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: david1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 9.50 +/- 0.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bakisanlan/ppo_LunarLander_v2_bksnln | bakisanlan | "2022-12-12T23:35:56Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-12T23:35:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.79 +/- 21.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; adjust to the checkpoint actually stored in this repo.
checkpoint = load_from_hub("bakisanlan/ppo_LunarLander_v2_bksnln", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AngeT10/Totti | AngeT10 | "2023-10-11T16:51:01Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-10-11T16:48:18Z" | ---
license: openrail
---
|
teneriffa/TherapyBeagle-11B-v1-Q4_0-GGUF | teneriffa | "2024-04-08T12:01:20Z" | 23 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jerryjalapeno/nart-100k-synthetic",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-08T11:58:01Z" | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
datasets:
- jerryjalapeno/nart-100k-synthetic
---
# teneriffa/TherapyBeagle-11B-v1-Q4_0-GGUF
This model was converted to GGUF format from [`victunes/TherapyBeagle-11B-v1`](https://huggingface.co/victunes/TherapyBeagle-11B-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/victunes/TherapyBeagle-11B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo teneriffa/TherapyBeagle-11B-v1-Q4_0-GGUF --model therapybeagle-11b-v1.Q4_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo teneriffa/TherapyBeagle-11B-v1-Q4_0-GGUF --model therapybeagle-11b-v1.Q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m therapybeagle-11b-v1.Q4_0.gguf -n 128
```
|
junklivs/distilbert-base-uncased-finetuned-cola | junklivs | "2023-03-31T15:25:27Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-31T13:28:41Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5361146089547957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8228
- Matthews Correlation: 0.5361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.5480 | 0.4006 |
| 0.3496 | 2.0 | 1070 | 0.5164 | 0.4819 |
| 0.2387 | 3.0 | 1605 | 0.6022 | 0.5138 |
| 0.1779 | 4.0 | 2140 | 0.7458 | 0.5280 |
| 0.127 | 5.0 | 2675 | 0.8228 | 0.5361 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Vivian12300/Meta-Llama-3-8B-Instruct_mathqa_French_new | Vivian12300 | "2024-07-10T14:01:42Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-10T12:56:36Z" | ---
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Meta-Llama-3-8B-Instruct_mathqa_French_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_mathqa_French_new
This model was trained from scratch on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
YUNSUN7/Haneul | YUNSUN7 | "2024-05-01T07:52:13Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-01T07:51:02Z" | ---
license: apache-2.0
---
|
sail-rvc/JUNGKOOK_AI__RVC_v2_200_Epochs_ | sail-rvc | "2023-07-14T07:24:14Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:23:56Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# JUNGKOOK_AI__RVC_v2_200_Epochs_
## RVC Model
![banner](https://i.imgur.com/xocCjhH.jpg)
This model repo was automatically generated.
Date: 2023-07-14 07:24:14
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
steveice/videomae-base-finetuned-engine-subset | steveice | "2023-03-10T20:02:38Z" | 61 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-03-10T19:33:03Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-engine-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-engine-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5634
- Accuracy: 0.475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 224
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6687 | 0.25 | 57 | 2.5948 | 0.15 |
| 2.3001 | 1.25 | 114 | 2.2452 | 0.175 |
| 2.1531 | 2.25 | 171 | 1.9180 | 0.3875 |
| 1.6332 | 3.24 | 224 | 1.5634 | 0.475 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
|
sinhala-nlp/xlm-t-hasoc-hi | sinhala-nlp | "2022-11-01T20:15:31Z" | 100 | 0 | transformers | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-11-01T19:32:51Z" | ---
license: apache-2.0
---
|
aa-unh/poca-SoccerTwos | aa-unh | "2024-04-11T21:29:45Z" | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2024-04-11T21:28:02Z" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aa-unh/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
qgallouedec/tqc-Hopper-v3-1640964538 | qgallouedec | "2024-04-10T19:34:01Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"Hopper-v4",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-28T15:07:17Z" | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- Hopper-v4
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3702.73 +/- 5.94
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
|
PrunaAI/fateme-nateghi23-Llama-3-8B-Instruct-Finance-RAG-bnb-8bit-smashed | PrunaAI | "2024-12-03T23:45:53Z" | 6 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG",
"base_model:quantized:fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-12-03T23:34:32Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results
![image info](./plots.png)
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/fateme-nateghi23-Llama-3-8B-Instruct-Finance-RAG-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model fateme-nateghi23/Llama-3-8B-Instruct-Finance-RAG before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
guoyu-zhang/model_usp3_dpo9 | guoyu-zhang | "2024-04-17T08:35:27Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | "2024-04-17T08:35:16Z" | ---
license: llama2
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: model_usp3_dpo9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_usp3_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3840
- Rewards/chosen: -5.8994
- Rewards/rejected: -15.8549
- Rewards/accuracies: 0.75
- Rewards/margins: 9.9555
- Logps/rejected: -125.7216
- Logps/chosen: -114.4451
- Logits/rejected: -0.5607
- Logits/chosen: -0.5006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1194 | 2.67 | 100 | 1.1073 | 5.0437 | 2.1933 | 0.7400 | 2.8504 | -105.6681 | -102.2862 | 0.0014 | 0.0428 |
| 0.0189 | 5.33 | 200 | 2.5034 | -3.9384 | -11.8385 | 0.7000 | 7.9001 | -121.2590 | -112.2662 | -0.7943 | -0.7591 |
| 0.0521 | 8.0 | 300 | 2.6657 | 2.8593 | -3.0059 | 0.6700 | 5.8652 | -111.4450 | -104.7133 | -0.3470 | -0.2646 |
| 0.0001 | 10.67 | 400 | 2.4434 | -6.5026 | -16.5073 | 0.7400 | 10.0046 | -126.4465 | -115.1154 | -0.5717 | -0.5110 |
| 0.0 | 13.33 | 500 | 2.3881 | -5.9046 | -15.8560 | 0.75 | 9.9513 | -125.7228 | -114.4510 | -0.5605 | -0.5010 |
| 0.0 | 16.0 | 600 | 2.3960 | -5.9125 | -15.8411 | 0.75 | 9.9286 | -125.7063 | -114.4597 | -0.5602 | -0.5003 |
| 0.0 | 18.67 | 700 | 2.3936 | -5.8978 | -15.8162 | 0.75 | 9.9184 | -125.6786 | -114.4434 | -0.5604 | -0.5003 |
| 0.0 | 21.33 | 800 | 2.3929 | -5.9227 | -15.8715 | 0.75 | 9.9488 | -125.7401 | -114.4710 | -0.5609 | -0.5010 |
| 0.0 | 24.0 | 900 | 2.3975 | -5.9447 | -15.8363 | 0.75 | 9.8917 | -125.7010 | -114.4955 | -0.5609 | -0.5009 |
| 0.0 | 26.67 | 1000 | 2.3840 | -5.8994 | -15.8549 | 0.75 | 9.9555 | -125.7216 | -114.4451 | -0.5607 | -0.5006 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
VamsiPranav/sequential-training | VamsiPranav | "2023-11-22T21:01:14Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-22T20:28:49Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: sequential-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequential-training
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
DevozZ/LunarLander-v2 | DevozZ | "2023-05-21T16:06:12Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-21T15:49:21Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -58.93 +/- 81.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'LunarLander'
'seed': 42
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.001
'num_envs': 16
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'DevozZ/LunarLander-v2'
'batch_size': 2048
'minibatch_size': 512}
```
|
BookWormXtreme/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.5bpw-exl2 | BookWormXtreme | "2024-01-06T07:12:02Z" | 0 | 1 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-01-05T11:09:16Z" | ---
license: apache-2.0
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.5bpw-exl2
This is a 3.5bpw exl2 quant of DrShotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss. All credit for merging, etc goes to DrShotgun.
[Original Repo Link](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss)
## Original Model Card:
Experimental model: a LimaRP QLoRA trained at 10k context length (greater than the length of the longest LimaRP sample when tokenized via Mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Try a temperature of ~1.5-2 and a min-p of ~0.03-0.05, since Mixtral appears to be highly confident in its responses and can enter repetition loops after several thousand tokens of output.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
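For programmatic use, the same format can be assembled with a small helper. The sketch below only concatenates strings in the layout shown above; the persona, scenario, and message values are placeholders.

```python
def build_limarp_prompt(char_persona, user_persona, scenario, history, user_msg):
    """Assemble the Alpaca-style LimaRP v3 prompt and leave the final response open
    so the model completes it. `history` is a list of (user, character) message pairs."""
    prompt = (
        "### Instruction:\n"
        f"Character's Persona: {char_persona}\n"
        f"User's Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        "Play the role of Character. Taking the above information into consideration, "
        "you must engage in a roleplaying chat with User below this line. "
        "Do not write dialogues and narration for User.\n"
    )
    for past_user, past_char in history:
        prompt += f"\n### Input:\nUser: {past_user}\n\n### Response:\nCharacter: {past_char}\n"
    # End with an open response so generation continues from "Character:".
    prompt += f"\n### Input:\nUser: {user_msg}\n\n### Response:\nCharacter:"
    return prompt


print(build_limarp_prompt(
    "a stoic knight", "a curious traveler", "a chance meeting at an inn",
    history=[], user_msg="Hello there.",
))
```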
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input:
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details. |
aseratus1/214439b2-71ec-465c-b5dd-8760be6169e1 | aseratus1 | "2025-01-29T09:09:21Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T08:59:05Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 214439b2-71ec-465c-b5dd-8760be6169e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 155f72bf61c52f9c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/155f72bf61c52f9c_train_data.json
type:
field_input: title_main
field_instruction: texte
field_output: texteHtml
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aseratus1/214439b2-71ec-465c-b5dd-8760be6169e1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/155f72bf61c52f9c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d46de064-6529-4c08-8755-e14ca536003f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d46de064-6529-4c08-8755-e14ca536003f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 214439b2-71ec-465c-b5dd-8760be6169e1
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
## Model description
More information needed
## Intended uses & limitations
More information needed
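As a minimal starting point, the LoRA adapter trained here can be loaded on top of the base model with 🤗 PEFT. This is only a sketch, not an endorsed usage recipe; the example input is a placeholder, and prompts should follow the `'{instruction} {input}'` format from the axolotl config above.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM2-360M"
adapter_id = "aseratus1/214439b2-71ec-465c-b5dd-8760be6169e1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("...", return_tensors="pt")  # placeholder instruction + input text
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```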
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1953 | 0.3535 | 200 | 0.1348 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
uppaluru/distilbert-base-uncased-finetuned-ner | uppaluru | "2025-01-09T16:34:11Z" | 129 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-12-23T11:38:00Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9239082487869431
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9305270172710612
- name: Accuracy
type: accuracy
value: 0.9835575960728867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9239
- Recall: 0.9372
- F1: 0.9305
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
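As a minimal usage sketch, the checkpoint can be loaded with the 🤗 `pipeline` API for token classification; the example sentence is arbitrary.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="uppaluru/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is a company based in New York City."))
```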
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2515 | 1.0 | 878 | 0.0699 | 0.9048 | 0.9184 | 0.9116 | 0.9801 |
| 0.0527 | 2.0 | 1756 | 0.0610 | 0.9193 | 0.9341 | 0.9266 | 0.9828 |
| 0.0312 | 3.0 | 2634 | 0.0617 | 0.9239 | 0.9372 | 0.9305 | 0.9836 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
aifoundry-org/FLUX.1-schnell-Quantized | aifoundry-org | "2024-08-27T18:37:38Z" | 1,125 | 6 | null | [
"gguf",
"text-to-image",
"image-generation",
"flux",
"en",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:quantized:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-08-16T15:55:43Z" | ---
base_model: black-forest-labs/FLUX.1-schnell
license: apache-2.0
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
- image-generation
- flux
---
Quantized versions of https://huggingface.co/black-forest-labs/FLUX.1-schnell
Tools used for quantization: modded [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp), [LlamaQuantizer](https://github.com/aifoundry-org/LlamaQuantizer)
**Work in progress, use at your own risk**
## How to:
[WIP]
1. Download and build [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp)
2. Download one of the models from this repo, along with:
* Autoencoder https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
* CLIP_L https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/clip_l.safetensors
* T5XXL https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp16.safetensors
3. Enter your `stable-diffusion.cpp` dir
4. Run the following command:
```
./build/bin/sd --diffusion-model [path to gguf] --vae [path to ae.safetensors] --clip_l [path to clip_l.safetensors] --t5xxl [path to t5xxl_fp16.safetensors] -p "a frog holding a sign saying 'hi' " -o ../frog.png -v --cfg-scale 1.0 --sampling-method euler -v --seed 42 --steps 4
```
## Results:
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;"><strong>Quant type</strong></td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;"><strong>Size</strong></td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em; min-width: 256px;"><strong>Result (x0.5)</strong></td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;"><strong>Download link</strong></td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>default</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>23.8 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_default.png">
<img src="./examples/flux_frog_default.png" alt="flux_frog_default.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors">flux1-schnell.safetensors.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>FP16</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 23.8 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_F16.png">
<img src="./examples/flux_frog_F16.png" alt="flux_frog_F16.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-F16.gguf">flux1-schnell-F16.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q8_0</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 12.6 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q8_0.png">
<img src="./examples/flux_frog_Q8_0.png" alt="flux_frog_Q8_0.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q8_0.gguf">flux1-schnell-Q8_0.gguf</a>
</td>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q5_0</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 8.18 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q5_0.png">
<img src="./examples/flux_frog_Q5_0.png" alt="flux_frog_Q5_0.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q5_0.gguf">flux1-schnell-Q5_0.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q5_1</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 8.92 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q5_1.png">
<img src="./examples/flux_frog_Q5_1.png" alt="flux_frog_Q5_1.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q5_1.gguf">flux1-schnell-Q5_1.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q4_0</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 6.69 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q4_0.png">
<img src="./examples/flux_frog_Q4_0.png" alt="flux_frog_Q4_0.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q4_0.gguf">flux1-schnell-Q4_0.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q4_1</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 7.43 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q4_1.png">
<img src="./examples/flux_frog_Q4_1.png" alt="flux_frog_Q4_1.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q4_1.gguf">flux1-schnell-Q4_1.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q4_K</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 6.69 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q4_K.png">
<img src="./examples/flux_frog_Q4_K.png" alt="flux_frog_Q4_K.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q4_K.gguf">flux1-schnell-Q4_K.gguf</a>
</td>
</tr>
<tr>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong>Q2_K</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<strong> 3.9 GB</strong>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/blob/main/examples/flux_frog_Q2_K.png">
<img src="./examples/flux_frog_Q2_K.png" alt="flux_frog_Q2_K.png" style="display: block; margin: 0 auto; min-width: 256px; width: 256px; height: 256px; aspect-ratio: 1 / 1; object-fit: cover;">
</a>
</td>
<td style="border: none; padding: 10px; text-align: center; vertical-align: middle; font-size: 1.5em;">
<a href="https://huggingface.co/aifoundry-org/FLUX.1-schnell-Quantized/resolve/main/flux1-schnell-Q2_K.gguf">flux1-schnell-Q2_K.gguf</a>
</td>
</tr>
</table>
|
shisa-ai/Mistral-Nemo-Japanese-Instruct-2408-GPTQ-W4A16-gs128 | shisa-ai | "2025-01-21T18:47:49Z" | 6 | 0 | null | [
"safetensors",
"mistral",
"gptq",
"ja",
"en",
"base_model:cyberagent/Mistral-Nemo-Japanese-Instruct-2408",
"base_model:quantized:cyberagent/Mistral-Nemo-Japanese-Instruct-2408",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | "2025-01-21T17:49:38Z" | ---
license: apache-2.0
language:
- ja
- en
base_model:
- cyberagent/Mistral-Nemo-Japanese-Instruct-2408
tags:
- gptq
---
W4A16 gs128 GPTQ quant of [cyberagent/Mistral-Nemo-Japanese-Instruct-2408](https://huggingface.co/cyberagent/Mistral-Nemo-Japanese-Instruct-2408) w/ [GPTQModel](https://github.com/ModelCloud/GPTQModel) 1.7.2, using [augmxnt/ultra-orca-boros-en-ja-v1](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) as the calibration set.
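A minimal loading sketch with 🤗 Transformers is shown below. It assumes a recent transformers build with GPTQ support (optimum plus gptqmodel or auto-gptq), a CUDA GPU, and that the tokenizer ships a chat template; the example question is arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shisa-ai/Mistral-Nemo-Japanese-Instruct-2408-GPTQ-W4A16-gs128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "日本の首都はどこですか?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```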
|
gcmsrc/distilbert-base-uncased-finetuned-emotion | gcmsrc | "2023-05-21T15:46:15Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-09-05T15:27:07Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.9356480877541032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Accuracy: 0.9355
- F1: 0.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
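As a quick usage sketch, the model can be loaded with the 🤗 `pipeline` API for text classification; the example text is arbitrary.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gcmsrc/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how happy this made me!"))
```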
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5311 | 1.0 | 250 | 0.1817 | 0.932 | 0.9317 |
| 0.14 | 2.0 | 500 | 0.1483 | 0.9365 | 0.9368 |
| 0.0915 | 3.0 | 750 | 0.1424 | 0.9355 | 0.9356 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cu102
- Datasets 2.8.0
- Tokenizers 0.10.3
|
Stardragon2099/florencetrial-17e | Stardragon2099 | "2024-12-17T06:28:21Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-12-17T06:26:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
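In the absence of author-provided instructions, the sketch below follows the standard Florence-2 usage pattern (`AutoProcessor` plus `AutoModelForCausalLM` with `trust_remote_code=True`). The task prompt `<CAPTION>` and the image path are assumptions and may need to be adapted for this fine-tune.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

repo_id = "Stardragon2099/florencetrial-17e"
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)

image = Image.open("example.jpg")  # placeholder path; any RGB image
task = "<CAPTION>"                 # assumed Florence-2-style task prompt

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```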
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidschulte/ESM_DBQ__Bottega.Veneta.Product.prices.United.States_default | davidschulte | "2024-11-28T16:18:38Z" | 9 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:DBQ/Bottega.Veneta.Product.prices.United.States",
"arxiv:2410.15148",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-11-28T16:18:34Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- DBQ/Bottega.Veneta.Product.prices.United.States
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM DBQ/Bottega.Veneta.Product.prices.United.States
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** DBQ/Bottega.Veneta.Product.prices.United.States
- **ESM architecture:** linear
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
## Training Details
### Intermediate Task
- **Task ID:** DBQ/Bottega.Veneta.Product.prices.United.States
- **Subset [optional]:** default
- **Text Column:** title
- **Label Column:** category2_code
- **Dataset Split:** train
- **Sample size [optional]:** 4469
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
## How can I use Embedding Space Maps for Intermediate Task Selection?
[![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Map, please cite our [paper](https://arxiv.org/abs/2410.15148).
**BibTeX:**
```
@misc{schulte2024moreparameterefficientselectionintermediate,
title={Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning},
author={David Schulte and Felix Hamborg and Alan Akbik},
year={2024},
eprint={2410.15148},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.15148},
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. arXiv preprint arXiv:2410.15148.
```
## Additional Information
|
mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF | mradermacher | "2024-09-08T23:25:33Z" | 93 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/Stella-mistral-nemo-12B-v2",
"base_model:quantized:nbeerbower/Stella-mistral-nemo-12B-v2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-09-08T16:54:02Z" | ---
base_model: nbeerbower/Stella-mistral-nemo-12B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/Stella-mistral-nemo-12B-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
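As one concrete option, the quants can also be run from Python with `llama-cpp-python`; the sketch below assumes that package is installed and uses the Q4_K_M file listed in the Provided Quants table below.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from this repo (Q4_K_M is the "recommended" size below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF",
    filename="Stella-mistral-nemo-12B-v2.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```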
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Nishitbaria/Aurora-style-lora | Nishitbaria | "2024-12-08T06:38:22Z" | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-12-08T06:13:15Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:Aurora_BorealiStyler_FLUX-000018:1.3> This is a digital artwork
showcasing a breathtaking aurora borealis display in a nighttime landscape.
The central subject is the word "Aurora", stylized in glowing, ethereal
colors, rendered in vibrant hues of green, blue, and pink, appearing to be
formed by the swirling aurora lights.
output:
url: images/41903012.jpeg
- text: >-
<lora:Aurora_BorealiStyler_FLUX-000018:1.3> This is a digital artwork
showcasing a breathtaking aurora borealis display in a nighttime landscape.
The central subject is the word "Nishit Bariya", stylized in glowing,
ethereal colors, rendered in vibrant hues of green, blue, and pink,
appearing to be formed by the swirling aurora lights.
output:
url: images/example_bagfi1yvv.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: >-
Digital artwork showcasing a breathtaking aurora borealis display in a
nighttime landscape. The central subject is (SUBJECT), stylized in glowing,
ethereal colors, rendered in vibrant hues of (COLORS), appearing to be formed
by the swirling aurora lights.
---
# Aurora-style-lora
<Gallery />
## Trigger words
You should use the full instance prompt to trigger the image generation: `Digital artwork showcasing a breathtaking aurora borealis display in a nighttime landscape. The central subject is (SUBJECT), stylized in glowing, ethereal colors, rendered in vibrant hues of (COLORS), appearing to be formed by the swirling aurora lights.`
## Download model
Weights for this model are available in Safetensors format.
[Download](/Nishitbaria/Aurora-style-lora/tree/main) them in the Files & versions tab.
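A minimal 🤗 diffusers sketch for applying the LoRA is shown below. It assumes access to the gated `black-forest-labs/FLUX.1-dev` base model, a recent diffusers release with Flux support, and a GPU with enough memory; the sampling parameters are illustrative defaults, not tuned values.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Nishitbaria/Aurora-style-lora")

# Instance prompt from this card, with the (SUBJECT)/(COLORS) placeholders filled in.
prompt = (
    "Digital artwork showcasing a breathtaking aurora borealis display in a nighttime "
    "landscape. The central subject is the word 'Aurora', stylized in glowing, ethereal "
    "colors, rendered in vibrant hues of green, blue, and pink, appearing to be formed "
    "by the swirling aurora lights."
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("aurora.png")
```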
|