modelId (string) | author (string) | last_modified (unknown) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (unknown) | card (string)
---|---|---|---|---|---|---|---|---|---|
vdos/fe115e1d-5fee-48d6-a092-e5263ae33a9f | vdos | "2024-12-20T07:18:41" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | "2024-12-20T07:08:10" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe115e1d-5fee-48d6-a092-e5263ae33a9f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b9bcdb29f32a8ac8_train_data.json
ds_type: json
field: text
path: /workspace/input_data/b9bcdb29f32a8ac8_train_data.json
type: completion
debug: null
deepspeed: null
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 25
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: vdos/fe115e1d-5fee-48d6-a092-e5263ae33a9f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b9bcdb29f32a8ac8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe115e1d-5fee-48d6-a092-e5263ae33a9f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fe115e1d-5fee-48d6-a092-e5263ae33a9f
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fe115e1d-5fee-48d6-a092-e5263ae33a9f
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6837
## Model description
More information needed
## Intended uses & limitations
More information needed
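As a rough guide only, this repository appears to hold a LoRA adapter (see the axolotl config above), so it would presumably be loaded on top of the base model with 🤗 PEFT; the snippet below is a hedged sketch, not an official usage example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "NousResearch/Nous-Hermes-llama-2-7b"           # base model from the config above
adapter = "vdos/fe115e1d-5fee-48d6-a092-e5263ae33a9f"  # this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
```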
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.6363 | 0.1684 | 1 | 5.6837 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
catrinbaze/phi3-slerp | catrinbaze | "2024-10-09T12:54:09" | 7 | 0 | null | [
"safetensors",
"phi3small",
"merge",
"mergekit",
"lazymergekit",
"microsoft/Phi-3-small-8k-instruct",
"microsoft/Phi-3-small-128k-instruct",
"custom_code",
"base_model:microsoft/Phi-3-small-128k-instruct",
"base_model:merge:microsoft/Phi-3-small-128k-instruct",
"base_model:microsoft/Phi-3-small-8k-instruct",
"base_model:merge:microsoft/Phi-3-small-8k-instruct",
"region:us"
] | null | "2024-10-09T11:56:28" | ---
base_model:
- microsoft/Phi-3-small-8k-instruct
- microsoft/Phi-3-small-128k-instruct
tags:
- merge
- mergekit
- lazymergekit
- microsoft/Phi-3-small-8k-instruct
- microsoft/Phi-3-small-128k-instruct
---
# phi3-slerp
phi3-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/Phi-3-small-8k-instruct](https://huggingface.co/microsoft/Phi-3-small-8k-instruct)
* [microsoft/Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: microsoft/Phi-3-small-8k-instruct
layer_range: [0, 32]
- model: microsoft/Phi-3-small-128k-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-small-8k-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "catrinbaze/phi3-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)  # Phi-3-small ships custom modeling code
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
k948181/ybdH-1 | k948181 | "2021-04-22T13:34:20" | 0 | 0 | null | [
"region:us"
] | null | "2022-03-02T23:29:05" | >tr|Q8ZR27|Q8ZR27_SALTY Putative glycerol dehydrogenase OS=Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720) OX=99287 GN=ybdH PE=3 SV=1
MNHTEIRVVTGPANYFSHAGSLERLTDFFTPEQLSHAVWVYGERAIAAARPYLPEAFERA
GAKHLPFTGHCSERHVAQLAHACNDDRQVVIGVGGGALLDTAKALARRLALPFVAIPTIA
ATCAAWTPLSVWYNDAGQALQFEIFDDANFLVLVEPRIILQAPDDYLLAGIGDTLAKWYE
AVVLAPQPETLPLTVRLGINSACAIRDLLLDSSEQALADKQQRRLTQAFCDVVDAIIAGG
GMVGGLGERYTRVAAAHAVHNGLTVLPQTEKFLHGTKVAYGILVQSALLGQDDVLAQLIT
AYRRFHLPARLSELDVDIHNTAEIDRVIAHTLRPVESIHYLPVTLTPDTLRAAFEKVEFF
RI
|
talkbank/CHATUtterance-en | talkbank | "2024-02-25T21:58:47" | 710 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-04T00:39:45" | ---
language:
- en
---
# TalkBank Batchalign CHATUtterance
CHATUtterance is a series of BERT-derivative models designed for the task of utterance segmentation, released by the TalkBank project and trained on the utterance diarization samples given by [The Michigan Corpus of Academic Spoken English](https://ca.talkbank.org/access/MICASE.html).
## Usage
The models can be used directly as a BERT-class token classification model following the [instructions from Hugging Face](https://huggingface.co/docs/transformers/tasks/token_classification). Feel free to inspect [this file](https://github.com/TalkBank/batchalign/blob/73ec04761ed3ee2eba04ba0cf14dc898f88b72f7/baln/utokengine.py#L85-L94) for a sense of what the classes mean. Alternatively, to get the full analysis possible with the model, it is best combined with the TalkBank Batchalign suite of analysis software, [available here](https://github.com/talkbank/batchalign2), using `transcribe` mode.
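For example, a minimal sketch using the 🤗 `pipeline` API (the input sentence is illustrative):
```python
from transformers import pipeline

# Token classification over the utterance-segmentation labels listed below
tagger = pipeline("token-classification", model="talkbank/CHATUtterance-en")
print(tagger("hello how are you doing today"))
```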
Target labels:
- `0`: regular form
- `1`: start of utterance/capitalized word
- `2`: end of declarative utterance (end this utterance with a `.`)
- `3`: end of interrogative utterance (end this utterance with a `?`)
- `4`: end of exclamatory utterance (end this utterance with a `!`)
- `5`: break in the utterance; depending on orthography one can insert a `,`
|
itsindro/distilhubert-finetuned-gtzan | itsindro | "2025-01-13T20:18:37" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2025-01-13T20:17:55" | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: marsyas/gtzan
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the marsyas/gtzan dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
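A minimal sketch of running inference with the 🤗 audio-classification pipeline (the audio file path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="itsindro/distilhubert-finetuned-gtzan")
print(classifier("some_music_clip.wav"))  # path to a local audio file (illustrative)
```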
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.978 | 1.0 | 113 | 1.4803 | 0.6 |
| 1.3816 | 2.0 | 226 | 1.0322 | 0.72 |
| 1.0378 | 3.0 | 339 | 0.9786 | 0.75 |
| 0.7748 | 4.0 | 452 | 0.7674 | 0.74 |
| 0.6074 | 5.0 | 565 | 0.6434 | 0.81 |
| 0.5075 | 6.0 | 678 | 0.5948 | 0.77 |
| 0.3899 | 7.0 | 791 | 0.5878 | 0.83 |
| 0.2387 | 8.0 | 904 | 0.5331 | 0.82 |
| 0.1927 | 9.0 | 1017 | 0.5601 | 0.83 |
| 0.1532 | 10.0 | 1130 | 0.5554 | 0.83 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
tarabukinivan/685b22dd-73f7-49e8-b1e5-2d833ca87b3a | tarabukinivan | "2025-01-16T18:53:51" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-16T18:52:39" | ---
library_name: peft
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 685b22dd-73f7-49e8-b1e5-2d833ca87b3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd0f30e3221a1eab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd0f30e3221a1eab_train_data.json
type:
field_instruction: text
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tarabukinivan/685b22dd-73f7-49e8-b1e5-2d833ca87b3a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/dd0f30e3221a1eab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fad7684c-bc01-45e5-a7b7-96df8d9ecd42
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fad7684c-bc01-45e5-a7b7-96df8d9ecd42
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 685b22dd-73f7-49e8-b1e5-2d833ca87b3a
This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 10.3791 |
| 10.3799 | 0.0007 | 8 | 10.3787 |
| 10.3738 | 0.0014 | 16 | 10.3774 |
| 10.3778 | 0.0020 | 24 | 10.3766 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
indridinn/distilbert-base-uncased-finetuned-ner | indridinn | "2021-10-01T22:29:15" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-02T23:29:05" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9274720407485328
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.932220367278798
- name: Accuracy
type: accuracy
value: 0.9836370279759162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9275
- Recall: 0.9370
- F1: 0.9322
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2507 | 1.0 | 878 | 0.0714 | 0.9181 | 0.9243 | 0.9212 | 0.9813 |
| 0.0516 | 2.0 | 1756 | 0.0617 | 0.9208 | 0.9325 | 0.9266 | 0.9828 |
| 0.0306 | 3.0 | 2634 | 0.0610 | 0.9275 | 0.9370 | 0.9322 | 0.9836 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
RichardErkhov/yakazimir_-_qwen_uncCPO_entropy_0_01-awq | RichardErkhov | "2024-12-06T21:48:22" | 5 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"awq",
"region:us"
] | null | "2024-12-06T21:47:43" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen_uncCPO_entropy_0_01 - AWQ
- Model creator: https://huggingface.co/yakazimir/
- Original model: https://huggingface.co/yakazimir/qwen_uncCPO_entropy_0_01/
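A minimal sketch of loading the 4-bit AWQ checkpoint with 🤗 Transformers (this assumes the `autoawq` package is installed; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/yakazimir_-_qwen_uncCPO_entropy_0_01-awq"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # AWQ weights load through the standard API

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```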
Original model description:
---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-0.5b-sft
tags:
- alignment-handbook
- trl
- simpo
- generated_from_trainer
- trl
- simpo
- generated_from_trainer
datasets:
- yakazimir/ultrafeedback_binarized
model-index:
- name: qwen_uncCPO_entropy_0_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_uncCPO_entropy_0_01
This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on the yakazimir/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0500
- Sft Loss: 3.9220
- Rewards/chosen: -4.3252
- Rewards/rejected: -5.1044
- Rewards/accuracies: 0.6892
- Rewards/margins: 0.7793
- Logps/rejected: -5.1044
- Logps/chosen: -4.3252
- Logits/rejected: 0.1444
- Logits/chosen: 0.0509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sft Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0563 | 0.2141 | 400 | 0.0573 | 4.8352 | -5.7454 | -6.0246 | 0.5445 | 0.2792 | -6.0246 | -5.7454 | 0.6512 | 0.5372 |
| 0.0533 | 0.4282 | 800 | 0.0524 | 4.2340 | -4.6954 | -5.0777 | 0.6157 | 0.3823 | -5.0777 | -4.6954 | 0.2939 | 0.1644 |
| 0.0533 | 0.6422 | 1200 | 0.0518 | 4.1504 | -4.5198 | -5.0186 | 0.6484 | 0.4989 | -5.0186 | -4.5198 | 0.4014 | 0.2684 |
| 0.0508 | 0.8563 | 1600 | 0.0512 | 4.0690 | -4.5220 | -5.0081 | 0.6491 | 0.4862 | -5.0081 | -4.5220 | 0.2498 | 0.1344 |
| 0.0529 | 1.0704 | 2000 | 0.0508 | 3.9195 | -4.3917 | -4.9646 | 0.6521 | 0.5729 | -4.9646 | -4.3917 | 0.3268 | 0.2181 |
| 0.0522 | 1.2845 | 2400 | 0.0504 | 4.1797 | -4.6133 | -5.2771 | 0.6647 | 0.6638 | -5.2771 | -4.6133 | 0.2727 | 0.1622 |
| 0.0515 | 1.4986 | 2800 | 0.0504 | 4.0933 | -4.4442 | -5.0786 | 0.6825 | 0.6344 | -5.0786 | -4.4442 | 0.2050 | 0.0984 |
| 0.0526 | 1.7127 | 3200 | 0.0503 | 4.0886 | -4.4943 | -5.1537 | 0.6751 | 0.6594 | -5.1537 | -4.4943 | 0.2002 | 0.0920 |
| 0.0533 | 1.9267 | 3600 | 0.0501 | 3.9857 | -4.3809 | -5.1003 | 0.6825 | 0.7195 | -5.1003 | -4.3809 | 0.1348 | 0.0421 |
| 0.0493 | 2.1408 | 4000 | 0.0500 | 3.9751 | -4.3954 | -5.1537 | 0.6840 | 0.7583 | -5.1537 | -4.3954 | 0.3029 | 0.1980 |
| 0.0522 | 2.3549 | 4400 | 0.0500 | 3.9820 | -4.4013 | -5.1632 | 0.6869 | 0.7619 | -5.1632 | -4.4013 | 0.2139 | 0.1131 |
| 0.0513 | 2.5690 | 4800 | 0.0500 | 3.9732 | -4.3709 | -5.1160 | 0.6944 | 0.7451 | -5.1160 | -4.3709 | 0.1787 | 0.0785 |
| 0.0498 | 2.7831 | 5200 | 0.0500 | 3.9372 | -4.3318 | -5.0969 | 0.6892 | 0.7651 | -5.0969 | -4.3318 | 0.2138 | 0.1134 |
| 0.0496 | 2.9972 | 5600 | 0.0500 | 3.9220 | -4.3252 | -5.1044 | 0.6892 | 0.7793 | -5.1044 | -4.3252 | 0.1444 | 0.0509 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
jysssacc/bloomz-560m_lora_lr5e-05_bs4_epoch5_wd0.01 | jysssacc | "2024-01-11T15:42:09" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-01-10T16:41:42" | ---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_lora_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-560m_lora_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.245 | 1.0 | 157 | 3.5777 |
| 3.5546 | 2.0 | 314 | 3.3217 |
| 3.3905 | 3.0 | 471 | 3.2829 |
| 3.2189 | 4.0 | 628 | 3.2776 |
| 3.1307 | 5.0 | 785 | 3.2822 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0 |
IaraMed/mbert_base | IaraMed | "2025-03-17T14:45:26" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-14T17:45:03" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/98ba4e5f-1290-4026-b2d6-e33ba92c84e7 | sergioalves | "2025-01-13T09:47:22" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"license:llama3",
"region:us"
] | null | "2025-01-13T09:44:18" | ---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 98ba4e5f-1290-4026-b2d6-e33ba92c84e7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 59d0333252bc2389_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/59d0333252bc2389_train_data.json
type:
field_input: brief
field_instruction: headline
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/98ba4e5f-1290-4026-b2d6-e33ba92c84e7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/59d0333252bc2389_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e447129b-78c8-4acf-9f49-b741f367afbb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e447129b-78c8-4acf-9f49-b741f367afbb
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 98ba4e5f-1290-4026-b2d6-e33ba92c84e7
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0105 | 1 | 2.8111 |
| 2.6068 | 0.0842 | 8 | 2.6399 |
| 2.4671 | 0.1684 | 16 | 2.5314 |
| 2.4191 | 0.2526 | 24 | 2.5070 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mncai/Mistral-7B-CollectiveCognition | mncai | "2023-10-22T04:35:52" | 1,452 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"dataset:CollectiveCognition/chats-data-2023-09-27",
"arxiv:2306.02707",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-20T06:54:07" | ---
pipeline_tag: text-generation
license: mit
language:
- en
library_name: transformers
tags:
- MindsAndCompany
datasets:
- CollectiveCognition/chats-data-2023-09-27
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- CollectiveCognition/chats-data-2023-09-27
### Prompt Template
- Llama Prompt Template
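As a hedged sketch, generation with a Llama-style instruction prompt (the exact template wording is an assumption based on the note above) could look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/Mistral-7B-CollectiveCognition"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] What is a large language model? [/INST]"  # Llama-style instruction format (assumed)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```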
## Limitations & Biases:
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
rhndeveloper/orca-2-7B-v01-fine-tuned-using-ludwig-4bit | rhndeveloper | "2024-01-13T14:04:46" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | "2024-01-13T14:04:40" | ---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
DrBob2142/Mix-Models | DrBob2142 | "2023-01-02T13:34:18" | 0 | 31 | null | [
"region:us"
] | null | "2022-12-05T16:45:48" | These are a few mix models that I had created in my spare time, whatever I consider to be the primary model in the mix will have its name before the mixing method
Midnight Mixes are the latest Mixes, in theroy they should have a even spread of all modlels in the mix
Spiced models folow an idea to keep as mutch of one model preserved when mixed with somthing else, will be generaly anything Mixed at 25 weighted sum or lower
Bake models folow a very easy to track mix of the models, First part of the mix is the model that was primaraly used
Potluck models end up being a large mix of models, with one model still having a majority in the mix, mixing methods can get messy so true recipes can be had to keep track of
Below are a few examples of what I consider to be the best of these mixes thus far, will update this page the more I mess with it!
All examples use the WD 1.4 VAE kl-f8-anime2.ckpt, and are done on clip skip 2
### Examples :
## Midnight Mixes:
# DustStorm Waistlands

# CherryBlossoms

# Cyber Fox

## Other Mixes :
# Fire Fox


# Mecha


# Cyber Fox


|
bunnycore/Llama-3.2-3B-Apex | bunnycore | "2024-10-31T09:47:21" | 87 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:bunnycore/Llama-3.2-3B-All-Mix",
"base_model:merge:bunnycore/Llama-3.2-3B-All-Mix",
"base_model:bunnycore/Llama-3.2-3B-Booval",
"base_model:merge:bunnycore/Llama-3.2-3B-Booval",
"base_model:bunnycore/Llama-3.2-3B-CodeReactor",
"base_model:merge:bunnycore/Llama-3.2-3B-CodeReactor",
"base_model:bunnycore/Llama-3.2-3B-Long-Think",
"base_model:merge:bunnycore/Llama-3.2-3B-Long-Think",
"base_model:bunnycore/Llama-3.2-3B-Mix-Skill",
"base_model:merge:bunnycore/Llama-3.2-3B-Mix-Skill",
"base_model:bunnycore/Llama-3.2-3B-Prodigy",
"base_model:merge:bunnycore/Llama-3.2-3B-Prodigy",
"base_model:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:merge:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:bunnycore/Llama-3.2-3B-Sci-Think",
"base_model:merge:bunnycore/Llama-3.2-3B-Sci-Think",
"base_model:djuna-test-lab/TEST-L3.2-ReWish-3B",
"base_model:merge:djuna-test-lab/TEST-L3.2-ReWish-3B",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:merge:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-31T09:43:32" | ---
base_model:
- bunnycore/Llama-3.2-3B-Mix-Skill
- bunnycore/Llama-3.2-3B-Prodigy
- bunnycore/Llama-3.2-3B-Sci-Think
- bunnycore/Llama-3.2-3B-All-Mix
- bunnycore/Llama-3.2-3B-Booval
- bunnycore/Llama-3.2-3B-Long-Think
- bunnycore/Llama-3.2-3B-Pure-RP
- djuna-test-lab/TEST-L3.2-ReWish-3B
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- bunnycore/Llama-3.2-3B-CodeReactor
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [bunnycore/Llama-3.2-3B-Mix-Skill](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix-Skill)
* [bunnycore/Llama-3.2-3B-Prodigy](https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy)
* [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
* [bunnycore/Llama-3.2-3B-All-Mix](https://huggingface.co/bunnycore/Llama-3.2-3B-All-Mix)
* [bunnycore/Llama-3.2-3B-Booval](https://huggingface.co/bunnycore/Llama-3.2-3B-Booval)
* [bunnycore/Llama-3.2-3B-Long-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Long-Think)
* [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
* [djuna-test-lab/TEST-L3.2-ReWish-3B](https://huggingface.co/djuna-test-lab/TEST-L3.2-ReWish-3B)
* [bunnycore/Llama-3.2-3B-CodeReactor](https://huggingface.co/bunnycore/Llama-3.2-3B-CodeReactor)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: djuna-test-lab/TEST-L3.2-ReWish-3B
- model: bunnycore/Llama-3.2-3B-All-Mix
- model: bunnycore/Llama-3.2-3B-Mix-Skill
- model: bunnycore/Llama-3.2-3B-Booval
- model: bunnycore/Llama-3.2-3B-Sci-Think
- model: bunnycore/Llama-3.2-3B-CodeReactor
- model: bunnycore/Llama-3.2-3B-Long-Think
- model: bunnycore/Llama-3.2-3B-Pure-RP
- model: bunnycore/Llama-3.2-3B-Prodigy
merge_method: model_stock
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
normalize: false
int8_mask: true
dtype: float16
```
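Since the card only ships the merge recipe, here is a minimal sketch of loading the merged model with 🤗 Transformers (generation settings are illustrative, not from the card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bunnycore/Llama-3.2-3B-Apex", device_map="auto")
print(generator("Explain model merging in one paragraph.", max_new_tokens=128)[0]["generated_text"])
```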
|
juhw/uiop78 | juhw | "2025-03-05T11:01:48" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-05T10:58:14" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/Platinum_3 | TOMFORD79 | "2025-03-04T17:41:59" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-04T17:31:42" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
iamanshul/ddpm-butterflies-128 | iamanshul | "2023-01-09T14:23:04" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2023-01-05T14:56:34" | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch: sample one image with the 🤗 Diffusers DDPMPipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("iamanshul/ddpm-butterflies-128")
pipeline().images[0].save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/iamanshul/ddpm-butterflies-128/tensorboard?#scalars)
|
StepLaw/StepLaw-N_429M-D_19.0B-LR2.762e-03-BS1048576 | StepLaw | "2025-04-15T15:34:50" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-10T14:54:33" | |
zendeer/custom_speecht5_finetuned_voxpopuli_nl | zendeer | "2024-06-18T17:42:53" | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-06-18T17:41:38" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: custom_speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom_speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
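A hedged sketch of text-to-speech inference with this checkpoint; the speaker x-vector source (`Matthijs/cmu-arctic-xvectors`) and the sample text are assumptions, not part of this card:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("zendeer/custom_speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("zendeer/custom_speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector taken from a public embedding set (assumption; any 512-dim x-vector works).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```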
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
nickmalhotra/ProjectIndus | nickmalhotra | "2024-07-21T09:20:46" | 504 | 14 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:osl-3.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-18T10:54:29" | ---
license: osl-3.0
model-index:
- name: indus_1.175B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/ProjectIndus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 25.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/indus_1.175B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/indus_1.175B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 0.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/indus_1.175B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/indus_1.175B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/indus_1.175B
name: Open LLM Leaderboard
widget:
- example_title: वर्तमान प्रधानमंत्री
messages:
- role: user
content: >-
भारत के वर्तमान प्रधानमंत्री कौन हैं?
- example_title: होली का महत्व
messages:
- role: user
content: >-
होली का महत्व क्या है?
---
# Model Card for Project Indus
<!-- Provide a quick summary of what the model is/does. -->
Project Indus LLM is a groundbreaking open-source language model tailored for Hindi and its dialects, designed to enhance natural language processing and generation across diverse Indian linguistic applications.
# Table of Contents
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Technical Specifications](#technical-specifications)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary](#glossary)
- [More Information](#more-information)
- [Model Card Authors](#model-card-authors)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
Project Indus LLM aims to provide a robust language model for Indian languages, starting with Hindi and its dialects. This open-source foundational model, hosted on Hugging Face, is tailored for easy integration and further development by researchers and developers focusing on Indian linguistic diversity.
<!-- Provide a longer summary of what this model is/does. -->
The model is pretrained on Hindi and its dialects and then instruction-tuned.
- **Developed by:** Nikhil Malhotra, Nilesh Brahme, Satish Mishra, Vinay Sharma (Makers Lab, TechMahindra)
- **Model type:** Foundational Language model
- **Language(s) (NLP):** hin, bho, mai, doi
- **License:** other
- **Parent Model:** It is a ground-up model built on the GPT-2 architecture, starting from the tokenizer through to the decoder
- **Resources for more information:** <https://www.techmahindra.com/en-in/innovation/the-indus-project/>
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Uses include question answering and conversation in Hindi and its dialects. The model is intended to be reward-tuned for use across various industries:
1. Call center
2. Healthcare
3. Automotive
4. Telecom
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Project Indus can be directly used for generating text, simulating conversation, and other text generation tasks without additional training.
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Uses include question answering and conversation in Hindi and its dialects. The model is intended to be reward-tuned for use across various industries:
1. Call center
2. Healthcare
3. Automotive
4. Telecom
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Project Indus is not designed for high-stakes decision-making tasks such as medical diagnosis or legal advice, and it currently does not support fill-in-the-blank exercises, multiple-choice Q&A, or similar applications.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models
(see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
We have tried to mitigate various biases by removing them from the training data. However, since this is a generative model, it may still produce hallucinations.
Any disturbing or harmful stereotype produced by the model is purely unintentional and coincidental.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
It is recommended to avoid biases and negative connotations in the model, and regular updates along with community feedback are crucial for addressing any emergent bias or misuse scenarios.
# Training Details
The model was trained on a curated dataset comprising various sources of Hindi text, including literature, news articles, and web content.
## Infrastructure
- **Training Infrastructure:** Utilized high-performance computing resources provided by CDAC, featuring NVIDIA A100 GPUs.
- **Running Infrastructure:** Tested for both GPU (NVIDIA GeForce RTX 3070 or higher) and CPU (Intel Xeon Platinum 8580) environments.
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The Project Indus LLM was trained on a diverse and extensive dataset comprising various sources of Hindi text and its dialects. The data collection and curation process was meticulously designed to cater to the linguistic diversity and complexity of Indian languages, particularly focusing on Hindi and its 37 dialects.
### Data Sources and Collection
Data was collected in three main buckets:
1. **Open-Source Hindi Data**: This included publicly available sources from the internet across different categories such as news, and non-news. Automated scripts were used to scrape and extract text from web pages. Here are some of the sources:
- **News**: Articles from news portals.
- **Non-News**: Diverse sources including Wikipedia, commoncrawl.org, and other culturally significant content like 'Man ki Baat' from AIR.
2. **Translated Data**: A portion of the Pile dataset, which is a large English dataset used for training AI models, was translated into Hindi using three different translation models. IndicTrans2 (AI4Bharat) was selected as the best model for this purpose based on its accuracy and efficiency.
3. **Dialects**: Data collection for dialects presented a unique challenge due to the limited material available on the internet. Data for major dialects like Maithili, Bhojpuri, Magahi, and Braj Bhasha was collected from multiple sources, including fieldwork where representatives collected old books and other texts, which were then digitized and converted into text data.
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training involved extensive preprocessing to clean and standardize the text, followed by supervised learning on a high-performance computing setup.
- **Pre-training:** Conducted on a dataset of 22 billion tokens using advanced tokenization techniques.
- **Fine-Tuning:** Supervised fine-tuning performed with a focus on Indian languages, utilizing datasets specifically tailored for cultural, political, and social contexts.
Below is a table summarizing the datasets used for pre-training and fine-tuning the model:
| Phase | Data Source | Tokens | Notes |
|---------------|-----------------------------------------|-----------|-----------------------------------------------------|
| Pre-training | Cleaned dataset of Hindi and dialects | 22 billion| Utilized advanced tokenization |
| Fine-tuning | Custom datasets tailored for Indian languages | Varied | Focus on cultural, political, and social contexts |
### Preprocessing
The collected data underwent several stages of cleaning and preprocessing to ensure high quality and usability for training:
- **Cleaning**: The data was cleaned of unwanted text, characters, and personal information like mobile numbers. Transliteration was performed where necessary, and unwanted tags from scraped web pages were removed.
- **Bias Removal**: A Bias Removal Toolkit was developed to detect and remove biased language from the training data. This toolkit helped in ensuring that the text used for training the model was ethical, correct, and socially responsible.
- **Tokenization**: The data was tokenized using a custom tokenizer developed specifically for Hindi and its dialects. This tokenizer was based on Byte Pair Encoding (BPE) with additional mechanisms like byte fallback to handle the peculiarities of Hindi script efficiently.
#### Summary
The final dataset used for training consisted of:
- **Raw Data Size**: Over 500 GB of raw data collected.
- **Cleaned and Curated Data**: Approximately 200 GB of clean Hindi and dialect text data.
- **Tokenization**: Utilized 22 billion tokens created from the cleaned data for pre-training.
This diverse and extensive training data foundation allowed Project Indus LLM to develop robust capabilities for understanding and generating Hindi text, making it a powerful tool for applications requiring Indian language processing.
# Evaluation
### Indic LLM Leaderboard Results
Project Indus LLM has been evaluated using the Indic LLM Leaderboard, which employs the `indic_eval` evaluation framework specifically designed for assessing models on Indian language tasks. This framework provides a comprehensive view of model performance across a variety of benchmarks tailored to Indian languages.
Detailed results from the Indic LLM Leaderboard (α), accessible at [Hugging Face Indic LLM Leaderboard](https://huggingface.co/spaces/Cognitive-Lab/indic_llm_leaderboard), are shown below:
| Task | Version | Metric | Value | | Stderr |
|--------------------------------|---------|----------|-------|---|--------|
| All | | acc | 0.2891| ± | 0.0109 |
| | | acc_norm | 0.3013| ± | 0.0112 |
| indiceval:ARC-Challenge:hindi:10 | 0 | acc | 0.2167| ± | 0.0120 |
| | | acc_norm | 0.2474| ± | 0.0126 |
| indiceval:ARC-Easy:hindi:5 | 0 | acc | 0.3615| ± | 0.0099 |
| | | acc_norm | 0.3552| ± | 0.0098 |
These results highlight the model's capabilities in understanding and generating Hindi language text under controlled testing conditions. The standard error values indicate the variance observed during the evaluation, providing insights into the consistency of the model's performance across different evaluation runs.
### Open LLM Leaderboard Evaluation Results
Additionally, Project Indus LLM has been evaluated on the Open LLM Leaderboard, which provides another layer of benchmarking by comparing the model's performance against other state-of-the-art language models. Below are the summarized results from the Open LLM Leaderboard:
| Metric |Value|
|---------------------------------|----:|
|Avg. |20.07|
|AI2 Reasoning Challenge (25-Shot)|22.70|
|HellaSwag (10-Shot) |25.04|
|MMLU (5-Shot) |23.12|
|Winogrande (5-shot) |49.57|
These benchmark results can be explored further on [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### Evaluation Context
The evaluation metrics `acc` (accuracy) and `acc_norm` (normalized accuracy) are used to quantify the model's performance. The tasks are differentiated by their difficulty and the specific dataset used, such as the ARC Challenge and ARC Easy sets, both adapted to Hindi language conditions to ensure relevant assessment. This structured evaluation ensures that the Indus LLM not only performs well in generalized text generation tasks but also in more specialized, context-specific scenarios pertinent to the Indian linguistic framework.
## Results
Project Indus demonstrates competitive performance, particularly in text generation tasks, as evidenced by its scores on standardized benchmarks.
# Technical Specifications
## Model Architecture and Objective
Project Indus LLM is based on a GPT-2.0-like architecture, tailored to handle the complexities of the Hindi language and its dialects. This model was designed to serve as a foundational model that can be fine-tuned for various applications, making it highly versatile and adaptable to different domains within the Indian context.
- **Architecture Details**:
- **Layers**: 22 transformer layers, which provide a deep neural network capable of understanding complex language patterns.
- **Heads**: 32 attention heads per layer, facilitating a broad attention mechanism across different parts of the input data.
- **Embedding Size**: 2048, which allows the model to represent a wide variety of information and nuances in the data.
- **Vocabulary Size**: 32,300, tailored to include a comprehensive set of Hindi words and common phrases found in the training data.
The objective of this model is to provide a robust tool for text generation and understanding in Hindi and its dialects, supporting the development of applications that require natural language processing in these languages. It also aims to bridge the gap in technology where Indian languages are underrepresented, providing a platform for further linguistic research and technological inclusion.
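The configuration below is an illustrative sketch that mirrors the architecture numbers above with the standard `GPT2Config`; it does not load the released Project Indus weights, which ship with their own configuration on the Hub.
```python
from transformers import GPT2Config, GPT2LMHeadModel

# Illustrative only: rebuild the published shape (22 layers, 32 heads, 2048 embed, 32,300 vocab).
config = GPT2Config(
    vocab_size=32300,
    n_embd=2048,
    n_layer=22,
    n_head=32,
)
model = GPT2LMHeadModel(config)
print(f"~{model.num_parameters() / 1e9:.2f}B parameters")
```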
## Compute Infrastructure
##### Hardware
The pre-training and fine-tuning of Project Indus LLM were conducted on high-performance computing infrastructure provided by the Centre for Development of Advanced Computing (CDAC). This setup included:
- **Nodes and GPUs**: Utilization of six nodes, each equipped with eight NVIDIA A100 GPUs. These GPUs are state-of-the-art for machine learning tasks and provide the necessary computational power to handle the large volumes of data and complex model architectures.
- **Memory and Storage**: Each node was equipped with ample memory and storage to handle the datasets and model parameters efficiently. Specific configurations included 40 GB of GPU memory per card, essential for training large models.
Inference performance was tested on GPU as well as CPU.
- **GPU**: On an NVIDIA GeForce RTX 3070, we have observed inference times of roughly 5-10 s for 250-350 generated tokens.
- **CPU**: On an Intel Xeon(R) Platinum 8580 CPU, we have observed performance comparable to the GPU, with throughput above 30 tokens/second.
##### Software
The software environment was crucial for efficiently training and running the model. Key components included:
- **Operating System**: Linux, chosen for its stability and support for high-performance computing tasks.
- **Machine Learning Frameworks**: PyTorch, used for its flexibility and efficiency in training deep learning models. It supports extensive parallel processing and GPU acceleration, which are critical for training large models like Project Indus LLM.
- **Job Scheduler**: SLURM (Simple Linux Utility for Resource Management) was used to manage and allocate resources effectively across the distributed system. This ensured optimal scheduling of training jobs without resource contention.
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
The detailed citation information will help in acknowledging the work and efforts of the team behind Project Indus LLM when it is used or referenced in academic or professional settings.
```bibtex
@article{malhotra2024projectindus,
title={Project Indus: A Foundational Model for Indian Languages},
author={Malhotra, Nikhil and Brahme, Nilesh and Mishra, Satish and Sharma, Vinay},
journal={Tech Mahindra Makers Lab},
year={2024},
url={https://www.techmahindra.com/en-in/innovation/the-indus-project/}
}
```
**APA:**
Malhotra, N., Brahme, N., Mishra, S., & Sharma, V. (2024). Project Indus: A Foundational Model for Indian Languages. *Tech Mahindra Makers Lab*. Available at <https://www.techmahindra.com/en-in/innovation/the-indus-project/>
# Glossary
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
This glossary section explains key terms used throughout the model documentation and technical details, helping users unfamiliar with certain concepts to better understand the content.
- **Transformer Layers**: Part of a neural network architecture that uses self-attention mechanisms to process sequential data such as text. Essential for NLP tasks.
- **Attention Heads**: Sub-units of a model layer that allow the model to focus on different parts of the input sequence when making predictions.
- **Embedding Size**: The size of the vector used to represent each token or word in a dense numerical form. Larger embeddings can capture more detailed information.
- **Block Size**: The maximum length of the input tokens the model can process in one operation.
- **Vocabulary Size**: The total number of unique words or tokens that the model can understand and generate.
# More Information
For further details on Project Indus LLM, including additional documentation, tutorials, and community discussions, visit the following resources:
- **Project Repository**: [Hugging Face Repository](https://huggingface.co/nickmalhotra/ProjectIndus)
- **Tech Mahindra Makers Lab**: Insights into the research and development behind Project Indus can be found on the [Tech Mahindra Innovation page](https://www.techmahindra.com/en-in/innovation/makers-lab/).
- **Community Forums**: Engage with the community on [Hugging Face Forums](https://huggingface.co/nickmalhotra/ProjectIndus/discussions?status=open&type=discussion) for support, brainstorming, and sharing of new ideas related to Project Indus.
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
The model card and documentation for Project Indus LLM were collaboratively authored by:
- **Nikhil Malhotra**: Chief Innovation Officer at Tech Mahindra.
- **Nilesh Brahme**: Senior AI Research Scientist and one of the primary contributors to the Project Indus development.
- **Satish Mishra**: AI Architect, whose insights have significantly shaped the model's capabilities.
- **Vinay Sharma**: LLM Engineer focused on the linguistic data processing and model training aspects of Project Indus.
# Model Card Contact
For inquiries, support, or further information regarding Project Indus LLM, please reach out through the following channels:
- **Email**: [[email protected]](mailto:[email protected]) - For direct queries and professional engagements.
- **GitHub Issues**: For technical issues, feature requests, or contributions, please use the Issues section of the [Project Indus GitHub repository](https://github.com/Tech-Mahindra-Makers-Lab/Indus-1.1B).
- **Hugging Face Spaces**: Questions and discussions related to model implementation and community projects can be posted in our dedicated space on Hugging Face.
# How to Get Started with the Model
To begin using Project Indus LLM for your projects, follow these steps to set up and run the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("nickmalhotra/ProjectIndus")
tokenizer = AutoTokenizer.from_pretrained("nickmalhotra/ProjectIndus")
# Example inference
def format_template(user_prompt):
messages = [
{"role": "user", "content": user_prompt},
]
response = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
return response
user_prompt = """भारत के वर्तमान प्रधानमंत्री कौन हैं?"""
input_ids = format_template(user_prompt)
# Generate text using the model
output = model.generate(input_ids,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
max_length=1024,
num_beams=5,
do_sample=True,
early_stopping=True,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.2,
no_repeat_ngram_size=3,
num_return_sequences=1,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```
## Disclaimer
#### Model Limitations
Project Indus LLM is trained with single instruction tuning, which may result in hallucinations—instances where the model generates plausible but inaccurate information. Users should exercise caution, especially in scenarios requiring high factual accuracy.
#### Adaptation for Specific Use Cases
Project Indus LLM is designed as a foundational model suitable for further development and fine-tuning. Users are encouraged to adapt and refine the model to meet specific requirements of their applications.
#### Recommendations for Fine-Tuning
- **Identify Specific Needs**: Clearly define the requirements of your use case to guide the fine-tuning process.
- **Curate Targeted Data**: Ensure the training data is relevant and of high quality to improve model performance.
- **Continuous Evaluation**: Regularly assess the model's performance during and after fine-tuning to maintain accuracy and reduce biases.
This disclaimer aims to provide users with a clear understanding of the model's capabilities and limitations, facilitating its effective application and development. |
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s564 | jonatasgrosman | "2022-07-25T04:48:13" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-25T04:48:01" | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-8_female-2_s564
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
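A minimal sketch of transcription with the HuggingSound tool mentioned above; the audio path is a placeholder, and input must be sampled at 16 kHz:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s564")

# Placeholder path; the audio file should be sampled at 16 kHz.
audio_paths = ["/path/to/sample.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```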
|
hgnoi/p8f4ZWvGfTPEMvJM | hgnoi | "2024-05-25T13:42:56" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T13:40:28" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gokulsrinivasagan/bert_tiny_olda_book_10_v1_sst2 | gokulsrinivasagan | "2025-02-11T20:43:10" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_tiny_olda_book_10_v1",
"base_model:finetune:gokulsrinivasagan/bert_tiny_olda_book_10_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-11T20:40:14" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/bert_tiny_olda_book_10_v1
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_tiny_olda_book_10_v1_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8428899082568807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_tiny_olda_book_10_v1_sst2
This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_olda_book_10_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_olda_book_10_v1) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3667
- Accuracy: 0.8429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3588 | 1.0 | 264 | 0.3667 | 0.8429 |
| 0.2083 | 2.0 | 528 | 0.3749 | 0.8670 |
| 0.1452 | 3.0 | 792 | 0.4113 | 0.8624 |
| 0.1111 | 4.0 | 1056 | 0.4390 | 0.8612 |
| 0.0861 | 5.0 | 1320 | 0.4392 | 0.8635 |
| 0.0695 | 6.0 | 1584 | 0.4670 | 0.8681 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
zubairsalman7/xray_vit | zubairsalman7 | "2024-12-11T14:21:49" | 193 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"medical-imaging",
"chest-xray",
"tumor-detection",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-12-04T19:37:11" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- medical-imaging
- chest-xray
- tumor-detection
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-xray-tumor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-xray-tumor
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the chest-xray-tumor dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2989
- Accuracy: 0.9574
## Model description
More information needed
## Intended uses & limitations
More information needed
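A hedged sketch of inference with the 🤗 Transformers image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="zubairsalman7/xray_vit")

# Placeholder path to a chest X-ray image.
predictions = classifier("chest_xray.png")
print(predictions)
```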
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.5283 | 3.6765 | 125 | 0.2948 | 0.9606 |
| 0.516 | 7.3529 | 250 | 0.2843 | 0.9601 |
| 0.4878 | 11.0294 | 375 | 0.2756 | 0.9601 |
| 0.459 | 14.7059 | 500 | 0.2801 | 0.9601 |
| 0.4462 | 18.3824 | 625 | 0.2761 | 0.9595 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
hopkins/eng-deu-simcse.dev2.4440 | hopkins | "2023-07-04T18:30:13" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-03T17:07:41" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-simcse.dev2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-simcse.dev2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6391
- Bleu: 21.6215
## Model description
More information needed
## Intended uses & limitations
More information needed
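Since the base checkpoint is mBART-50 many-to-many, a hedged English-to-German inference sketch looks like this; the language codes follow the mBART-50 convention and are assumptions about this fine-tune:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("hopkins/eng-deu-simcse.dev2.4440")
tokenizer = AutoTokenizer.from_pretrained("hopkins/eng-deu-simcse.dev2.4440")

# mBART-50 convention: set the source language and force the target-language token.
tokenizer.src_lang = "en_XX"
encoded = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```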
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sofia-gb/fashionSigLIP-roturas4 | Sofia-gb | "2025-04-13T19:50:29" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2025-04-05T01:09:41" | |
bartowski/Tiger-Gemma-9B-v2-GGUF | bartowski | "2024-09-04T14:59:30" | 198 | 2 | null | [
"gguf",
"text-generation",
"base_model:TheDrummer/Tiger-Gemma-9B-v2",
"base_model:quantized:TheDrummer/Tiger-Gemma-9B-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-04T14:21:24" | ---
base_model: TheDrummer/Tiger-Gemma-9B-v2
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Tiger-Gemma-9B-v2
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3658">b3658</a> for quantization.
Original model: https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v2
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model
```
Note that this model does not support a System prompt.
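Besides LM Studio, a hedged sketch using the `llama-cpp-python` bindings (an alternative this card does not mention; the file name and sampling settings are placeholders, and the prompt string follows the format above):
```python
from llama_cpp import Llama

# Placeholder file name; pick any quant downloaded from this repository.
llm = Llama(model_path="Tiger-Gemma-9B-v2-Q4_K_M.gguf", n_ctx=4096)

prompt = "<start_of_turn>user\nWrite a haiku about tigers.<end_of_turn>\n<start_of_turn>model\n"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```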
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Tiger-Gemma-9B-v2-f16.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-f16.gguf) | f16 | 18.49GB | false | Full F16 weights. |
| [Tiger-Gemma-9B-v2-Q8_0.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q8_0.gguf) | Q8_0 | 9.83GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Tiger-Gemma-9B-v2-Q6_K_L.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q6_K_L.gguf) | Q6_K_L | 7.81GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Tiger-Gemma-9B-v2-Q6_K.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q6_K.gguf) | Q6_K | 7.59GB | false | Very high quality, near perfect, *recommended*. |
| [Tiger-Gemma-9B-v2-Q5_K_L.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q5_K_L.gguf) | Q5_K_L | 6.87GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Tiger-Gemma-9B-v2-Q5_K_M.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q5_K_M.gguf) | Q5_K_M | 6.65GB | false | High quality, *recommended*. |
| [Tiger-Gemma-9B-v2-Q5_K_S.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q5_K_S.gguf) | Q5_K_S | 6.48GB | false | High quality, *recommended*. |
| [Tiger-Gemma-9B-v2-Q4_K_L.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_K_L.gguf) | Q4_K_L | 5.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Tiger-Gemma-9B-v2-Q4_K_M.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_K_M.gguf) | Q4_K_M | 5.76GB | false | Good quality, default size for most use cases, *recommended*. |
| [Tiger-Gemma-9B-v2-Q4_K_S.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_K_S.gguf) | Q4_K_S | 5.48GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Tiger-Gemma-9B-v2-Q4_0.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_0.gguf) | Q4_0 | 5.46GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Tiger-Gemma-9B-v2-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_0_8_8.gguf) | Q4_0_8_8 | 5.44GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [Tiger-Gemma-9B-v2-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_0_4_8.gguf) | Q4_0_4_8 | 5.44GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [Tiger-Gemma-9B-v2-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q4_0_4_4.gguf) | Q4_0_4_4 | 5.44GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [Tiger-Gemma-9B-v2-Q3_K_XL.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q3_K_XL.gguf) | Q3_K_XL | 5.35GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Tiger-Gemma-9B-v2-IQ4_XS.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-IQ4_XS.gguf) | IQ4_XS | 5.18GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Tiger-Gemma-9B-v2-Q3_K_L.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q3_K_L.gguf) | Q3_K_L | 5.13GB | false | Lower quality but usable, good for low RAM availability. |
| [Tiger-Gemma-9B-v2-Q3_K_M.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q3_K_M.gguf) | Q3_K_M | 4.76GB | false | Low quality. |
| [Tiger-Gemma-9B-v2-IQ3_M.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-IQ3_M.gguf) | IQ3_M | 4.49GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Tiger-Gemma-9B-v2-Q3_K_S.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q3_K_S.gguf) | Q3_K_S | 4.34GB | false | Low quality, not recommended. |
| [Tiger-Gemma-9B-v2-IQ3_XS.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-IQ3_XS.gguf) | IQ3_XS | 4.14GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Tiger-Gemma-9B-v2-Q2_K_L.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q2_K_L.gguf) | Q2_K_L | 4.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Tiger-Gemma-9B-v2-Q2_K.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-Q2_K.gguf) | Q2_K | 3.81GB | false | Very low quality but surprisingly usable. |
| [Tiger-Gemma-9B-v2-IQ2_M.gguf](https://huggingface.co/bartowski/Tiger-Gemma-9B-v2-GGUF/blob/main/Tiger-Gemma-9B-v2-IQ2_M.gguf) | IQ2_M | 3.43GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Q4_0_X_X
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Tiger-Gemma-9B-v2-GGUF --include "Tiger-Gemma-9B-v2-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Tiger-Gemma-9B-v2-GGUF --include "Tiger-Gemma-9B-v2-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Tiger-Gemma-9B-v2-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Best000/40a42f06-0539-4742-abbd-328cbac17fcd | Best000 | "2025-02-09T02:53:58" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-09T02:30:06" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 40a42f06-0539-4742-abbd-328cbac17fcd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 40a42f06-0539-4742-abbd-328cbac17fcd
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2725
## Model description
More information needed
## Intended uses & limitations
More information needed
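A hedged sketch of loading this LoRA adapter on top of its base model with 🤗 PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")

# Attach the fine-tuned LoRA weights from this repository.
model = PeftModel.from_pretrained(base, "Best000/40a42f06-0539-4742-abbd-328cbac17fcd")
model.eval()
```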
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
EmreDinc/firefox_bug_classifier_all | EmreDinc | "2025-04-13T17:40:13" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-13T17:25:15" | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: firefox_bug_classifier_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# firefox_bug_classifier_all
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3815
- Accuracy: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
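A hedged sketch of classifying a bug-report summary with the 🤗 Transformers text-classification pipeline; the example text is illustrative, and the card does not document the label set:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="EmreDinc/firefox_bug_classifier_all")

# Illustrative bug summary; labels depend on the (undocumented) training label set.
print(classifier("Firefox crashes when opening a PDF attachment from Gmail."))
```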
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4109 | 1.0 | 1563 | 0.4164 | 0.832 |
| 0.3322 | 2.0 | 3126 | 0.3583 | 0.8508 |
| 0.2748 | 3.0 | 4689 | 0.3815 | 0.8494 |
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Primeness/primeh1v10c2 | Primeness | "2025-01-28T06:27:13" | 22 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-28T05:54:53" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
turboderp/Mistral-Large-Instruct-2411-exl3 | turboderp | "2025-04-10T08:34:54" | 50 | 3 | null | [
"license:other",
"region:us"
] | null | "2025-04-08T00:08:44" | ---
license: other
license_name: mrl
---
EXL3 quants of [Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411)
[1.40 bits per weight / H4](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/1.4bpw_H4)
[1.60 bits per weight / H4](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/1.6bpw_H4)
[1.80 bits per weight / H5](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/1.8bpw_H5)
[2.00 bits per weight / H5](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/2.0bpw_H5)
[2.25 bits per weight / H5](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/2.25bpw_H5)
[2.50 bits per weight / H5](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/2.5bpw_H5)
[3.00 bits per weight / H6](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/3.0bpw)
[3.50 bits per weight / H6](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/3.5bpw)
[4.00 bits per weight / H6](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/4.0bpw)
[5.00 bits per weight / H6](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/5.0bpw)
[6.00 bits per weight / H6](https://huggingface.co/turboderp/Mistral-Large-Instruct-2411-exl3/tree/6.0bpw)
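Each quant lives on its own branch, so one way to fetch a single variant is with `huggingface_hub`; the revision below is just one example branch from the list above:

```python
# A download sketch using huggingface_hub; the revision shown is one example branch.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="turboderp/Mistral-Large-Instruct-2411-exl3",
    revision="4.0bpw",
    local_dir="Mistral-Large-Instruct-2411-exl3-4.0bpw",
)
```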
 |
Emilianohack6950/GenOrtega | Emilianohack6950 | "2023-07-05T16:50:47" | 32 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-28T22:03:29" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
GenOrtega is an advanced AI-based image generation model designed to capture and recreate the distinctive likeness of Jenna Ortega, a talented actress recognized in the entertainment industry. Using deep learning techniques and a large amount of training data, the model has been trained to generate high-quality, photorealistic images that accurately capture Jenna Ortega's facial features, expressiveness, and unmistakable style.
With GenOrtega, you can explore your creativity and obtain customized images of Jenna Ortega for a range of applications, such as graphic design projects, video game development, film production, digital art, and more. The model offers a wide range of options for customizing the generated images, such as choosing facial expressions, wardrobe changes, and settings, letting you adapt the images to your specific needs.
GenOrtega was trained on a wide variety of images and poses of Jenna Ortega, allowing it to capture her diversity and versatility as an artist. In addition, the model comes with an intuitive, easy-to-use interface, making it accessible to creative professionals and digital art enthusiasts alike.
Discover the magic of GenOrtega and experiment with generating images of Jenna Ortega to bring your ideas and projects to life with a unique touch of style and authenticity.
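Since the repository is tagged for `diffusers`' `StableDiffusionPipeline`, a minimal generation sketch might look like this (the prompt and settings are illustrative, not part of the card):

```python
# A minimal text-to-image sketch with diffusers; prompt and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Emilianohack6950/GenOrtega",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("portrait photo of Jenna Ortega, studio lighting").images[0]
image.save("genortega_sample.png")
```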
Sample pictures of this concept:





|
hoangphu7122002ai/CLM_LLama_v0 | hoangphu7122002ai | "2023-09-05T18:11:08" | 5 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-03T11:19:12" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
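For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig`; this is a reconstruction of the list above, not code shipped with the adapter:

```python
# Reconstruction of the quantization config listed above (not shipped with the adapter).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```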
### Framework versions
- PEFT 0.6.0.dev0
|
aardvark-labs/stp-classifier-6-1 | aardvark-labs | "2025-03-13T12:23:59" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-13T10:03:59" | ---
library_name: transformers
--- |
HASAN55/New_test_roberta_base_for_squad | HASAN55 | "2023-05-31T14:07:01" | 59 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-05-31T11:26:07" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: HASAN55/New_test_roberta_base_for_squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HASAN55/New_test_roberta_base_for_squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4265
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
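A rough extractive question-answering sketch (the repository holds TensorFlow weights, hence `framework="tf"`; the question and context are illustrative):

```python
# A minimal extractive QA sketch; question and context are illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="HASAN55/New_test_roberta_base_for_squad",
    framework="tf",  # the repository is a TensorFlow/Keras checkpoint
)

result = qa(
    question="What architecture was fine-tuned?",
    context="This checkpoint is a roberta-base model fine-tuned for SQuAD-style question answering.",
)
print(result)
```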
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 24804, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0308 | 0 |
| 0.6950 | 1 |
| 0.5347 | 2 |
| 0.4265 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ZMC2019/OpenR1-Qwen-7B-Sparse-P25 | ZMC2019 | "2025-04-03T15:46:12" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:ZMC2019/OpenR1-Qwen-7B-Sparse",
"base_model:finetune:ZMC2019/OpenR1-Qwen-7B-Sparse",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T18:56:07" | ---
base_model: ZMC2019/OpenR1-Qwen-7B-Sparse
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: OpenR1-Qwen-7B-Sparse-P25
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OpenR1-Qwen-7B-Sparse-P25
This model is a fine-tuned version of [ZMC2019/OpenR1-Qwen-7B-Sparse](https://huggingface.co/ZMC2019/OpenR1-Qwen-7B-Sparse) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZMC2019/OpenR1-Qwen-7B-Sparse-P25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenzhuoming911/huggingface/runs/ykot45x0)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF | Gustav0-Freind | "2024-07-03T07:25:53" | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-27b",
"base_model:quantized:google/gemma-2-27b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-03T07:24:59" | ---
base_model: google/gemma-2-27b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF
This model was converted to GGUF format from [`google/gemma-2-27b`](https://huggingface.co/google/gemma-2-27b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-27b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF --hf-file gemma-2-27b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF --hf-file gemma-2-27b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF --hf-file gemma-2-27b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Gustav0-Freind/gemma-2-27b-Q5_K_S-GGUF --hf-file gemma-2-27b-q5_k_s.gguf -c 2048
```
|
TheBloke/Metis-0.5-GPTQ | TheBloke | "2023-12-29T15:05:21" | 12 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"custom_code",
"base_model:Mihaiii/Metis-0.5",
"base_model:quantized:Mihaiii/Metis-0.5",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-12-29T14:32:58" | ---
base_model: Mihaiii/Metis-0.5
inference: false
license: apache-2.0
metrics:
- accuracy
model_creator: Mihai
model_name: Metis 0.5
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Metis 0.5 - GPTQ
- Model creator: [Mihai](https://huggingface.co/Mihaiii)
- Original model: [Metis 0.5](https://huggingface.co/Mihaiii/Metis-0.5)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Mihai's Metis 0.5](https://huggingface.co/Mihaiii/Metis-0.5).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Metis-0.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Metis-0.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Metis-0.5-GGUF)
* [Mihai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mihaiii/Metis-0.5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Metis-0.5-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Metis-0.5-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Metis-0.5-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Metis-0.5-GPTQ`:
```shell
mkdir Metis-0.5-GPTQ
huggingface-cli download TheBloke/Metis-0.5-GPTQ --local-dir Metis-0.5-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Metis-0.5-GPTQ
huggingface-cli download TheBloke/Metis-0.5-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Metis-0.5-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Metis-0.5-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Metis-0.5-GPTQ --local-dir Metis-0.5-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Metis-0.5-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Metis-0.5-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Metis-0.5-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Metis-0.5-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Metis-0.5-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Metis-0.5-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mihai's Metis 0.5
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
An instruct-based fine-tune of [migtissera/Tess-XS-v1-3-yarn-128K](https://huggingface.co/migtissera/Tess-XS-v1-3-yarn-128K).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
rubra-ai/Mistral-7B-Instruct-v0.2-GGUF | rubra-ai | "2024-07-04T08:55:34" | 9 | 2 | null | [
"gguf",
"function-calling",
"tool-calling",
"agentic",
"rubra",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T05:23:40" | ---
license: apache-2.0
model-index:
- name: Rubra-Mistral-7B-Instruct-v0.2
results:
- task:
type: text-generation
dataset:
type: MMLU
name: MMLU
metrics:
- type: 5-shot
value: 58.9
verified: false
- task:
type: text-generation
dataset:
type: GPQA
name: GPQA
metrics:
- type: 0-shot
value: 29.91
verified: false
- task:
type: text-generation
dataset:
type: GSM-8K
name: GSM-8K
metrics:
- type: 8-shot, CoT
value: 34.12
verified: false
- task:
type: text-generation
dataset:
type: MATH
name: MATH
metrics:
- type: 4-shot, CoT
value: 8.36
verified: false
- task:
type: text-generation
dataset:
type: MT-bench
name: MT-bench
metrics:
- type: GPT-4 as Judge
value: 7.36
verified: false
tags:
- function-calling
- tool-calling
- agentic
- rubra
- conversational
language:
- en
---
# Rubra Mistral 7B Instruct v0.2 GGUF
Original model: [rubra-ai/Mistral-7B-Instruct-v0.2](https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2)
## Model description
The model is the result of further post-training [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). This model is designed for high performance in various instruction-following tasks and complex interactions, including multi-turn function calling and detailed conversations.
## Training Data
The model underwent additional training on a proprietary dataset encompassing diverse instruction-following, chat, and function calling data. This post-training process enhances the model's ability to integrate tools and manage complex interaction scenarios effectively.
## How to use
Refer to https://docs.rubra.ai/inference/llamacpp for usage. Feel free to ask/open issues up in our Github repo: https://github.com/rubra-ai/rubra
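As a rough illustration of the llama.cpp route those docs cover (the GGUF filename below is a placeholder; pick a real `.gguf` file from this repository's file list):

```bash
# The filename is a placeholder; replace it with an actual .gguf file from this repo.
llama-cli \
  --hf-repo rubra-ai/Mistral-7B-Instruct-v0.2-GGUF \
  --hf-file <choose-a-quant>.gguf \
  -p "Summarize what function calling is in one sentence."
```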
## Limitations and Bias
While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.
## Ethical Considerations
Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.
## Acknowledgements
We would like to thank Mistral for the model.
## Contact Information
For questions or comments about the model, please reach out to [the rubra team](mailto:[email protected]).
## Citation
If you use this work, please cite it as:
```
@misc {rubra_ai_2024,
author = { Sanjay Nadhavajhala and Yingbei Tong },
title = { Mistral-7B-Instruct-v0.2 },
year = 2024,
url = { https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2 },
doi = { 10.57967/hf/2686 },
publisher = { Hugging Face }
}
``` |
moot20/SmolVLM-256M-Instruct-MLX-8bits | moot20 | "2025-02-19T15:06:48" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"idefics3",
"image-text-to-text",
"mlx",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-256M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-02-03T16:21:37" | ---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolVLM-256M-Instruct
base_model_relation: quantized
tags:
- mlx
---
# moot20/SmolVLM-256M-Instruct-MLX-8bits
This model was converted to MLX format from [`HuggingFaceTB/SmolVLM-256M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) using mlx-vlm version **0.1.12**.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model moot20/SmolVLM-256M-Instruct-MLX-8bits --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
jmc2432/stock-report-gguf | jmc2432 | "2024-06-18T07:22:16" | 2 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:quantized:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T07:17:33" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** jmc2432
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Helsinki-NLP/opus-mt-yo-en | Helsinki-NLP | "2023-08-16T12:09:00" | 4,265 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-en
* source languages: yo
* target languages: en
* OPUS readme: [yo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.eval.txt)
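A minimal translation sketch with `transformers` (the example sentence is illustrative):

```python
# A minimal Yoruba-to-English translation sketch; the input sentence is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-yo-en")
print(translator("Bawo ni?")[0]["translation_text"])
```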
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.en | 33.8 | 0.496 |
|
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-10hrs-v4 | asr-africa | "2024-10-26T03:04:46" | 29 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-25T10:01:34" | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-CV_Fleurs-lg-10hrs-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-CV_Fleurs-lg-10hrs-v4
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8790
- Wer: 0.3341
- Cer: 0.0706
## Model description
More information needed
## Intended uses & limitations
More information needed
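A rough transcription sketch (the audio path is a placeholder; 16 kHz mono audio is the usual expectation for this architecture):

```python
# A minimal transcription sketch; the audio file path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/w2v-bert-2.0-CV_Fleurs-lg-10hrs-v4",
)

print(asr("path/to/luganda_clip.wav")["text"])
```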
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.4106 | 1.0 | 513 | 0.5293 | 0.5666 | 0.1264 |
| 0.4775 | 2.0 | 1026 | 0.4882 | 0.5251 | 0.1118 |
| 0.3696 | 3.0 | 1539 | 0.3973 | 0.4565 | 0.0998 |
| 0.3007 | 4.0 | 2052 | 0.3896 | 0.4311 | 0.0932 |
| 0.2564 | 5.0 | 2565 | 0.3743 | 0.4372 | 0.0966 |
| 0.2273 | 6.0 | 3078 | 0.3549 | 0.4184 | 0.0880 |
| 0.203 | 7.0 | 3591 | 0.3836 | 0.4001 | 0.0847 |
| 0.1775 | 8.0 | 4104 | 0.3694 | 0.4132 | 0.0855 |
| 0.159 | 9.0 | 4617 | 0.3487 | 0.4137 | 0.0890 |
| 0.1444 | 10.0 | 5130 | 0.3705 | 0.4277 | 0.0892 |
| 0.1262 | 11.0 | 5643 | 0.3620 | 0.3982 | 0.0828 |
| 0.1139 | 12.0 | 6156 | 0.3666 | 0.4055 | 0.0842 |
| 0.1015 | 13.0 | 6669 | 0.3656 | 0.4011 | 0.0836 |
| 0.0914 | 14.0 | 7182 | 0.3573 | 0.3928 | 0.0834 |
| 0.0836 | 15.0 | 7695 | 0.3798 | 0.3795 | 0.0790 |
| 0.0729 | 16.0 | 8208 | 0.4289 | 0.3896 | 0.0822 |
| 0.0652 | 17.0 | 8721 | 0.4778 | 0.4157 | 0.0874 |
| 0.0599 | 18.0 | 9234 | 0.4570 | 0.3796 | 0.0792 |
| 0.0504 | 19.0 | 9747 | 0.4444 | 0.4125 | 0.0856 |
| 0.0498 | 20.0 | 10260 | 0.4612 | 0.3945 | 0.0842 |
| 0.0434 | 21.0 | 10773 | 0.4881 | 0.4002 | 0.0843 |
| 0.0386 | 22.0 | 11286 | 0.5099 | 0.3777 | 0.0804 |
| 0.0381 | 23.0 | 11799 | 0.4904 | 0.3866 | 0.0824 |
| 0.0344 | 24.0 | 12312 | 0.4622 | 0.4028 | 0.0834 |
| 0.0302 | 25.0 | 12825 | 0.4986 | 0.3918 | 0.0820 |
| 0.0268 | 26.0 | 13338 | 0.5162 | 0.3954 | 0.0812 |
| 0.0272 | 27.0 | 13851 | 0.4748 | 0.3774 | 0.0791 |
| 0.0235 | 28.0 | 14364 | 0.4718 | 0.3823 | 0.0785 |
| 0.0219 | 29.0 | 14877 | 0.5318 | 0.3738 | 0.0797 |
| 0.0205 | 30.0 | 15390 | 0.5196 | 0.3769 | 0.0783 |
| 0.0203 | 31.0 | 15903 | 0.5203 | 0.3741 | 0.0788 |
| 0.019 | 32.0 | 16416 | 0.5031 | 0.3858 | 0.0809 |
| 0.0179 | 33.0 | 16929 | 0.5772 | 0.3745 | 0.0810 |
| 0.0175 | 34.0 | 17442 | 0.4906 | 0.3676 | 0.0763 |
| 0.0166 | 35.0 | 17955 | 0.5371 | 0.3694 | 0.0786 |
| 0.0138 | 36.0 | 18468 | 0.5748 | 0.3744 | 0.0788 |
| 0.0134 | 37.0 | 18981 | 0.5343 | 0.3697 | 0.0778 |
| 0.0135 | 38.0 | 19494 | 0.5407 | 0.3839 | 0.0804 |
| 0.0123 | 39.0 | 20007 | 0.5343 | 0.3661 | 0.0767 |
| 0.0124 | 40.0 | 20520 | 0.5633 | 0.3801 | 0.0817 |
| 0.0131 | 41.0 | 21033 | 0.5581 | 0.3633 | 0.0774 |
| 0.0094 | 42.0 | 21546 | 0.5862 | 0.3684 | 0.0789 |
| 0.0101 | 43.0 | 22059 | 0.5479 | 0.3646 | 0.0761 |
| 0.0094 | 44.0 | 22572 | 0.5738 | 0.3621 | 0.0761 |
| 0.0078 | 45.0 | 23085 | 0.5284 | 0.3782 | 0.0777 |
| 0.0074 | 46.0 | 23598 | 0.6277 | 0.3725 | 0.0790 |
| 0.01 | 47.0 | 24111 | 0.5826 | 0.3686 | 0.0765 |
| 0.0088 | 48.0 | 24624 | 0.5601 | 0.3660 | 0.0761 |
| 0.0083 | 49.0 | 25137 | 0.5410 | 0.3606 | 0.0769 |
| 0.0074 | 50.0 | 25650 | 0.5592 | 0.3613 | 0.0780 |
| 0.007 | 51.0 | 26163 | 0.5891 | 0.3690 | 0.0779 |
| 0.0067 | 52.0 | 26676 | 0.5807 | 0.3662 | 0.0779 |
| 0.0067 | 53.0 | 27189 | 0.5851 | 0.3640 | 0.0773 |
| 0.0065 | 54.0 | 27702 | 0.5989 | 0.3667 | 0.0767 |
| 0.005 | 55.0 | 28215 | 0.5746 | 0.3757 | 0.0785 |
| 0.0071 | 56.0 | 28728 | 0.5823 | 0.3610 | 0.0757 |
| 0.005 | 57.0 | 29241 | 0.6048 | 0.3562 | 0.0758 |
| 0.0046 | 58.0 | 29754 | 0.6254 | 0.3561 | 0.0753 |
| 0.0055 | 59.0 | 30267 | 0.6036 | 0.3533 | 0.0755 |
| 0.004 | 60.0 | 30780 | 0.5876 | 0.3605 | 0.0758 |
| 0.0042 | 61.0 | 31293 | 0.5782 | 0.3643 | 0.0776 |
| 0.0034 | 62.0 | 31806 | 0.6118 | 0.3656 | 0.0748 |
| 0.004 | 63.0 | 32319 | 0.5830 | 0.3650 | 0.0756 |
| 0.0049 | 64.0 | 32832 | 0.5946 | 0.3579 | 0.0755 |
| 0.0034 | 65.0 | 33345 | 0.5856 | 0.3482 | 0.0725 |
| 0.0023 | 66.0 | 33858 | 0.6186 | 0.3513 | 0.0739 |
| 0.0019 | 67.0 | 34371 | 0.5910 | 0.3664 | 0.0760 |
| 0.002 | 68.0 | 34884 | 0.6762 | 0.3597 | 0.0764 |
| 0.0027 | 69.0 | 35397 | 0.6270 | 0.3503 | 0.0723 |
| 0.0026 | 70.0 | 35910 | 0.6596 | 0.3551 | 0.0728 |
| 0.0024 | 71.0 | 36423 | 0.6216 | 0.3563 | 0.0744 |
| 0.0021 | 72.0 | 36936 | 0.6250 | 0.3483 | 0.0728 |
| 0.0018 | 73.0 | 37449 | 0.5963 | 0.3524 | 0.0735 |
| 0.0028 | 74.0 | 37962 | 0.6323 | 0.3541 | 0.0745 |
| 0.0019 | 75.0 | 38475 | 0.6252 | 0.3459 | 0.0735 |
| 0.0021 | 76.0 | 38988 | 0.6399 | 0.3500 | 0.0734 |
| 0.0013 | 77.0 | 39501 | 0.6548 | 0.3499 | 0.0734 |
| 0.001 | 78.0 | 40014 | 0.6746 | 0.3503 | 0.0743 |
| 0.0011 | 79.0 | 40527 | 0.6395 | 0.3533 | 0.0739 |
| 0.0007 | 80.0 | 41040 | 0.6779 | 0.3478 | 0.0732 |
| 0.0007 | 81.0 | 41553 | 0.6806 | 0.3463 | 0.0724 |
| 0.0009 | 82.0 | 42066 | 0.7214 | 0.3453 | 0.0726 |
| 0.0017 | 83.0 | 42579 | 0.6250 | 0.3445 | 0.0721 |
| 0.0009 | 84.0 | 43092 | 0.6632 | 0.3443 | 0.0717 |
| 0.0005 | 85.0 | 43605 | 0.6850 | 0.3397 | 0.0709 |
| 0.0002 | 86.0 | 44118 | 0.7168 | 0.3417 | 0.0717 |
| 0.0001 | 87.0 | 44631 | 0.7629 | 0.3432 | 0.0720 |
| 0.0002 | 88.0 | 45144 | 0.7349 | 0.3385 | 0.0718 |
| 0.0003 | 89.0 | 45657 | 0.7347 | 0.3377 | 0.0715 |
| 0.0002 | 90.0 | 46170 | 0.7449 | 0.3433 | 0.0720 |
| 0.0001 | 91.0 | 46683 | 0.7630 | 0.3353 | 0.0708 |
| 0.0 | 92.0 | 47196 | 0.7952 | 0.3339 | 0.0705 |
| 0.0 | 93.0 | 47709 | 0.8144 | 0.3341 | 0.0705 |
| 0.0 | 94.0 | 48222 | 0.8309 | 0.3346 | 0.0707 |
| 0.0 | 95.0 | 48735 | 0.8456 | 0.3350 | 0.0708 |
| 0.0 | 96.0 | 49248 | 0.8585 | 0.3346 | 0.0706 |
| 0.0 | 97.0 | 49761 | 0.8673 | 0.3346 | 0.0706 |
| 0.0 | 98.0 | 50274 | 0.8743 | 0.3343 | 0.0707 |
| 0.0 | 99.0 | 50787 | 0.8773 | 0.3341 | 0.0706 |
| 0.0 | 100.0 | 51300 | 0.8790 | 0.3341 | 0.0706 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.1.0+cu118
- Datasets 3.0.2
- Tokenizers 0.20.1
|
artianand/disability_status_adapter_deberta_v3_large_batch_8 | artianand | "2025-02-03T20:53:55" | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"deberta-v2",
"region:us"
] | null | "2025-02-03T20:53:48" | ---
tags:
- deberta-v2
- adapter-transformers
---
# Adapter `artianand/disability_status_adapter_deberta_v3_large_batch_8` for microsoft/deberta-v3-large
An [adapter](https://adapterhub.ml) for the `microsoft/deberta-v3-large` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-large")
adapter_name = model.load_adapter("artianand/disability_status_adapter_deberta_v3_large_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
End of preview.