| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-fi-niu | Helsinki-NLP | "2023-08-16T11:35:09Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"niu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-niu
* source languages: fi
* target languages: niu
* OPUS readme: [fi-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.eval.txt)
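For quick experimentation, here is a minimal inference sketch with 🤗 Transformers, assuming the standard MarianMT API (the Finnish example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-niu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Finnish sentence into Niuean.
batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```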
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.niu | 35.3 | 0.565 |
|
HigherMind/PARENTING-Q3_K_L-GGUF | HigherMind | "2025-01-26T16:15:33Z" | 116 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:HigherMind/PARENTING",
"base_model:quantized:HigherMind/PARENTING",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-26T16:15:04Z" | ---
base_model: HigherMind/PARENTING
tags:
- llama-cpp
- gguf-my-repo
---
# HigherMind/PARENTING-Q3_K_L-GGUF
This model was converted to GGUF format from [`HigherMind/PARENTING`](https://huggingface.co/HigherMind/PARENTING) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HigherMind/PARENTING) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo HigherMind/PARENTING-Q3_K_L-GGUF --hf-file parenting-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo HigherMind/PARENTING-Q3_K_L-GGUF --hf-file parenting-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo HigherMind/PARENTING-Q3_K_L-GGUF --hf-file parenting-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo HigherMind/PARENTING-Q3_K_L-GGUF --hf-file parenting-q3_k_l.gguf -c 2048
```
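The checkpoint can also be loaded from Python via the llama-cpp-python bindings; a minimal sketch, assuming `pip install llama-cpp-python` and its `Llama.from_pretrained` helper:

```python
from llama_cpp import Llama

# Pulls the GGUF file from the Hub and loads it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="HigherMind/PARENTING-Q3_K_L-GGUF",
    filename="parenting-q3_k_l.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```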
|
Litzy619/V0402MP2 | Litzy619 | "2024-04-03T02:48:35Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2024-04-03T00:59:31Z" | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0402MP2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0402MP2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
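For reference, the hyperparameters above map roughly onto the following 🤗 `TrainingArguments` sketch (the `output_dir` is a placeholder and the mapping is an assumption, not the authors' training script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0402MP2",                     # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,            # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=1,
    fp16=True,                                 # "Native AMP" mixed precision
)
```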
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5867 | 0.09 | 10 | 2.4416 |
| 2.2588 | 0.18 | 20 | 1.9249 |
| 1.7114 | 0.27 | 30 | 1.3589 |
| 1.2427 | 0.36 | 40 | 0.9778 |
| 0.8962 | 0.45 | 50 | 0.6311 |
| 0.5757 | 0.54 | 60 | 0.3253 |
| 0.3476 | 0.63 | 70 | 0.2216 |
| 0.2674 | 0.73 | 80 | 0.1883 |
| 0.2391 | 0.82 | 90 | 0.1766 |
| 0.2301 | 0.91 | 100 | 0.1724 |
| 0.2267 | 1.0 | 110 | 0.1715 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
speechbrain/asr-crdnn-commonvoice-14-rw | speechbrain | "2024-02-26T00:04:43Z" | 3 | 0 | speechbrain | [
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"rw",
"dataset:common_voice",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | "2023-08-09T21:22:26Z" | ---
language:
- rw
thumbnail: null
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: apache-2.0
datasets:
- common_voice
metrics:
- name: Test WER
type: wer
value: ' 29.22'
---
# CRDNN with CTC/Attention trained on CommonVoice 14.0 Kinyarwanda (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test CER | Test WER | GPUs |
|:-------------:|:--------------:|:--------------:| :--------:|
| 15.08.23 | 10.80 | 29.22 | 1xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (rw).
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalization and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Kinyarwanda)
```python
from speechbrain.inference.ASR import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-14-rw", savedir="pretrained_models/asr-crdnn-commonvoice-14-rw")
asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-14-rw/example_rw.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
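As a rough illustration, a sketch of batched transcription, assuming SpeechBrain's `transcribe_batch` signature and reusing `asr_model` from the snippet above (the tensors below are dummy signals, not real speech):

```python
import torch

wavs = torch.rand(2, 16000)          # two 1-second dummy signals at 16 kHz
wav_lens = torch.tensor([1.0, 0.8])  # relative length of each signal in the batch
predictions, _ = asr_model.transcribe_batch(wavs, wav_lens)
print(predictions)
```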
### Training
The model was trained with SpeechBrain (986a2175).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_rw.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://www.dropbox.com/sh/i1fv4f8miilqgii/AAB3gE97kmFDA0ISkIDSUW_La?dl=0)
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
Procit004/NER | Procit004 | "2024-09-10T03:38:39Z" | 6 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | null | "2024-09-10T03:38:21Z" | ---
base_model: bert-base-cased
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0571
- Precision: 0.9540
- Recall: 0.9620
- F1: 0.9580
- Accuracy: 0.9812
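Absent an official snippet, a minimal inference sketch with the 🤗 `pipeline` API (the entity label set of this checkpoint is not documented, so the example sentence is illustrative):

```python
from transformers import pipeline

# aggregation_strategy="simple" groups subword tokens into whole entities.
ner = pipeline("token-classification", model="Procit004/NER", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```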
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0698 | 1.0 | 4031 | 0.0589 | 0.9537 | 0.9611 | 0.9574 | 0.9804 |
| 0.045 | 2.0 | 8062 | 0.0571 | 0.9540 | 0.9620 | 0.9580 | 0.9812 |
| 0.0289 | 3.0 | 12093 | 0.0633 | 0.9612 | 0.9597 | 0.9604 | 0.9819 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
asjoberg/Qwen2-0.5B-Instruct-predli-finetuned-fused16f-simplified-default | asjoberg | "2025-02-10T20:46:42Z" | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:asjoberg/Qwen2-0.5B-Instruct-predli",
"base_model:quantized:asjoberg/Qwen2-0.5B-Instruct-predli",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | "2025-02-10T20:45:47Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
- mlx
base_model: asjoberg/Qwen2-0.5B-Instruct-predli
---
# asjoberg/Qwen2-0.5B-Instruct-predli-finetuned-fused16f-simplified-default
The Model [asjoberg/Qwen2-0.5B-Instruct-predli-finetuned-fused16f-simplified-default](https://huggingface.co/asjoberg/Qwen2-0.5B-Instruct-predli-finetuned-fused16f-simplified-default) was
converted to MLX format from [asjoberg/Qwen2-0.5B-Instruct-predli](https://huggingface.co/asjoberg/Qwen2-0.5B-Instruct-predli)
using mlx-lm version **0.21.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("asjoberg/Qwen2-0.5B-Instruct-predli-finetuned-fused16f-simplified-default")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Gausar/roberta-base-ner-demo | Gausar | "2024-04-21T12:25:53Z" | 88 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"base_model:bayartsogt/mongolian-roberta-base",
"base_model:finetune:bayartsogt/mongolian-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-16T16:06:04Z" | ---
language:
- mn
base_model: bayartsogt/mongolian-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1205
- Precision: 0.9307
- Recall: 0.9389
- F1: 0.9348
- Accuracy: 0.9816
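A minimal sketch of direct inference with the standard token-classification API (the Mongolian example phrase and the use of `id2label` are illustrative assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Gausar/roberta-base-ner-demo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Улаанбаатар хотод болсон", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
print([model.config.id2label[i] for i in pred_ids])
```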
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3889 | 1.0 | 477 | 0.0832 | 0.8808 | 0.8987 | 0.8897 | 0.9743 |
| 0.0736 | 2.0 | 954 | 0.0703 | 0.9170 | 0.9226 | 0.9198 | 0.9796 |
| 0.0361 | 3.0 | 1431 | 0.0784 | 0.9227 | 0.9321 | 0.9274 | 0.9801 |
| 0.0216 | 4.0 | 1908 | 0.0863 | 0.9235 | 0.9328 | 0.9281 | 0.9801 |
| 0.0116 | 5.0 | 2385 | 0.0977 | 0.9292 | 0.9371 | 0.9332 | 0.9809 |
| 0.007 | 6.0 | 2862 | 0.1071 | 0.9270 | 0.9356 | 0.9313 | 0.9808 |
| 0.0046 | 7.0 | 3339 | 0.1123 | 0.9322 | 0.9378 | 0.9350 | 0.9818 |
| 0.0029 | 8.0 | 3816 | 0.1179 | 0.9310 | 0.9371 | 0.9340 | 0.9814 |
| 0.0021 | 9.0 | 4293 | 0.1187 | 0.9293 | 0.9375 | 0.9334 | 0.9812 |
| 0.0013 | 10.0 | 4770 | 0.1205 | 0.9307 | 0.9389 | 0.9348 | 0.9816 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
MaxP/vit-base-riego | MaxP | "2023-06-05T23:00:08Z" | 35 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-12-30T19:09:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: vit-base-riego
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: MaxP--agro_riego
split: test
args: MaxP--agro_riego
metrics:
- name: F1
type: f1
value: 0.37288135593220334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-riego
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2998
- F1: 0.3729
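A minimal inference sketch with the 🤗 `pipeline` API (`"field.jpg"` is a placeholder path; the class labels come from the `imagefolder` dataset and are not documented here):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="MaxP/vit-base-riego")
print(classifier("field.jpg"))  # placeholder image path
```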
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1696 | 0.79 | 100 | 1.1385 | 0.352 |
| 0.08 | 1.59 | 200 | 0.9071 | 0.3774 |
| 0.0928 | 2.38 | 300 | 1.1181 | 0.3454 |
| 0.0189 | 3.17 | 400 | 0.8262 | 0.3425 |
| 0.0728 | 3.97 | 500 | 0.9647 | 0.3747 |
| 0.0756 | 4.76 | 600 | 0.6097 | 0.4776 |
| 0.0018 | 5.56 | 700 | 1.3900 | 0.3652 |
| 0.002 | 6.35 | 800 | 0.7498 | 0.4606 |
| 0.0304 | 7.14 | 900 | 1.4367 | 0.3666 |
| 0.0024 | 7.94 | 1000 | 1.5714 | 0.3041 |
| 0.0463 | 8.73 | 1100 | 0.8038 | 0.4016 |
| 0.0014 | 9.52 | 1200 | 0.7175 | 0.4795 |
| 0.0015 | 10.32 | 1300 | 1.0347 | 0.3959 |
| 0.0009 | 11.11 | 1400 | 1.3881 | 0.3670 |
| 0.0131 | 11.9 | 1500 | 1.0780 | 0.4044 |
| 0.0007 | 12.7 | 1600 | 0.9834 | 0.4255 |
| 0.0011 | 13.49 | 1700 | 1.0753 | 0.4033 |
| 0.0007 | 14.29 | 1800 | 1.1514 | 0.3989 |
| 0.0007 | 15.08 | 1900 | 1.2373 | 0.3769 |
| 0.0007 | 15.87 | 2000 | 1.2998 | 0.3729 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
OpenFinAL/FINGPT_QA_V4 | OpenFinAL | "2025-03-07T16:02:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-07T16:01:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
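In the absence of an official snippet, a minimal text-generation sketch (the prompt and generation settings are illustrative, not from the authors):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="OpenFinAL/FINGPT_QA_V4")
print(generator("What is a dividend?", max_new_tokens=50)[0]["generated_text"])
```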
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF | mradermacher | "2024-10-31T12:53:09Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/Gemma-The-Writer-J.GutenBerg-10B",
"base_model:quantized:DavidAU/Gemma-The-Writer-J.GutenBerg-10B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-30T14:52:50Z" | ---
base_model: DavidAU/Gemma-The-Writer-J.GutenBerg-10B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Gemma-The-Writer-J.GutenBerg-10B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q2_K.gguf) | Q2_K | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q3_K_M.gguf) | Q3_K_M | 5.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q3_K_L.gguf) | Q3_K_L | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.IQ4_XS.gguf) | IQ4_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q4_K_S.gguf) | Q4_K_S | 6.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q4_K_M.gguf) | Q4_K_M | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q5_K_S.gguf) | Q5_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q5_K_M.gguf) | Q5_K_M | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q6_K.gguf) | Q6_K | 8.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.Q8_0.gguf) | Q8_0 | 10.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF/resolve/main/Gemma-The-Writer-J.GutenBerg-10B.f16.gguf) | f16 | 20.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-1b-deduped-int8-step2000-GPTQ-wikitext2 | Xu-Ouyang | "2024-08-22T07:20:13Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2024-08-22T07:19:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
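In the absence of an official snippet, a minimal loading sketch for this GPTQ checkpoint (assumes GPTQ support is installed, e.g. `optimum`/`auto-gptq`, plus `accelerate` for `device_map="auto"`; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-1b-deduped-int8-step2000-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights are dequantized on the fly by the GPTQ backend.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```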
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hdve/Qwen-Qwen1.5-1.8B-1718162486 | hdve | "2024-06-12T03:23:40Z" | 136 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-12T03:21:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
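In the absence of an official snippet, a minimal chat-style sketch, assuming the tokenizer ships a Qwen chat template (the message content is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hdve/Qwen-Qwen1.5-1.8B-1718162486"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt from the chat template, then generate.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```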
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF | clop51 | "2024-06-08T15:42:27Z" | 3 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm",
"base_model:quantized:clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-08T15:42:13Z" | ---
tags:
- llama-cpp
- gguf-my-repo
base_model: clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm
---
# clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF
This model was converted to GGUF format from [`clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm`](https://huggingface.co/clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF --hf-file merlinite-7b-lab-q4_k_m-lurn_slurm-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF --hf-file merlinite-7b-lab-q4_k_m-lurn_slurm-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF --hf-file merlinite-7b-lab-q4_k_m-lurn_slurm-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo clop51/merlinite-7b-lab-Q4_K_M-lurn_slurm-Q4_K_M-GGUF --hf-file merlinite-7b-lab-q4_k_m-lurn_slurm-q4_k_m.gguf -c 2048
```
|
abaddon182/73eb1365-ef6a-4fa0-ab73-c48e33c5ca77 | abaddon182 | "2025-02-03T04:46:55Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-02-03T04:39:43Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73eb1365-ef6a-4fa0-ab73-c48e33c5ca77
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bcb88f097f9f17ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bcb88f097f9f17ee_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/73eb1365-ef6a-4fa0-ab73-c48e33c5ca77
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/bcb88f097f9f17ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6ef0cd25-bc41-4d69-bb5f-2f9babf6004e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6ef0cd25-bc41-4d69-bb5f-2f9babf6004e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 73eb1365-ef6a-4fa0-ab73-c48e33c5ca77
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0626
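Since this repository contains a LoRA adapter, a minimal loading sketch with PEFT (assuming the standard adapter-loading flow, not the authors' script):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.2-1B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the trained LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "abaddon182/73eb1365-ef6a-4fa0-ab73-c48e33c5ca77")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```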
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6322 | 0.0115 | 1 | 2.8054 |
| 1.3356 | 0.5731 | 50 | 1.3403 |
| 0.9628 | 1.1461 | 100 | 1.1620 |
| 0.7653 | 1.7192 | 150 | 1.0846 |
| 1.0221 | 2.2923 | 200 | 1.0626 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tsfeith/rl_course_vizdoom_health_gathering_supreme | tsfeith | "2024-03-01T16:14:41Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-01T16:14:34Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.08 +/- 5.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tsfeith/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
Keithulu/distilgpt2-finetuned-python-stack-clean-answers-e200 | Keithulu | "2023-06-26T19:32:53Z" | 169 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-26T19:02:42Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-python-stack-clean-answers-e200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-python-stack-clean-answers-e200
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 28 | 3.2510 |
| No log | 2.0 | 56 | 3.1681 |
| No log | 3.0 | 84 | 3.0891 |
| No log | 4.0 | 112 | 3.0233 |
| No log | 5.0 | 140 | 2.9563 |
| No log | 6.0 | 168 | 2.8967 |
| No log | 7.0 | 196 | 2.8380 |
| No log | 8.0 | 224 | 2.7777 |
| No log | 9.0 | 252 | 2.7218 |
| No log | 10.0 | 280 | 2.6671 |
| No log | 11.0 | 308 | 2.6158 |
| No log | 12.0 | 336 | 2.5594 |
| No log | 13.0 | 364 | 2.5105 |
| No log | 14.0 | 392 | 2.4551 |
| No log | 15.0 | 420 | 2.4029 |
| No log | 16.0 | 448 | 2.3500 |
| No log | 17.0 | 476 | 2.2973 |
| 3.016 | 18.0 | 504 | 2.2479 |
| 3.016 | 19.0 | 532 | 2.1940 |
| 3.016 | 20.0 | 560 | 2.1436 |
| 3.016 | 21.0 | 588 | 2.0926 |
| 3.016 | 22.0 | 616 | 2.0419 |
| 3.016 | 23.0 | 644 | 1.9912 |
| 3.016 | 24.0 | 672 | 1.9435 |
| 3.016 | 25.0 | 700 | 1.8982 |
| 3.016 | 26.0 | 728 | 1.8483 |
| 3.016 | 27.0 | 756 | 1.7974 |
| 3.016 | 28.0 | 784 | 1.7525 |
| 3.016 | 29.0 | 812 | 1.7082 |
| 3.016 | 30.0 | 840 | 1.6610 |
| 3.016 | 31.0 | 868 | 1.6108 |
| 3.016 | 32.0 | 896 | 1.5655 |
| 3.016 | 33.0 | 924 | 1.5193 |
| 3.016 | 34.0 | 952 | 1.4757 |
| 3.016 | 35.0 | 980 | 1.4342 |
| 2.2411 | 36.0 | 1008 | 1.3863 |
| 2.2411 | 37.0 | 1036 | 1.3433 |
| 2.2411 | 38.0 | 1064 | 1.3095 |
| 2.2411 | 39.0 | 1092 | 1.2757 |
| 2.2411 | 40.0 | 1120 | 1.2278 |
| 2.2411 | 41.0 | 1148 | 1.1887 |
| 2.2411 | 42.0 | 1176 | 1.1481 |
| 2.2411 | 43.0 | 1204 | 1.1193 |
| 2.2411 | 44.0 | 1232 | 1.0711 |
| 2.2411 | 45.0 | 1260 | 1.0332 |
| 2.2411 | 46.0 | 1288 | 1.0062 |
| 2.2411 | 47.0 | 1316 | 0.9696 |
| 2.2411 | 48.0 | 1344 | 0.9358 |
| 2.2411 | 49.0 | 1372 | 0.9109 |
| 2.2411 | 50.0 | 1400 | 0.8690 |
| 2.2411 | 51.0 | 1428 | 0.8420 |
| 2.2411 | 52.0 | 1456 | 0.8111 |
| 2.2411 | 53.0 | 1484 | 0.7848 |
| 1.5799 | 54.0 | 1512 | 0.7596 |
| 1.5799 | 55.0 | 1540 | 0.7361 |
| 1.5799 | 56.0 | 1568 | 0.7081 |
| 1.5799 | 57.0 | 1596 | 0.6818 |
| 1.5799 | 58.0 | 1624 | 0.6601 |
| 1.5799 | 59.0 | 1652 | 0.6351 |
| 1.5799 | 60.0 | 1680 | 0.6145 |
| 1.5799 | 61.0 | 1708 | 0.5926 |
| 1.5799 | 62.0 | 1736 | 0.5711 |
| 1.5799 | 63.0 | 1764 | 0.5492 |
| 1.5799 | 64.0 | 1792 | 0.5251 |
| 1.5799 | 65.0 | 1820 | 0.5114 |
| 1.5799 | 66.0 | 1848 | 0.4946 |
| 1.5799 | 67.0 | 1876 | 0.4758 |
| 1.5799 | 68.0 | 1904 | 0.4628 |
| 1.5799 | 69.0 | 1932 | 0.4435 |
| 1.5799 | 70.0 | 1960 | 0.4325 |
| 1.5799 | 71.0 | 1988 | 0.4168 |
| 1.0863 | 72.0 | 2016 | 0.4025 |
| 1.0863 | 73.0 | 2044 | 0.3904 |
| 1.0863 | 74.0 | 2072 | 0.3731 |
| 1.0863 | 75.0 | 2100 | 0.3606 |
| 1.0863 | 76.0 | 2128 | 0.3451 |
| 1.0863 | 77.0 | 2156 | 0.3387 |
| 1.0863 | 78.0 | 2184 | 0.3277 |
| 1.0863 | 79.0 | 2212 | 0.3160 |
| 1.0863 | 80.0 | 2240 | 0.3108 |
| 1.0863 | 81.0 | 2268 | 0.2980 |
| 1.0863 | 82.0 | 2296 | 0.2897 |
| 1.0863 | 83.0 | 2324 | 0.2814 |
| 1.0863 | 84.0 | 2352 | 0.2715 |
| 1.0863 | 85.0 | 2380 | 0.2607 |
| 1.0863 | 86.0 | 2408 | 0.2521 |
| 1.0863 | 87.0 | 2436 | 0.2482 |
| 1.0863 | 88.0 | 2464 | 0.2386 |
| 1.0863 | 89.0 | 2492 | 0.2347 |
| 0.7543 | 90.0 | 2520 | 0.2231 |
| 0.7543 | 91.0 | 2548 | 0.2205 |
| 0.7543 | 92.0 | 2576 | 0.2135 |
| 0.7543 | 93.0 | 2604 | 0.2081 |
| 0.7543 | 94.0 | 2632 | 0.2018 |
| 0.7543 | 95.0 | 2660 | 0.1956 |
| 0.7543 | 96.0 | 2688 | 0.1910 |
| 0.7543 | 97.0 | 2716 | 0.1855 |
| 0.7543 | 98.0 | 2744 | 0.1806 |
| 0.7543 | 99.0 | 2772 | 0.1768 |
| 0.7543 | 100.0 | 2800 | 0.1715 |
| 0.7543 | 101.0 | 2828 | 0.1687 |
| 0.7543 | 102.0 | 2856 | 0.1649 |
| 0.7543 | 103.0 | 2884 | 0.1629 |
| 0.7543 | 104.0 | 2912 | 0.1570 |
| 0.7543 | 105.0 | 2940 | 0.1563 |
| 0.7543 | 106.0 | 2968 | 0.1502 |
| 0.7543 | 107.0 | 2996 | 0.1486 |
| 0.5478 | 108.0 | 3024 | 0.1443 |
| 0.5478 | 109.0 | 3052 | 0.1408 |
| 0.5478 | 110.0 | 3080 | 0.1389 |
| 0.5478 | 111.0 | 3108 | 0.1366 |
| 0.5478 | 112.0 | 3136 | 0.1338 |
| 0.5478 | 113.0 | 3164 | 0.1304 |
| 0.5478 | 114.0 | 3192 | 0.1290 |
| 0.5478 | 115.0 | 3220 | 0.1264 |
| 0.5478 | 116.0 | 3248 | 0.1234 |
| 0.5478 | 117.0 | 3276 | 0.1212 |
| 0.5478 | 118.0 | 3304 | 0.1197 |
| 0.5478 | 119.0 | 3332 | 0.1185 |
| 0.5478 | 120.0 | 3360 | 0.1159 |
| 0.5478 | 121.0 | 3388 | 0.1130 |
| 0.5478 | 122.0 | 3416 | 0.1125 |
| 0.5478 | 123.0 | 3444 | 0.1106 |
| 0.5478 | 124.0 | 3472 | 0.1087 |
| 0.4258 | 125.0 | 3500 | 0.1077 |
| 0.4258 | 126.0 | 3528 | 0.1068 |
| 0.4258 | 127.0 | 3556 | 0.1048 |
| 0.4258 | 128.0 | 3584 | 0.1039 |
| 0.4258 | 129.0 | 3612 | 0.1022 |
| 0.4258 | 130.0 | 3640 | 0.1002 |
| 0.4258 | 131.0 | 3668 | 0.0987 |
| 0.4258 | 132.0 | 3696 | 0.0980 |
| 0.4258 | 133.0 | 3724 | 0.0973 |
| 0.4258 | 134.0 | 3752 | 0.0955 |
| 0.4258 | 135.0 | 3780 | 0.0951 |
| 0.4258 | 136.0 | 3808 | 0.0937 |
| 0.4258 | 137.0 | 3836 | 0.0932 |
| 0.4258 | 138.0 | 3864 | 0.0920 |
| 0.4258 | 139.0 | 3892 | 0.0908 |
| 0.4258 | 140.0 | 3920 | 0.0903 |
| 0.4258 | 141.0 | 3948 | 0.0889 |
| 0.4258 | 142.0 | 3976 | 0.0883 |
| 0.3496 | 143.0 | 4004 | 0.0879 |
| 0.3496 | 144.0 | 4032 | 0.0872 |
| 0.3496 | 145.0 | 4060 | 0.0865 |
| 0.3496 | 146.0 | 4088 | 0.0852 |
| 0.3496 | 147.0 | 4116 | 0.0849 |
| 0.3496 | 148.0 | 4144 | 0.0843 |
| 0.3496 | 149.0 | 4172 | 0.0836 |
| 0.3496 | 150.0 | 4200 | 0.0832 |
| 0.3496 | 151.0 | 4228 | 0.0822 |
| 0.3496 | 152.0 | 4256 | 0.0817 |
| 0.3496 | 153.0 | 4284 | 0.0813 |
| 0.3496 | 154.0 | 4312 | 0.0805 |
| 0.3496 | 155.0 | 4340 | 0.0799 |
| 0.3496 | 156.0 | 4368 | 0.0796 |
| 0.3496 | 157.0 | 4396 | 0.0789 |
| 0.3496 | 158.0 | 4424 | 0.0784 |
| 0.3496 | 159.0 | 4452 | 0.0781 |
| 0.3496 | 160.0 | 4480 | 0.0777 |
| 0.3045 | 161.0 | 4508 | 0.0776 |
| 0.3045 | 162.0 | 4536 | 0.0771 |
| 0.3045 | 163.0 | 4564 | 0.0762 |
| 0.3045 | 164.0 | 4592 | 0.0762 |
| 0.3045 | 165.0 | 4620 | 0.0763 |
| 0.3045 | 166.0 | 4648 | 0.0758 |
| 0.3045 | 167.0 | 4676 | 0.0754 |
| 0.3045 | 168.0 | 4704 | 0.0750 |
| 0.3045 | 169.0 | 4732 | 0.0748 |
| 0.3045 | 170.0 | 4760 | 0.0746 |
| 0.3045 | 171.0 | 4788 | 0.0742 |
| 0.3045 | 172.0 | 4816 | 0.0740 |
| 0.3045 | 173.0 | 4844 | 0.0735 |
| 0.3045 | 174.0 | 4872 | 0.0735 |
| 0.3045 | 175.0 | 4900 | 0.0732 |
| 0.3045 | 176.0 | 4928 | 0.0728 |
| 0.3045 | 177.0 | 4956 | 0.0724 |
| 0.3045 | 178.0 | 4984 | 0.0723 |
| 0.2786 | 179.0 | 5012 | 0.0721 |
| 0.2786 | 180.0 | 5040 | 0.0719 |
| 0.2786 | 181.0 | 5068 | 0.0717 |
| 0.2786 | 182.0 | 5096 | 0.0715 |
| 0.2786 | 183.0 | 5124 | 0.0714 |
| 0.2786 | 184.0 | 5152 | 0.0713 |
| 0.2786 | 185.0 | 5180 | 0.0712 |
| 0.2786 | 186.0 | 5208 | 0.0710 |
| 0.2786 | 187.0 | 5236 | 0.0707 |
| 0.2786 | 188.0 | 5264 | 0.0705 |
| 0.2786 | 189.0 | 5292 | 0.0704 |
| 0.2786 | 190.0 | 5320 | 0.0704 |
| 0.2786 | 191.0 | 5348 | 0.0704 |
| 0.2786 | 192.0 | 5376 | 0.0702 |
| 0.2786 | 193.0 | 5404 | 0.0703 |
| 0.2786 | 194.0 | 5432 | 0.0702 |
| 0.2786 | 195.0 | 5460 | 0.0702 |
| 0.2786 | 196.0 | 5488 | 0.0701 |
| 0.2633 | 197.0 | 5516 | 0.0701 |
| 0.2633 | 198.0 | 5544 | 0.0701 |
| 0.2633 | 199.0 | 5572 | 0.0700 |
| 0.2633 | 200.0 | 5600 | 0.0700 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
innat/videoswin | innat | "2024-07-06T20:00:42Z" | 0 | 2 | tf-keras | [
"tf-keras",
"videoswin",
"vision",
"video-classification",
"arxiv:2106.13230",
"arxiv:2103.14030",
"license:mit",
"region:us"
] | video-classification | "2023-10-14T13:09:51Z" | ---
library_name: tf-keras
license: mit
metrics:
- accuracy
pipeline_tag: video-classification
tags:
- videoswin
- vision
---
# [Video Swin Transformer : VideoSwin](https://github.com/innat/VideoSwin)

| Paper | Colab | HF Space | HF Hub |
| :--: | :--: | :---: | :---: |
| [](https://arxiv.org/abs/2106.13230) | [](https://colab.research.google.com/drive/1Q7A700MEI10UomikqjQJANWyFZktJCT-?usp=sharing) | [](https://huggingface.co/spaces/innat/VideoSwin) | [](https://huggingface.co/innat/videoswin) |
VideoSwin is a pure transformer-based video modeling algorithm that attained top accuracy on major video recognition benchmarks. In this work, the authors advocate an inductive bias of locality in video transformers, which leads to a better speed-accuracy trade-off compared to previous approaches that compute self-attention globally, even with spatial-temporal factorization. The locality of the proposed video architecture is realized by adapting the [**Swin Transformer**](https://arxiv.org/abs/2103.14030) designed for the image domain, while continuing to leverage the power of pre-trained image models.
- GitHub: https://github.com/innat/VideoSwin
This is an unofficial `Keras` implementation of [Video Swin transformers](https://arxiv.org/abs/2106.13230). The official `PyTorch` implementation is [here](https://github.com/SwinTransformer/Video-Swin-Transformer), based on [mmaction2](https://github.com/open-mmlab/mmaction2).
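As a minimal sketch of loading this Keras port for inference (hypothetical usage — the repo id comes from this card, but the loading helper and the input layout of 32 frames at 224×224 RGB are assumptions; check the GitHub README for the supported API):

```python
# Untested sketch: requires tensorflow and huggingface_hub.
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the Keras checkpoint hosted in this repo (assumption: standard
# Keras serialization is used, as suggested by the tf-keras library tag).
model = from_pretrained_keras("innat/videoswin")

# Dummy clip with layout (batch, frames, height, width, channels).
video = np.random.rand(1, 32, 224, 224, 3).astype("float32")
logits = model(video)  # class scores, e.g. Kinetics-400 labels
print(logits.shape)
```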
## Model Zoo
The 3D swin-video checkpoints are listed in [`MODEL_ZOO.md`](https://github.com/innat/VideoSwin/blob/main/MODEL_ZOO.md). The following are some highlights.
### Kinetics 400
In the training phase, the video swin models are initialized with the pretrained weights of image swin models. In the tables below, `IN` refers to **ImageNet**.
| Backbone | Pretrain | Top-1 | Top-5 | #params | FLOPs | config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | IN-1K | 78.8 | 93.6 | 28M | ? | [swin-t](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_tiny_patch244_window877_kinetics400_1k.py) |
| Swin-S | IN-1K | 80.6 | 94.5 | 50M | ? | [swin-s](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_small_patch244_window877_kinetics400_1k.py) |
| Swin-B | IN-1K | 80.6 | 94.6 | 88M | ? | [swin-b](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_base_patch244_window877_kinetics400_1k.py) |
| Swin-B | IN-22K | 82.7 | 95.5 | 88M | ? | [swin-b](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_base_patch244_window877_kinetics400_22k.py) |
### Kinetics 600
| Backbone | Pretrain | Top-1 | Top-5 | #params | FLOPs | config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-B | IN-22K | 84.0 | 96.5 | 88M | ? | [swin-b](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_base_patch244_window877_kinetics600_22k.py) |
### Something-Something V2
| Backbone | Pretrain | Top-1 | Top-5 | #params | FLOPs | config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-B | Kinetics 400 | 69.6 | 92.7 | 89M | ? | [swin-b](https://github.com/SwinTransformer/Video-Swin-Transformer/blob/master/configs/recognition/swin/swin_base_patch244_window1677_sthv2.py) |
|
EE0/kogpt2-base-v2-5-finetuned-klue-ner | EE0 | "2023-05-07T12:46:53Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-05-06T10:39:38Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-5-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.5144974226804124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-5-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4425
- F1: 0.5145
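A minimal inference sketch (untested assumption — the `token-classification` pipeline should pick up the fine-tuned head, though KLUE NER tags may need post-processing):

```python
from transformers import pipeline

# Hypothetical usage of this checkpoint for Korean NER.
ner = pipeline(
    "token-classification",
    model="EE0/kogpt2-base-v2-5-finetuned-klue-ner",
)
print(ner("이순신은 조선 중기의 무신이다."))  # example Korean sentence
```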
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6215 | 1.0 | 876 | 0.5607 | 0.3318 |
| 0.4067 | 2.0 | 1752 | 0.5554 | 0.3609 |
| 0.3128 | 3.0 | 2628 | 0.4259 | 0.4569 |
| 0.2409 | 4.0 | 3504 | 0.4314 | 0.4894 |
| 0.1874 | 5.0 | 4380 | 0.4425 | 0.5145 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
exdysa/mir | exdysa | "2025-04-04T04:44:01Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-10-30T01:53:01Z" | ---
language:
- en
---
Massive thank you to [@silveroxides](https://huggingface.co/silveroxides) for the phenomenal work collecting pristine state dicts and related information
#
> [!IMPORTANT]
> # MIR (Machine Intelligence Resource)
MIR is a naming standard, a proposed schema for AIGC/ML work.<br>
In its current incarnation, it looks like this:
> [!NOTE]
> # mir : model . transformer . clip-l : stable-diffusion-xl
```
uri : model . lora . hyper : flux-1
↑ ↑ ↑ ↑ ↑
mir:[domain].[architecture].[implementation]:[compatibility]
```
The scheme is offered as a remedy for the fragmentation of model-spec standards across development houses (such as models released independently of, or indifferently to, HF.CO) and as a way to archive metadata that would otherwise remain incomplete.
This work was inspired by the CivitAi [AIR-URN](https://github.com/civitai/civitai/wiki/AIR-%E2%80%90-Uniform-Resource-Names-for-AI) project<br>
and by the super-resolution registry code from the [Spandrel](https://github.com/chaiNNer-org/spandrel/blob/main/libs/spandrel/spandrel/__helpers/registry.py) library.
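For illustration, a minimal parser for the proposed layout might look like the sketch below. The regex and field names are assumptions derived from the examples above, not part of any official implementation:

```python
# Hypothetical MIR parser; the pattern mirrors
# [scheme]:[domain].[architecture].[implementation]:[compatibility]
import re

MIR_PATTERN = re.compile(
    r"^(?P<scheme>mir|uri):"         # scheme prefix
    r"(?P<domain>[\w-]+)\."          # e.g. "model"
    r"(?P<architecture>[\w-]+)\."    # e.g. "transformer", "lora"
    r"(?P<implementation>[\w-]+):"   # e.g. "clip-l", "hyper"
    r"(?P<compatibility>[\w.-]+)$"   # e.g. "stable-diffusion-xl"
)

def parse_mir(identifier: str) -> dict:
    """Split a MIR identifier into its four fields."""
    match = MIR_PATTERN.match(identifier.replace(" ", ""))
    if match is None:
        raise ValueError(f"not a valid MIR identifier: {identifier!r}")
    return match.groupdict()

print(parse_mir("mir:model.transformer.clip-l:stable-diffusion-xl"))
# {'scheme': 'mir', 'domain': 'model', 'architecture': 'transformer',
#  'implementation': 'clip-l', 'compatibility': 'stable-diffusion-xl'}
```

A registry keyed on these four fields would reduce the compatibility checks and lookups described in the goals below to simple dictionary operations.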
## Goals
- Standard identification scheme for **ALL** ML-related development
- Simplification of code for model-related logistics
- Rapid retrieval of resources and metadata
- Efficient and reliable compatibility checks
- Organized hyperparameter management
> <details> <summary>Why not use `diffusion`/`sgm`, `ldm`/`text`/hf.co folder-structure/brand-specific trade word/preprint paper/development house/algorithm</summary>
>
> - Exact frameworks (SGM/LDM/RectifiedFlow) includes too few
> - Diffusion/Transformer are too broad, share and overlap resources
> - Multimodal models complicate content terms (Text/Image/Vision/etc)
> - HF.CO names do all of this & become inconsistent across folders/files
> - Development credit often shared (ex RunwayML with Stable Diffusion)
> - Paper heredity would be a neat tree, but it complicates retrieval
> - Algorithms (esp application) are less common knowledge, vague, ~~and I'm too smooth-brain.~~
> - Impartiality
> </details>
> <details><summary>Why `unet`, `dit`, `lora` over alternatives</summary>
>
> - UNET/DiT/Transformer are shared enough to be genre-ish but not too narrowly specific
> - Very similar technical process on this level
> - Functional and efficient for random lookups
> </details>
> <details><summary>Roadmap</summary>
>
> - Decide on `@` (like @8cfg for an indistinguishable 8 step lora that requires cfg)
> -- crucial spec element, or an optional, MIR app-determined feature?
> - Proof of concept generative model registry
> - Ensure compatibility/integration/cross-pollination with [OECD AI Classifications](https://oecd.ai/en/classification)
> - Ensure compatibility/integration/cross-pollination with [NIST AI 200-1 NIST Trustworthy and Responsible AI](https://www.nist.gov/publications/ai-use-taxonomy-human-centered-approach)
> </details>
|
DOFOFFICIAL/NathUI-Tutorial | DOFOFFICIAL | "2025-04-08T06:15:49Z" | 1 | 10 | null | [
"safetensors",
"torch",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | null | "2025-02-17T00:49:25Z" | ---
license: apache-2.0
language:
- en
- zh
tags:
- torch
---
* Latest Update 2025-02-16
* 最新更新时间 2025-02-16
# 1. Introduction 介绍
### This is a public open-source repository that contains FREE LLM Application Courses lectured in Chinese
### 这是一个公共开源存储库,其中包含以中文讲授的免费的大语言模型应用课程
### This repository and its contents may be updated weekly, depending on the author's free time and project progress
### 本仓库和内容可能会以周的频率更新,取决于作者的闲暇时间和工程进度
### All of the source code is OPEN SOURCED on GitHub and models that were trained may be included into this repository
### 所有源代码在 GitHub 上都是开源的,经过训练的模型可能会包含在此存储库中
### This repository bundles the author's `integration package`. Please refer to the shared Notion document for the password to install it; this site only serves as a mirror
### 本仓库集成了UP的整合包。安装整合包的密码请参考Notion共享文档,本站仅供分流
# 2. Resources 配套资源
### See GitHub https://github.com/dof-studio/NathUI
### 参考 GitHub https://github.com/dof-studio/NathUI
### See Bilibili https://space.bilibili.com/303266889
### 参考 Bilibili https://space.bilibili.com/303266889
### See open notebook https://truthful-busby-322.notion.site/NathMath-LLM-18e45165050a80408586c3f2bf93ce68?pvs=73
### 参考公开的笔记本 https://truthful-busby-322.notion.site/NathMath-LLM-18e45165050a80408586c3f2bf93ce68?pvs=73
# 3. Terms of Acknowledge 使用条款
* Starting today, I am opening this Hugging Face open-source repository. All teaching videos and the models involved will be uploaded to it for permanent storage.
* From today on, I permit any re-uploads and derivative works based on my teaching content, as long as the creator does not distort my words and spreads the techniques I teach faithfully.
* I advocate the popularization of knowledge and free access to it. All content is open-sourced under the Apache 2.0 License, but products based on my knowledge and code come with no warranty or guaranteed technical support.
* Welcome to star my repository
* 本人从今天开启Hugging Face开源仓库,所有的教学视频和涉及到的模型会同步上架开源Hugging Face仓库永久保存。
* 本人从今天起将允许任何盗版视频和基于本人教学内容的二次创作,只要创作者不歪曲我的话语正常传播我教学的技术。
* 本人倡导知识普及和知识不付费,所有的内容以Apache 2.0 许可证开源,但是基于本人知识和代码的所有产品不提供保修和确保的技术服务。
* 欢迎您为我的仓库点亮星星
`DOF Studio (2016 - 2025)/NathMath` |
TechxGenus/DeepSeek-V2-Lite-Chat-AWQ | TechxGenus | "2024-07-04T12:50:15Z" | 50 | 2 | transformers | [
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2405.04434",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-04T10:18:39Z" | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="#2-model-downloads">Model Download</a> |
<a href="#3-evaluation-results">Evaluation Results</a> |
<a href="#4-model-architecture">Model Architecture</a> |
<a href="#6-api-platform">API Platform</a> |
<a href="#8-license">License</a> |
<a href="#9-citation">Citation</a>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2405.04434"><b>Paper Link</b>👁️</a>
</p>
AWQ-quantized version of the DeepSeek-V2-Lite-Chat model.
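As a rough sketch, the quantized checkpoint should load through 🤗 Transformers like other AWQ models (assumptions: `autoawq` is installed and the quantization config is embedded in the repo; the dtype and device settings below are illustrative):

```python
# Untested loading sketch for the AWQ checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/DeepSeek-V2-Lite-Chat-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,     # DeepSeek-V2 ships custom modeling code
    torch_dtype=torch.float16,
    device_map="auto",
)
```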
---
# DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
## 1. Introduction
Last week, the release and buzz around DeepSeek-V2 have ignited widespread interest in MLA (Multi-head Latent Attention)! Many in the community suggested open-sourcing a smaller MoE model for in-depth research. And now DeepSeek-V2-Lite comes out:
- 16B total params, 2.4B active params, scratch training with 5.7T tokens
- Outperforms 7B dense and 16B MoE on many English & Chinese benchmarks
- Deployable on single 40G GPU, fine-tunable on 8x80G GPUs
DeepSeek-V2 is a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference by significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at an economical cost through sparse computation.
## 2. News
- 2024.05.16: We released the DeepSeek-V2-Lite.
- 2024.05.06: We released the DeepSeek-V2.
## 3. Model Downloads
With DeepSeek-V2, we are open-sourcing base and chat models across two sizes:
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V2-Lite | 16B | 2.4B | 32k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) |
| DeepSeek-V2-Lite-Chat (SFT) | 16B | 2.4B | 32k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat) |
| DeepSeek-V2 | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V2) |
| DeepSeek-V2-Chat (RL) | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat) |
</div>
Due to the constraints of HuggingFace, the open-source code currently runs slower on GPUs than our internal codebase. To facilitate efficient execution of the model, we offer a dedicated vLLM solution that optimizes performance.
## 4. Evaluation Results
### Base Model
#### Standard Benchmark
<div align="center">
| **Benchmark** | **Domain** | **DeepSeek 7B (Dense)** | **DeepSeekMoE 16B** | **DeepSeek-V2-Lite (MoE-16B)** |
|:-------------:|:----------:|:--------------:|:-----------------:|:--------------------------:|
| **Architecture** | - | MHA+Dense | MHA+MoE | MLA+MoE |
| **MMLU** | English | 48.2 | 45.0 | 58.3 |
| **BBH** | English | 39.5 | 38.9 | 44.1 |
| **C-Eval** | Chinese | 45.0 | 40.6 | 60.3 |
| **CMMLU** | Chinese | 47.2 | 42.5 | 64.3 |
| **HumanEval** | Code | 26.2 | 26.8 | 29.9 |
| **MBPP** | Code | 39.0 | 39.2 | 43.2 |
| **GSM8K** | Math | 17.4 | 18.8 | 41.1 |
| **Math** | Math | 3.3 | 4.3 | 17.1 |
</div>
For more evaluation details, such as few-shot settings and prompts, please check our paper.
### Chat Model
#### Standard Benchmark
<div align="center">
| Benchmark | Domain | DeepSeek 7B Chat (SFT) | DeepSeekMoE 16B Chat (SFT) | DeepSeek-V2-Lite 16B Chat (SFT) |
|:-----------:|:----------------:|:------------------:|:---------------:|:---------------------:|
| **MMLU** | English | 49.7 | 47.2 | 55.7 |
| **BBH** | English | 43.1 | 42.2 | 48.1 |
| **C-Eval** | Chinese | 44.7 | 40.0 | 60.1 |
| **CMMLU** | Chinese | 51.2 | 49.3 | 62.5 |
| **HumanEval** | Code | 45.1 | 45.7 | 57.3 |
| **MBPP** | Code | 39.0 | 46.2 | 45.8 |
| **GSM8K** | Math | 62.6 | 62.2 | 72.0 |
| **Math** | Math | 14.7 | 15.2 | 27.9 |
</div>
## 5. Model Architecture
DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference:
- For attention, we design MLA (Multi-head Latent Attention), which utilizes low-rank key-value union compression to eliminate the bottleneck of inference-time key-value cache, thus supporting efficient inference (a rough schematic follows this list).
- For Feed-Forward Networks (FFNs), we adopt DeepSeekMoE architecture, a high-performance MoE architecture that enables training stronger models at lower costs.
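As a rough schematic of the MLA compression referenced in the first bullet (our paraphrase of the paper's notation; the decoupled rotary-position components are omitted): each token representation $h_t$ is projected down to a small latent that is cached, and keys and values are reconstructed from it,

$$
c_t^{KV} = W^{DKV} h_t, \qquad k_t^{C} = W^{UK} c_t^{KV}, \qquad v_t^{C} = W^{UV} c_t^{KV},
$$

where $\dim(c_t^{KV}) = d_c$ is much smaller than the full per-head key-value size $d_h n_h$, so only $c_t^{KV}$ needs to be kept in the inference-time cache.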
<p align="center">
<img width="90%" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/architecture.png?raw=true" />
</p>
DeepSeek-V2-Lite has 27 layers and a hidden dimension of 2048. It also employs MLA and has 16 attention heads, where each head has a dimension of 128. Its KV compression dimension is 512, but, slightly differently from DeepSeek-V2, it does not compress the queries. For the decoupled queries and keys, it has a per-head dimension of 64. DeepSeek-V2-Lite also employs DeepSeekMoE, and all FFNs except for the first layer are replaced with MoE layers. Each MoE layer consists of 2 shared experts and 64 routed experts, where the intermediate hidden dimension of each expert is 1408. Among the routed experts, 6 experts are activated for each token. Under this configuration, DeepSeek-V2-Lite comprises 15.7B total parameters, of which 2.4B are activated for each token.
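Collected in one place, the Lite configuration described above is roughly the following (values transcribed from this paragraph; the field names are illustrative, not the actual `config.json` keys):

```python
# Summary of the architecture paragraph above, for quick reference.
deepseek_v2_lite_arch = {
    "num_layers": 27,
    "hidden_dim": 2048,
    "attention_heads": 16,         # MLA heads, each of dimension 128
    "kv_compression_dim": 512,     # queries are NOT compressed in Lite
    "decoupled_qk_head_dim": 64,   # per-head dim of decoupled query/key
    "moe": {
        "shared_experts": 2,
        "routed_experts": 64,
        "activated_experts_per_token": 6,
        "expert_hidden_dim": 1408,
        "dense_ffn_layers": 1,     # only the first FFN layer stays dense
    },
    "total_params": "15.7B",
    "activated_params_per_token": "2.4B",
}
```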
## 6. Training Details
DeepSeek-V2-Lite is also trained from scratch on the same pre-training corpus as DeepSeek-V2, which is not polluted by any SFT data. It uses the AdamW optimizer with hyper-parameters set to $\beta_1=0.9$, $\beta_2=0.95$, and $\mathrm{weight\_decay}=0.1$. The learning rate is scheduled using a warmup-and-step-decay strategy. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is multiplied by 0.316 after training about 80% of the tokens, and again by 0.316 after training about 90% of the tokens. The maximum learning rate is set to $4.2 \times 10^{-4}$, and the gradient clipping norm is set to 1.0. We do not employ the batch size scheduling strategy for it, and it is trained with a constant batch size of 4608 sequences. During pre-training, we set the maximum sequence length to 4K, and train DeepSeek-V2-Lite on 5.7T tokens. We leverage pipeline parallelism to deploy different layers of it on different devices, but for each layer, all experts are deployed on the same device. Therefore, we only employ a small expert-level balance loss with $\alpha_{1}=0.001$, and do not employ device-level balance loss and communication balance loss for it. After pre-training, we also perform long-context extension and SFT for DeepSeek-V2-Lite, obtaining a chat model called DeepSeek-V2-Lite-Chat.
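The learning-rate schedule reads as warmup-then-step-decay; a small sketch under stated assumptions (the step counts, decay factor, and decay points are taken from the paragraph above, while mapping token fractions onto step fractions via a `total_steps` argument is our simplification):

```python
def lr_at_step(step: int, total_steps: int, max_lr: float = 4.2e-4) -> float:
    """Warmup-and-step-decay schedule as described above."""
    if step < 2000:                  # linear warmup over the first 2K steps
        return max_lr * step / 2000
    lr = max_lr
    if step >= 0.8 * total_steps:    # after ~80% of training tokens
        lr *= 0.316
    if step >= 0.9 * total_steps:    # after ~90% of training tokens
        lr *= 0.316
    return lr
```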
## 7. How to run locally
**To utilize DeepSeek-V2-Lite in BF16 format for inference, 40GB*1 GPU is required.**
### Inference with Huggingface's Transformers
You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.
#### Text Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/DeepSeek-V2-Lite"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
#### Chat Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/DeepSeek-V2-Lite-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
messages = [
{"role": "user", "content": "Write a piece of quicksort code in C++"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository.
An example of the chat template is as follows:
```bash
<|begin▁of▁sentence|>User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
You can also add an optional system message:
```bash
<|begin▁of▁sentence|>{system_message}
User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
### Inference with vLLM (recommended)
To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-V2-Lite-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you?"}],
[{"role": "user", "content": "Translate the following content into Chinese directly: DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference."}],
[{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
### LangChain Support
Since our API is compatible with OpenAI, you can easily use it in [langchain](https://www.langchain.com/).
Here is an example:
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model='deepseek-chat',
openai_api_key=<your-deepseek-api-key>,
openai_api_base='https://api.deepseek.com/v1',
temperature=0.85,
max_tokens=8000)
```
## 8. License
This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V2 Base/Chat models is subject to [the Model License](LICENSE-MODEL). DeepSeek-V2 series (including Base and Chat) supports commercial use.
## 9. Citation
```
@misc{deepseekv2,
title={DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model},
author={DeepSeek-AI},
year={2024},
eprint={2405.04434},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## 10. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
linh5nb/Llama-2-7b-chat-luat-hon-nhan-1-Q4_K_M-GGUF | linh5nb | "2024-05-22T10:00:14Z" | 2 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null | "2024-05-22T10:00:03Z" | ---
tags:
- llama-cpp
- gguf-my-repo
---
# linh5nb/Llama-2-7b-chat-luat-hon-nhan-1-Q4_K_M-GGUF
This model was converted to GGUF format from [`linh5nb/Llama-2-7b-chat-luat-hon-nhan-1`](https://huggingface.co/linh5nb/Llama-2-7b-chat-luat-hon-nhan-1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/linh5nb/Llama-2-7b-chat-luat-hon-nhan-1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo linh5nb/Llama-2-7b-chat-luat-hon-nhan-1-Q4_K_M-GGUF --model llama-2-7b-chat-luat-hon-nhan-1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo linh5nb/Llama-2-7b-chat-luat-hon-nhan-1-Q4_K_M-GGUF --model llama-2-7b-chat-luat-hon-nhan-1.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-2-7b-chat-luat-hon-nhan-1.Q4_K_M.gguf -n 128
```
|
qgallouedec/tqc-PandaPush-v1-2045464771 | qgallouedec | "2023-02-27T16:00:31Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-27T15:59:32Z" | ---
library_name: stable-baselines3
tags:
- PandaPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPush-v1
type: PandaPush-v1
metrics:
- type: mean_reward
value: -10.80 +/- 12.54
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaPush-v1**
This is a trained model of a **TQC** agent playing **PandaPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env PandaPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env PandaPush-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env PandaPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env PandaPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env PandaPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env PandaPush-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
# Environment Arguments
```python
{'render': True}
```
|
lesso09/730efed3-2a9e-4cce-be20-4154e1688185 | lesso09 | "2025-01-14T00:57:54Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T00:53:13Z" | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 730efed3-2a9e-4cce-be20-4154e1688185
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: true
chat_template: llama3
datasets:
- data_files:
- 1fcb0786201ac631_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fcb0786201ac631_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/730efed3-2a9e-4cce-be20-4154e1688185
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/1fcb0786201ac631_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1988ed86-b714-427b-ab63-39d2c964de43
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1988ed86-b714-427b-ab63-39d2c964de43
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 730efed3-2a9e-4cce-be20-4154e1688185
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1084
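Since this repository holds a LoRA adapter rather than full weights, it presumably has to be attached to the base model via PEFT, roughly as follows (untested sketch):

```python
# Hypothetical adapter loading; assumes enough memory for the 8B base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "lesso09/730efed3-2a9e-4cce-be20-4154e1688185")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```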
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2926 | 0.0020 | 1 | 1.3662 |
| 1.2258 | 0.0100 | 5 | 1.3460 |
| 1.135 | 0.0201 | 10 | 1.2448 |
| 1.3224 | 0.0301 | 15 | 1.1625 |
| 1.1253 | 0.0402 | 20 | 1.1188 |
| 0.9534 | 0.0502 | 25 | 1.1084 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VIRAL-Video-intimo-de-MC-Mirella-e-marido/Video.intimo.vazado.da.MC.Mirella.e.Dynho.Alves.vaza.na.internet | VIRAL-Video-intimo-de-MC-Mirella-e-marido | "2025-04-13T04:18:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-13T04:17:52Z" | |
espnet/amuse_soundstream16k | espnet | "2024-06-20T07:02:39Z" | 6 | 0 | espnet | [
"espnet",
"audio",
"codec",
"multilingual",
"dataset:amuse",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-19T16:54:50Z" | ---
tags:
- espnet
- audio
- codec
language: multilingual
datasets:
- amuse
license: cc-by-4.0
---
## ESPnet2 Codec model
### `espnet/amuse_soundstream16k`
This model was trained by ftshijt using the amuse recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5201685018b0e8fb9826bc51a710623140a06627
pip install -e .
cd egs2/amuse/codec1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/amuse_soundstream16k
```
## Codec config
<details><summary>expand</summary>
```
config: conf/train_soundstream4_fs16000.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: chunk
valid_iterator_type: null
output_dir: exp_16k/codec_train_soundstream4_fs16000_raw_fs16000
ngpu: 1
seed: 777
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 36365
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
use_tf32: true
collect_stats: false
write_collected_feats: false
max_epoch: 120
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- mel_loss
- min
- - train
- mel_loss
- min
- - train
- total_count
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_adapter: false
adapter: lora
save_strategy: all
adapter_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 5000
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp_16k/codec_stats_raw/train/audio_shape
valid_shape_file:
- exp_16k/codec_stats_raw/valid/audio_shape
batch_type: unsorted
valid_batch_type: null
fold_length:
- 256000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 128
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump_16k/raw/train/wav.scp
- audio
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump_16k/raw/dev-small/wav.scp
- audio
- kaldi_ark
multi_task_dataset: false
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0002
betas:
- 0.5
- 0.9
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adam
optim2_conf:
lr: 0.0002
betas:
- 0.5
- 0.9
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: true
skip_discriminator_prob: 0.0
model_conf: {}
use_preprocessor: true
codec: soundstream
codec_conf:
sampling_rate: 16000
generator_params:
hidden_dim: 512
encdec_channels: 1
encdec_n_filters: 32
encdec_n_residual_layers: 3
encdec_ratios:
- 8
- 5
- 4
- 2
encdec_activation: ELU
encdec_activation_params:
alpha: 1.0
encdec_norm: weight_norm
encdec_kernel_size: 7
encdec_residual_kernel_size: 7
encdec_last_kernel_size: 7
encdec_dilation_base: 2
encdec_causal: false
encdec_pad_mode: reflect
encdec_true_skip: false
encdec_compress: 2
encdec_lstm: 2
decoder_trim_right_ratio: 1.0
decoder_final_activation: null
decoder_final_activation_params: null
quantizer_n_q: 32
quantizer_bins: 1024
quantizer_decay: 0.99
quantizer_kmeans_init: true
quantizer_kmeans_iters: 50
quantizer_threshold_ema_dead_code: 2
quantizer_target_bandwidth:
- 2
- 4
- 8
- 16
- 32
sample_rate: 16000
discriminator_params:
scales: 3
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
scale_follow_official_norm: false
complexstft_discriminator_params:
in_channels: 1
channels: 32
strides:
- - 1
- 2
- - 2
- 2
- - 1
- 2
- - 2
- 2
- - 1
- 2
- - 2
- 2
chan_mults:
- 1
- 2
- 4
- 4
- 8
- 8
n_fft: 1024
hop_length: 256
win_length: 1024
stft_normalized: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
use_feat_match_loss: true
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
use_mel_loss: true
mel_loss_params:
range_start: 6
range_end: 11
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
fs: 16000
lambda_quantization: 0.0
lambda_commit: 1.0
lambda_reconstruct: 1.0
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
cache_generator_outputs: true
required:
- output_dir
version: '202402'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Ruqiya/Fine-Tuning-Gemma-2b-it-for-Arabic | Ruqiya | "2024-03-28T21:21:38Z" | 40 | 3 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ar",
"en",
"dataset:arbml/CIDAR",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-28T12:55:43Z" | ---
datasets:
- arbml/CIDAR
base_model: google/gemma-2b-it
pipeline_tag: text-generation
language:
- ar
- en
---
# Fine-Tuning-Gemma-2b-it-for-Arabic
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the [arbml/CIDAR](https://huggingface.co/datasets/arbml/CIDAR) Arabic dataset.
It achieves the following results on the evaluation set:
- Training loss: 2.281057505607605
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ruqiya/Fine-Tuning-Gemma-2b-it-for-Arabic"
messages = [{"role": "user", "content": "ما هو الذكاء الاصطناعي؟"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Alice327/trial-model | Alice327 | "2023-09-15T02:03:30Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-15T02:01:52Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: trial-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trial-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0905
- F1: 0.2764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
csukuangfj/sherpa-onnx-zipformer-en-2023-06-26 | csukuangfj | "2023-06-26T04:40:28Z" | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | "2023-06-26T04:33:23Z" | ---
license: apache-2.0
---
The torchscript model is from
https://huggingface.co/Zengwei/icefall-asr-librispeech-zipformer-2023-05-15
The training code is from
https://github.com/k2-fsa/icefall/pull/1058
|
hs4/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_wiry_hawk | hs4 | "2025-04-04T19:28:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rapid wiry hawk",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-04T19:27:25Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_wiry_hawk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rapid wiry hawk
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_wiry_hawk
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hs4/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_wiry_hawk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
krmonline/Qwen2.5_8bit | krmonline | "2025-02-04T09:19:07Z" | 23 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-04T09:17:01Z" | ---
base_model: unsloth/qwen2.5-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** krmonline
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Alphatao/d19897bd-fe07-4911-95c0-b294c0693d1f | Alphatao | "2025-03-15T20:16:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | "2025-03-15T12:58:52Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d19897bd-fe07-4911-95c0-b294c0693d1f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bdd8a35f55f25533_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bdd8a35f55f25533_train_data.json
type:
field_input: original_version
field_instruction: title
field_output: french_version
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/d19897bd-fe07-4911-95c0-b294c0693d1f
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2346
micro_batch_size: 4
mlflow_experiment_name: /tmp/bdd8a35f55f25533_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 117bcf8a-89aa-4e58-88c4-fd9dde22f122
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 117bcf8a-89aa-4e58-88c4-fd9dde22f122
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d19897bd-fe07-4911-95c0-b294c0693d1f
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2346
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0386 | 0.0003 | 1 | 1.1236 |
| 1.1372 | 0.0343 | 100 | 0.9449 |
| 1.1323 | 0.0686 | 200 | 0.9222 |
| 1.0146 | 0.1029 | 300 | 0.9094 |
| 0.7854 | 0.1372 | 400 | 0.8968 |
| 0.9888 | 0.1715 | 500 | 0.8898 |
| 0.9185 | 0.2058 | 600 | 0.8832 |
| 0.9624 | 0.2401 | 700 | 0.8759 |
| 0.8026 | 0.2744 | 800 | 0.8706 |
| 1.2624 | 0.3087 | 900 | 0.8653 |
| 1.0704 | 0.3431 | 1000 | 0.8600 |
| 1.0318 | 0.3774 | 1100 | 0.8556 |
| 0.8575 | 0.4117 | 1200 | 0.8506 |
| 0.7795 | 0.4460 | 1300 | 0.8463 |
| 0.8011 | 0.4803 | 1400 | 0.8424 |
| 0.797 | 0.5146 | 1500 | 0.8391 |
| 1.1496 | 0.5489 | 1600 | 0.8364 |
| 0.8766 | 0.5832 | 1700 | 0.8337 |
| 1.0283 | 0.6175 | 1800 | 0.8313 |
| 0.9297 | 0.6518 | 1900 | 0.8296 |
| 1.0575 | 0.6861 | 2000 | 0.8285 |
| 0.9047 | 0.7204 | 2100 | 0.8278 |
| 0.8398 | 0.7547 | 2200 | 0.8275 |
| 0.68 | 0.7890 | 2300 | 0.8274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
varun-v-rao/bart-large-lora-2.36M-snli-model2 | varun-v-rao | "2024-06-20T00:34:06Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-19T22:16:28Z" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: bart-large-lora-2.36M-snli-model2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.9086567770778297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-lora-2.36M-snli-model2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2622
- Accuracy: 0.9087
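A minimal inference sketch (untested; the label names come from the checkpoint's config, which is not shown here — SNLI conventionally uses entailment/neutral/contradiction):

```python
from transformers import pipeline

# Hypothetical premise/hypothesis classification with this checkpoint.
nli = pipeline("text-classification", model="varun-v-rao/bart-large-lora-2.36M-snli-model2")
print(nli({"text": "A man is playing a guitar.",        # premise
           "text_pair": "A person is making music."}))  # hypothesis
```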
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 70
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3839 | 1.0 | 4292 | 0.2835 | 0.9020 |
| 0.355 | 2.0 | 8584 | 0.2663 | 0.9063 |
| 0.3486 | 3.0 | 12876 | 0.2622 | 0.9087 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MayBashendy/ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k18_task1_organization | MayBashendy | "2024-12-22T15:55:39Z" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-22T15:31:47Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k18_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k18_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5994
- Qwk: 0.7048
- Mse: 0.5994
- Rmse: 0.7742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0222 | 2 | 5.0825 | -0.0095 | 5.0825 | 2.2544 |
| No log | 0.0444 | 4 | 3.0872 | 0.0317 | 3.0872 | 1.7570 |
| No log | 0.0667 | 6 | 1.9096 | 0.0756 | 1.9096 | 1.3819 |
| No log | 0.0889 | 8 | 1.5806 | 0.1199 | 1.5806 | 1.2572 |
| No log | 0.1111 | 10 | 1.4412 | 0.1785 | 1.4412 | 1.2005 |
| No log | 0.1333 | 12 | 1.3465 | 0.1693 | 1.3465 | 1.1604 |
| No log | 0.1556 | 14 | 1.3479 | 0.1205 | 1.3479 | 1.1610 |
| No log | 0.1778 | 16 | 1.5104 | 0.1119 | 1.5104 | 1.2290 |
| No log | 0.2 | 18 | 1.3797 | 0.1865 | 1.3797 | 1.1746 |
| No log | 0.2222 | 20 | 1.2060 | 0.4416 | 1.2060 | 1.0982 |
| No log | 0.2444 | 22 | 1.1649 | 0.4605 | 1.1649 | 1.0793 |
| No log | 0.2667 | 24 | 1.0001 | 0.5269 | 1.0001 | 1.0000 |
| No log | 0.2889 | 26 | 1.0582 | 0.5035 | 1.0582 | 1.0287 |
| No log | 0.3111 | 28 | 1.3394 | 0.4256 | 1.3394 | 1.1573 |
| No log | 0.3333 | 30 | 1.6733 | 0.3730 | 1.6733 | 1.2936 |
| No log | 0.3556 | 32 | 1.4998 | 0.3547 | 1.4998 | 1.2247 |
| No log | 0.3778 | 34 | 1.4125 | 0.3367 | 1.4125 | 1.1885 |
| No log | 0.4 | 36 | 1.4080 | 0.3888 | 1.4080 | 1.1866 |
| No log | 0.4222 | 38 | 1.6978 | 0.3239 | 1.6978 | 1.3030 |
| No log | 0.4444 | 40 | 1.8694 | 0.3060 | 1.8694 | 1.3673 |
| No log | 0.4667 | 42 | 1.8497 | 0.3101 | 1.8497 | 1.3600 |
| No log | 0.4889 | 44 | 1.2548 | 0.4621 | 1.2548 | 1.1202 |
| No log | 0.5111 | 46 | 0.8410 | 0.6287 | 0.8410 | 0.9171 |
| No log | 0.5333 | 48 | 0.7724 | 0.6929 | 0.7724 | 0.8789 |
| No log | 0.5556 | 50 | 0.5707 | 0.7487 | 0.5707 | 0.7555 |
| No log | 0.5778 | 52 | 0.5492 | 0.7593 | 0.5492 | 0.7411 |
| No log | 0.6 | 54 | 0.5597 | 0.7514 | 0.5597 | 0.7481 |
| No log | 0.6222 | 56 | 0.5583 | 0.7443 | 0.5583 | 0.7472 |
| No log | 0.6444 | 58 | 0.8377 | 0.6286 | 0.8377 | 0.9153 |
| No log | 0.6667 | 60 | 0.8690 | 0.6199 | 0.8690 | 0.9322 |
| No log | 0.6889 | 62 | 0.6016 | 0.7243 | 0.6016 | 0.7756 |
| No log | 0.7111 | 64 | 0.5419 | 0.7377 | 0.5419 | 0.7362 |
| No log | 0.7333 | 66 | 0.5486 | 0.7338 | 0.5486 | 0.7407 |
| No log | 0.7556 | 68 | 0.5971 | 0.6755 | 0.5971 | 0.7727 |
| No log | 0.7778 | 70 | 0.5583 | 0.7061 | 0.5583 | 0.7472 |
| No log | 0.8 | 72 | 0.8571 | 0.5849 | 0.8571 | 0.9258 |
| No log | 0.8222 | 74 | 1.0701 | 0.5418 | 1.0701 | 1.0344 |
| No log | 0.8444 | 76 | 1.3919 | 0.4110 | 1.3919 | 1.1798 |
| No log | 0.8667 | 78 | 1.1887 | 0.5069 | 1.1887 | 1.0903 |
| No log | 0.8889 | 80 | 1.1135 | 0.5140 | 1.1135 | 1.0552 |
| No log | 0.9111 | 82 | 0.8609 | 0.6213 | 0.8609 | 0.9278 |
| No log | 0.9333 | 84 | 0.6006 | 0.6808 | 0.6006 | 0.7750 |
| No log | 0.9556 | 86 | 0.6238 | 0.7392 | 0.6238 | 0.7898 |
| No log | 0.9778 | 88 | 0.6461 | 0.6952 | 0.6461 | 0.8038 |
| No log | 1.0 | 90 | 0.6725 | 0.7008 | 0.6725 | 0.8201 |
| No log | 1.0222 | 92 | 0.6628 | 0.7370 | 0.6628 | 0.8141 |
| No log | 1.0444 | 94 | 0.6740 | 0.7460 | 0.6740 | 0.8210 |
| No log | 1.0667 | 96 | 0.7116 | 0.7324 | 0.7116 | 0.8436 |
| No log | 1.0889 | 98 | 0.7080 | 0.7147 | 0.7080 | 0.8414 |
| No log | 1.1111 | 100 | 0.6716 | 0.7063 | 0.6716 | 0.8195 |
| No log | 1.1333 | 102 | 0.7368 | 0.6718 | 0.7368 | 0.8583 |
| No log | 1.1556 | 104 | 0.7394 | 0.6678 | 0.7394 | 0.8599 |
| No log | 1.1778 | 106 | 0.7883 | 0.6358 | 0.7883 | 0.8878 |
| No log | 1.2 | 108 | 0.7145 | 0.6868 | 0.7145 | 0.8453 |
| No log | 1.2222 | 110 | 0.6767 | 0.6950 | 0.6767 | 0.8226 |
| No log | 1.2444 | 112 | 0.6730 | 0.7061 | 0.6730 | 0.8204 |
| No log | 1.2667 | 114 | 0.6829 | 0.7008 | 0.6829 | 0.8264 |
| No log | 1.2889 | 116 | 0.6988 | 0.7022 | 0.6988 | 0.8359 |
| No log | 1.3111 | 118 | 0.7633 | 0.6795 | 0.7633 | 0.8737 |
| No log | 1.3333 | 120 | 0.7470 | 0.6673 | 0.7470 | 0.8643 |
| No log | 1.3556 | 122 | 0.7672 | 0.6739 | 0.7672 | 0.8759 |
| No log | 1.3778 | 124 | 0.8752 | 0.6325 | 0.8752 | 0.9355 |
| No log | 1.4 | 126 | 0.8153 | 0.6775 | 0.8153 | 0.9029 |
| No log | 1.4222 | 128 | 0.7214 | 0.7147 | 0.7214 | 0.8494 |
| No log | 1.4444 | 130 | 0.7732 | 0.6979 | 0.7732 | 0.8793 |
| No log | 1.4667 | 132 | 0.9405 | 0.6434 | 0.9405 | 0.9698 |
| No log | 1.4889 | 134 | 0.8967 | 0.6475 | 0.8967 | 0.9470 |
| No log | 1.5111 | 136 | 0.7911 | 0.7099 | 0.7911 | 0.8894 |
| No log | 1.5333 | 138 | 0.9194 | 0.6382 | 0.9194 | 0.9589 |
| No log | 1.5556 | 140 | 0.8945 | 0.6163 | 0.8945 | 0.9458 |
| No log | 1.5778 | 142 | 0.7361 | 0.6682 | 0.7361 | 0.8579 |
| No log | 1.6 | 144 | 0.7257 | 0.7173 | 0.7257 | 0.8519 |
| No log | 1.6222 | 146 | 0.6867 | 0.7203 | 0.6867 | 0.8287 |
| No log | 1.6444 | 148 | 0.6251 | 0.6965 | 0.6251 | 0.7906 |
| No log | 1.6667 | 150 | 0.7237 | 0.6986 | 0.7237 | 0.8507 |
| No log | 1.6889 | 152 | 0.7949 | 0.6274 | 0.7949 | 0.8916 |
| No log | 1.7111 | 154 | 0.7680 | 0.6694 | 0.7680 | 0.8764 |
| No log | 1.7333 | 156 | 0.5984 | 0.6910 | 0.5984 | 0.7735 |
| No log | 1.7556 | 158 | 0.5821 | 0.7691 | 0.5821 | 0.7629 |
| No log | 1.7778 | 160 | 0.7212 | 0.7395 | 0.7212 | 0.8492 |
| No log | 1.8 | 162 | 0.7545 | 0.7304 | 0.7545 | 0.8686 |
| No log | 1.8222 | 164 | 0.6662 | 0.7534 | 0.6662 | 0.8162 |
| No log | 1.8444 | 166 | 0.6541 | 0.7341 | 0.6541 | 0.8088 |
| No log | 1.8667 | 168 | 0.6429 | 0.7371 | 0.6429 | 0.8018 |
| No log | 1.8889 | 170 | 0.6664 | 0.7513 | 0.6664 | 0.8163 |
| No log | 1.9111 | 172 | 0.8518 | 0.6671 | 0.8518 | 0.9229 |
| No log | 1.9333 | 174 | 0.8150 | 0.6730 | 0.8150 | 0.9028 |
| No log | 1.9556 | 176 | 0.6497 | 0.7256 | 0.6497 | 0.8060 |
| No log | 1.9778 | 178 | 0.6826 | 0.6841 | 0.6826 | 0.8262 |
| No log | 2.0 | 180 | 0.9950 | 0.5841 | 0.9950 | 0.9975 |
| No log | 2.0222 | 182 | 1.0649 | 0.5686 | 1.0649 | 1.0319 |
| No log | 2.0444 | 184 | 0.8662 | 0.6491 | 0.8662 | 0.9307 |
| No log | 2.0667 | 186 | 0.6846 | 0.6781 | 0.6846 | 0.8274 |
| No log | 2.0889 | 188 | 0.6361 | 0.7156 | 0.6361 | 0.7976 |
| No log | 2.1111 | 190 | 0.6218 | 0.7103 | 0.6218 | 0.7885 |
| No log | 2.1333 | 192 | 0.6515 | 0.6874 | 0.6515 | 0.8072 |
| No log | 2.1556 | 194 | 0.6900 | 0.6952 | 0.6900 | 0.8307 |
| No log | 2.1778 | 196 | 0.8106 | 0.6741 | 0.8106 | 0.9003 |
| No log | 2.2 | 198 | 0.8073 | 0.6796 | 0.8073 | 0.8985 |
| No log | 2.2222 | 200 | 0.6518 | 0.6702 | 0.6518 | 0.8073 |
| No log | 2.2444 | 202 | 0.6065 | 0.7267 | 0.6065 | 0.7788 |
| No log | 2.2667 | 204 | 0.6364 | 0.7250 | 0.6364 | 0.7977 |
| No log | 2.2889 | 206 | 0.6312 | 0.7155 | 0.6312 | 0.7945 |
| No log | 2.3111 | 208 | 0.6314 | 0.7196 | 0.6314 | 0.7946 |
| No log | 2.3333 | 210 | 0.6310 | 0.7205 | 0.6310 | 0.7943 |
| No log | 2.3556 | 212 | 0.6661 | 0.7111 | 0.6661 | 0.8162 |
| No log | 2.3778 | 214 | 0.6370 | 0.6897 | 0.6370 | 0.7981 |
| No log | 2.4 | 216 | 0.5932 | 0.7135 | 0.5932 | 0.7702 |
| No log | 2.4222 | 218 | 0.6437 | 0.7341 | 0.6437 | 0.8023 |
| No log | 2.4444 | 220 | 0.6834 | 0.7344 | 0.6834 | 0.8267 |
| No log | 2.4667 | 222 | 0.6177 | 0.7181 | 0.6177 | 0.7859 |
| No log | 2.4889 | 224 | 0.5842 | 0.7122 | 0.5842 | 0.7643 |
| No log | 2.5111 | 226 | 0.7000 | 0.6768 | 0.7000 | 0.8366 |
| No log | 2.5333 | 228 | 0.8265 | 0.6483 | 0.8265 | 0.9091 |
| No log | 2.5556 | 230 | 0.8337 | 0.6530 | 0.8337 | 0.9131 |
| No log | 2.5778 | 232 | 0.6872 | 0.6797 | 0.6872 | 0.8290 |
| No log | 2.6 | 234 | 0.5795 | 0.6817 | 0.5795 | 0.7612 |
| No log | 2.6222 | 236 | 0.5733 | 0.7324 | 0.5733 | 0.7572 |
| No log | 2.6444 | 238 | 0.5849 | 0.7231 | 0.5849 | 0.7648 |
| No log | 2.6667 | 240 | 0.6023 | 0.7350 | 0.6023 | 0.7761 |
| No log | 2.6889 | 242 | 0.6240 | 0.7525 | 0.6240 | 0.7899 |
| No log | 2.7111 | 244 | 0.6307 | 0.7416 | 0.6307 | 0.7942 |
| No log | 2.7333 | 246 | 0.6609 | 0.7187 | 0.6609 | 0.8130 |
| No log | 2.7556 | 248 | 0.6753 | 0.7028 | 0.6753 | 0.8218 |
| No log | 2.7778 | 250 | 0.6236 | 0.7123 | 0.6236 | 0.7897 |
| No log | 2.8 | 252 | 0.6163 | 0.6924 | 0.6163 | 0.7850 |
| No log | 2.8222 | 254 | 0.6015 | 0.7103 | 0.6015 | 0.7756 |
| No log | 2.8444 | 256 | 0.5738 | 0.7229 | 0.5738 | 0.7575 |
| No log | 2.8667 | 258 | 0.5712 | 0.7229 | 0.5712 | 0.7557 |
| No log | 2.8889 | 260 | 0.6053 | 0.7185 | 0.6053 | 0.7780 |
| No log | 2.9111 | 262 | 0.6067 | 0.7128 | 0.6067 | 0.7789 |
| No log | 2.9333 | 264 | 0.5712 | 0.7221 | 0.5712 | 0.7558 |
| No log | 2.9556 | 266 | 0.5675 | 0.7684 | 0.5675 | 0.7533 |
| No log | 2.9778 | 268 | 0.5738 | 0.7666 | 0.5738 | 0.7575 |
| No log | 3.0 | 270 | 0.5690 | 0.7593 | 0.5690 | 0.7543 |
| No log | 3.0222 | 272 | 0.5820 | 0.7243 | 0.5820 | 0.7629 |
| No log | 3.0444 | 274 | 0.5980 | 0.6969 | 0.5980 | 0.7733 |
| No log | 3.0667 | 276 | 0.6063 | 0.7037 | 0.6063 | 0.7786 |
| No log | 3.0889 | 278 | 0.5827 | 0.7019 | 0.5827 | 0.7634 |
| No log | 3.1111 | 280 | 0.5814 | 0.7122 | 0.5814 | 0.7625 |
| No log | 3.1333 | 282 | 0.6010 | 0.7021 | 0.6010 | 0.7753 |
| No log | 3.1556 | 284 | 0.7170 | 0.6789 | 0.7170 | 0.8468 |
| No log | 3.1778 | 286 | 0.9679 | 0.5966 | 0.9679 | 0.9838 |
| No log | 3.2 | 288 | 1.0053 | 0.5805 | 1.0053 | 1.0026 |
| No log | 3.2222 | 290 | 0.8214 | 0.6364 | 0.8214 | 0.9063 |
| No log | 3.2444 | 292 | 0.6267 | 0.7174 | 0.6267 | 0.7916 |
| No log | 3.2667 | 294 | 0.5834 | 0.7281 | 0.5834 | 0.7638 |
| No log | 3.2889 | 296 | 0.5837 | 0.7376 | 0.5837 | 0.7640 |
| No log | 3.3111 | 298 | 0.5820 | 0.7477 | 0.5820 | 0.7629 |
| No log | 3.3333 | 300 | 0.5827 | 0.7320 | 0.5827 | 0.7633 |
| No log | 3.3556 | 302 | 0.5864 | 0.7407 | 0.5864 | 0.7657 |
| No log | 3.3778 | 304 | 0.5761 | 0.7236 | 0.5761 | 0.7590 |
| No log | 3.4 | 306 | 0.5799 | 0.7265 | 0.5799 | 0.7615 |
| No log | 3.4222 | 308 | 0.5911 | 0.7278 | 0.5911 | 0.7688 |
| No log | 3.4444 | 310 | 0.6039 | 0.7289 | 0.6039 | 0.7771 |
| No log | 3.4667 | 312 | 0.6154 | 0.7316 | 0.6154 | 0.7845 |
| No log | 3.4889 | 314 | 0.5999 | 0.7445 | 0.5999 | 0.7745 |
| No log | 3.5111 | 316 | 0.5938 | 0.7600 | 0.5938 | 0.7706 |
| No log | 3.5333 | 318 | 0.6002 | 0.7362 | 0.6002 | 0.7747 |
| No log | 3.5556 | 320 | 0.6235 | 0.7294 | 0.6235 | 0.7896 |
| No log | 3.5778 | 322 | 0.5961 | 0.7294 | 0.5961 | 0.7721 |
| No log | 3.6 | 324 | 0.5656 | 0.7427 | 0.5656 | 0.7521 |
| No log | 3.6222 | 326 | 0.5427 | 0.7387 | 0.5427 | 0.7367 |
| No log | 3.6444 | 328 | 0.5699 | 0.7344 | 0.5699 | 0.7549 |
| No log | 3.6667 | 330 | 0.6409 | 0.7112 | 0.6409 | 0.8005 |
| No log | 3.6889 | 332 | 0.6258 | 0.7065 | 0.6258 | 0.7911 |
| No log | 3.7111 | 334 | 0.5622 | 0.7337 | 0.5622 | 0.7498 |
| No log | 3.7333 | 336 | 0.5663 | 0.7273 | 0.5663 | 0.7525 |
| No log | 3.7556 | 338 | 0.6297 | 0.7288 | 0.6297 | 0.7935 |
| No log | 3.7778 | 340 | 0.6342 | 0.7231 | 0.6342 | 0.7964 |
| No log | 3.8 | 342 | 0.5936 | 0.7305 | 0.5936 | 0.7705 |
| No log | 3.8222 | 344 | 0.6029 | 0.6870 | 0.6029 | 0.7765 |
| No log | 3.8444 | 346 | 0.7150 | 0.6927 | 0.7150 | 0.8456 |
| No log | 3.8667 | 348 | 0.7786 | 0.6677 | 0.7786 | 0.8824 |
| No log | 3.8889 | 350 | 0.7337 | 0.6850 | 0.7337 | 0.8566 |
| No log | 3.9111 | 352 | 0.6628 | 0.7154 | 0.6628 | 0.8141 |
| No log | 3.9333 | 354 | 0.6223 | 0.7250 | 0.6223 | 0.7889 |
| No log | 3.9556 | 356 | 0.5663 | 0.7302 | 0.5663 | 0.7525 |
| No log | 3.9778 | 358 | 0.5544 | 0.7349 | 0.5544 | 0.7446 |
| No log | 4.0 | 360 | 0.5490 | 0.7349 | 0.5490 | 0.7409 |
| No log | 4.0222 | 362 | 0.5463 | 0.7349 | 0.5463 | 0.7391 |
| No log | 4.0444 | 364 | 0.5485 | 0.7542 | 0.5485 | 0.7406 |
| No log | 4.0667 | 366 | 0.5504 | 0.7467 | 0.5504 | 0.7419 |
| No log | 4.0889 | 368 | 0.5549 | 0.7430 | 0.5549 | 0.7449 |
| No log | 4.1111 | 370 | 0.5478 | 0.7514 | 0.5478 | 0.7401 |
| No log | 4.1333 | 372 | 0.5368 | 0.7323 | 0.5368 | 0.7327 |
| No log | 4.1556 | 374 | 0.5543 | 0.7018 | 0.5543 | 0.7445 |
| No log | 4.1778 | 376 | 0.5544 | 0.6949 | 0.5544 | 0.7446 |
| No log | 4.2 | 378 | 0.5388 | 0.7153 | 0.5388 | 0.7340 |
| No log | 4.2222 | 380 | 0.5325 | 0.7571 | 0.5325 | 0.7297 |
| No log | 4.2444 | 382 | 0.5642 | 0.7447 | 0.5642 | 0.7511 |
| No log | 4.2667 | 384 | 0.6344 | 0.7213 | 0.6344 | 0.7965 |
| No log | 4.2889 | 386 | 0.6813 | 0.7196 | 0.6813 | 0.8254 |
| No log | 4.3111 | 388 | 0.6407 | 0.7243 | 0.6407 | 0.8004 |
| No log | 4.3333 | 390 | 0.5642 | 0.7432 | 0.5642 | 0.7512 |
| No log | 4.3556 | 392 | 0.5624 | 0.7497 | 0.5624 | 0.7499 |
| No log | 4.3778 | 394 | 0.6448 | 0.6939 | 0.6448 | 0.8030 |
| No log | 4.4 | 396 | 0.6534 | 0.6939 | 0.6534 | 0.8083 |
| No log | 4.4222 | 398 | 0.5999 | 0.7245 | 0.5999 | 0.7745 |
| No log | 4.4444 | 400 | 0.5483 | 0.7580 | 0.5483 | 0.7405 |
| No log | 4.4667 | 402 | 0.5533 | 0.7514 | 0.5533 | 0.7438 |
| No log | 4.4889 | 404 | 0.5999 | 0.7230 | 0.5999 | 0.7745 |
| No log | 4.5111 | 406 | 0.6848 | 0.6871 | 0.6848 | 0.8276 |
| No log | 4.5333 | 408 | 0.6854 | 0.6782 | 0.6854 | 0.8279 |
| No log | 4.5556 | 410 | 0.6373 | 0.6936 | 0.6373 | 0.7983 |
| No log | 4.5778 | 412 | 0.5839 | 0.7420 | 0.5839 | 0.7642 |
| No log | 4.6 | 414 | 0.5670 | 0.7505 | 0.5670 | 0.7530 |
| No log | 4.6222 | 416 | 0.5640 | 0.7572 | 0.5640 | 0.7510 |
| No log | 4.6444 | 418 | 0.5678 | 0.7425 | 0.5678 | 0.7535 |
| No log | 4.6667 | 420 | 0.5950 | 0.7148 | 0.5950 | 0.7714 |
| No log | 4.6889 | 422 | 0.6010 | 0.7148 | 0.6010 | 0.7752 |
| No log | 4.7111 | 424 | 0.5889 | 0.7188 | 0.5889 | 0.7674 |
| No log | 4.7333 | 426 | 0.5654 | 0.7371 | 0.5654 | 0.7519 |
| No log | 4.7556 | 428 | 0.5608 | 0.7530 | 0.5608 | 0.7488 |
| No log | 4.7778 | 430 | 0.5678 | 0.7488 | 0.5678 | 0.7535 |
| No log | 4.8 | 432 | 0.5701 | 0.7488 | 0.5701 | 0.7550 |
| No log | 4.8222 | 434 | 0.5769 | 0.7676 | 0.5769 | 0.7596 |
| No log | 4.8444 | 436 | 0.5971 | 0.7629 | 0.5971 | 0.7727 |
| No log | 4.8667 | 438 | 0.6207 | 0.7153 | 0.6207 | 0.7878 |
| No log | 4.8889 | 440 | 0.6525 | 0.7185 | 0.6525 | 0.8078 |
| No log | 4.9111 | 442 | 0.7415 | 0.7064 | 0.7415 | 0.8611 |
| No log | 4.9333 | 444 | 0.8519 | 0.6748 | 0.8519 | 0.9230 |
| No log | 4.9556 | 446 | 0.8456 | 0.6748 | 0.8456 | 0.9196 |
| No log | 4.9778 | 448 | 0.7642 | 0.7164 | 0.7642 | 0.8742 |
| No log | 5.0 | 450 | 0.6502 | 0.7175 | 0.6502 | 0.8063 |
| No log | 5.0222 | 452 | 0.5919 | 0.7195 | 0.5919 | 0.7693 |
| No log | 5.0444 | 454 | 0.5709 | 0.7768 | 0.5709 | 0.7556 |
| No log | 5.0667 | 456 | 0.5700 | 0.7678 | 0.5700 | 0.7550 |
| No log | 5.0889 | 458 | 0.5610 | 0.7553 | 0.5610 | 0.7490 |
| No log | 5.1111 | 460 | 0.5566 | 0.7436 | 0.5566 | 0.7460 |
| No log | 5.1333 | 462 | 0.5870 | 0.7189 | 0.5870 | 0.7661 |
| No log | 5.1556 | 464 | 0.6369 | 0.7059 | 0.6369 | 0.7981 |
| No log | 5.1778 | 466 | 0.6798 | 0.7149 | 0.6798 | 0.8245 |
| No log | 5.2 | 468 | 0.6556 | 0.7109 | 0.6556 | 0.8097 |
| No log | 5.2222 | 470 | 0.5986 | 0.7302 | 0.5986 | 0.7737 |
| No log | 5.2444 | 472 | 0.5650 | 0.7416 | 0.5650 | 0.7516 |
| No log | 5.2667 | 474 | 0.5603 | 0.7322 | 0.5603 | 0.7485 |
| No log | 5.2889 | 476 | 0.5582 | 0.7435 | 0.5582 | 0.7471 |
| No log | 5.3111 | 478 | 0.5556 | 0.7339 | 0.5556 | 0.7454 |
| No log | 5.3333 | 480 | 0.5539 | 0.7484 | 0.5539 | 0.7442 |
| No log | 5.3556 | 482 | 0.5627 | 0.7514 | 0.5627 | 0.7502 |
| No log | 5.3778 | 484 | 0.5862 | 0.7152 | 0.5862 | 0.7656 |
| No log | 5.4 | 486 | 0.5886 | 0.7138 | 0.5886 | 0.7672 |
| No log | 5.4222 | 488 | 0.5830 | 0.7173 | 0.5830 | 0.7635 |
| No log | 5.4444 | 490 | 0.5757 | 0.7173 | 0.5757 | 0.7588 |
| No log | 5.4667 | 492 | 0.5676 | 0.7289 | 0.5676 | 0.7534 |
| No log | 5.4889 | 494 | 0.5732 | 0.7245 | 0.5732 | 0.7571 |
| No log | 5.5111 | 496 | 0.5670 | 0.7326 | 0.5670 | 0.7530 |
| No log | 5.5333 | 498 | 0.5584 | 0.7661 | 0.5584 | 0.7473 |
| 0.3921 | 5.5556 | 500 | 0.5611 | 0.7661 | 0.5611 | 0.7491 |
| 0.3921 | 5.5778 | 502 | 0.5667 | 0.7703 | 0.5667 | 0.7528 |
| 0.3921 | 5.6 | 504 | 0.5883 | 0.7474 | 0.5883 | 0.7670 |
| 0.3921 | 5.6222 | 506 | 0.6047 | 0.7430 | 0.6047 | 0.7777 |
| 0.3921 | 5.6444 | 508 | 0.6056 | 0.7439 | 0.6056 | 0.7782 |
| 0.3921 | 5.6667 | 510 | 0.6175 | 0.7439 | 0.6175 | 0.7858 |
| 0.3921 | 5.6889 | 512 | 0.6357 | 0.7133 | 0.6357 | 0.7973 |
| 0.3921 | 5.7111 | 514 | 0.6509 | 0.7016 | 0.6509 | 0.8068 |
| 0.3921 | 5.7333 | 516 | 0.6314 | 0.6994 | 0.6314 | 0.7946 |
| 0.3921 | 5.7556 | 518 | 0.6304 | 0.7061 | 0.6304 | 0.7940 |
| 0.3921 | 5.7778 | 520 | 0.6137 | 0.7326 | 0.6137 | 0.7834 |
| 0.3921 | 5.8 | 522 | 0.5902 | 0.7585 | 0.5902 | 0.7682 |
| 0.3921 | 5.8222 | 524 | 0.5889 | 0.7630 | 0.5889 | 0.7674 |
| 0.3921 | 5.8444 | 526 | 0.5905 | 0.7439 | 0.5905 | 0.7684 |
| 0.3921 | 5.8667 | 528 | 0.5886 | 0.7500 | 0.5886 | 0.7672 |
| 0.3921 | 5.8889 | 530 | 0.5777 | 0.7686 | 0.5777 | 0.7601 |
| 0.3921 | 5.9111 | 532 | 0.5780 | 0.7413 | 0.5780 | 0.7602 |
| 0.3921 | 5.9333 | 534 | 0.6154 | 0.7085 | 0.6154 | 0.7845 |
| 0.3921 | 5.9556 | 536 | 0.6301 | 0.7053 | 0.6301 | 0.7938 |
| 0.3921 | 5.9778 | 538 | 0.6091 | 0.6994 | 0.6091 | 0.7805 |
| 0.3921 | 6.0 | 540 | 0.5755 | 0.7233 | 0.5755 | 0.7586 |
| 0.3921 | 6.0222 | 542 | 0.5580 | 0.7316 | 0.5580 | 0.7470 |
| 0.3921 | 6.0444 | 544 | 0.5636 | 0.7403 | 0.5636 | 0.7507 |
| 0.3921 | 6.0667 | 546 | 0.5780 | 0.7385 | 0.5780 | 0.7602 |
| 0.3921 | 6.0889 | 548 | 0.5768 | 0.7355 | 0.5768 | 0.7594 |
| 0.3921 | 6.1111 | 550 | 0.5755 | 0.7150 | 0.5755 | 0.7586 |
| 0.3921 | 6.1333 | 552 | 0.5650 | 0.7203 | 0.5650 | 0.7517 |
| 0.3921 | 6.1556 | 554 | 0.5647 | 0.7200 | 0.5647 | 0.7515 |
| 0.3921 | 6.1778 | 556 | 0.5758 | 0.7200 | 0.5758 | 0.7588 |
| 0.3921 | 6.2 | 558 | 0.5765 | 0.7283 | 0.5765 | 0.7593 |
| 0.3921 | 6.2222 | 560 | 0.5774 | 0.7477 | 0.5774 | 0.7599 |
| 0.3921 | 6.2444 | 562 | 0.5880 | 0.7515 | 0.5880 | 0.7668 |
| 0.3921 | 6.2667 | 564 | 0.5973 | 0.7450 | 0.5973 | 0.7728 |
| 0.3921 | 6.2889 | 566 | 0.5959 | 0.7484 | 0.5959 | 0.7719 |
| 0.3921 | 6.3111 | 568 | 0.5949 | 0.7423 | 0.5949 | 0.7713 |
| 0.3921 | 6.3333 | 570 | 0.6230 | 0.7044 | 0.6230 | 0.7893 |
| 0.3921 | 6.3556 | 572 | 0.6617 | 0.7175 | 0.6617 | 0.8134 |
| 0.3921 | 6.3778 | 574 | 0.6987 | 0.7070 | 0.6987 | 0.8359 |
| 0.3921 | 6.4 | 576 | 0.6858 | 0.7083 | 0.6858 | 0.8281 |
| 0.3921 | 6.4222 | 578 | 0.6506 | 0.7089 | 0.6506 | 0.8066 |
| 0.3921 | 6.4444 | 580 | 0.6084 | 0.7007 | 0.6084 | 0.7800 |
| 0.3921 | 6.4667 | 582 | 0.5844 | 0.7179 | 0.5844 | 0.7645 |
| 0.3921 | 6.4889 | 584 | 0.5708 | 0.7316 | 0.5708 | 0.7555 |
| 0.3921 | 6.5111 | 586 | 0.5709 | 0.7451 | 0.5709 | 0.7556 |
| 0.3921 | 6.5333 | 588 | 0.5747 | 0.7300 | 0.5747 | 0.7581 |
| 0.3921 | 6.5556 | 590 | 0.5842 | 0.7392 | 0.5842 | 0.7643 |
| 0.3921 | 6.5778 | 592 | 0.5891 | 0.7376 | 0.5891 | 0.7675 |
| 0.3921 | 6.6 | 594 | 0.5988 | 0.7252 | 0.5988 | 0.7738 |
| 0.3921 | 6.6222 | 596 | 0.6000 | 0.7252 | 0.6000 | 0.7746 |
| 0.3921 | 6.6444 | 598 | 0.5968 | 0.7354 | 0.5968 | 0.7725 |
| 0.3921 | 6.6667 | 600 | 0.6015 | 0.7267 | 0.6015 | 0.7755 |
| 0.3921 | 6.6889 | 602 | 0.5964 | 0.7305 | 0.5964 | 0.7723 |
| 0.3921 | 6.7111 | 604 | 0.5885 | 0.7409 | 0.5885 | 0.7671 |
| 0.3921 | 6.7333 | 606 | 0.5756 | 0.7416 | 0.5756 | 0.7587 |
| 0.3921 | 6.7556 | 608 | 0.5686 | 0.7019 | 0.5686 | 0.7540 |
| 0.3921 | 6.7778 | 610 | 0.5763 | 0.7091 | 0.5763 | 0.7591 |
| 0.3921 | 6.8 | 612 | 0.5743 | 0.7069 | 0.5743 | 0.7578 |
| 0.3921 | 6.8222 | 614 | 0.5637 | 0.7156 | 0.5637 | 0.7508 |
| 0.3921 | 6.8444 | 616 | 0.5596 | 0.7198 | 0.5596 | 0.7481 |
| 0.3921 | 6.8667 | 618 | 0.5593 | 0.7345 | 0.5593 | 0.7479 |
| 0.3921 | 6.8889 | 620 | 0.5604 | 0.7316 | 0.5604 | 0.7486 |
| 0.3921 | 6.9111 | 622 | 0.5659 | 0.7231 | 0.5659 | 0.7522 |
| 0.3921 | 6.9333 | 624 | 0.5726 | 0.7194 | 0.5726 | 0.7567 |
| 0.3921 | 6.9556 | 626 | 0.5801 | 0.7283 | 0.5801 | 0.7617 |
| 0.3921 | 6.9778 | 628 | 0.5796 | 0.7311 | 0.5796 | 0.7613 |
| 0.3921 | 7.0 | 630 | 0.5831 | 0.7353 | 0.5831 | 0.7636 |
| 0.3921 | 7.0222 | 632 | 0.5854 | 0.7385 | 0.5854 | 0.7651 |
| 0.3921 | 7.0444 | 634 | 0.5914 | 0.7385 | 0.5914 | 0.7690 |
| 0.3921 | 7.0667 | 636 | 0.5968 | 0.7379 | 0.5968 | 0.7725 |
| 0.3921 | 7.0889 | 638 | 0.6109 | 0.7199 | 0.6109 | 0.7816 |
| 0.3921 | 7.1111 | 640 | 0.6251 | 0.7171 | 0.6251 | 0.7907 |
| 0.3921 | 7.1333 | 642 | 0.6191 | 0.7213 | 0.6191 | 0.7868 |
| 0.3921 | 7.1556 | 644 | 0.6069 | 0.7213 | 0.6069 | 0.7791 |
| 0.3921 | 7.1778 | 646 | 0.5852 | 0.7295 | 0.5852 | 0.7650 |
| 0.3921 | 7.2 | 648 | 0.5704 | 0.7204 | 0.5704 | 0.7552 |
| 0.3921 | 7.2222 | 650 | 0.5644 | 0.7269 | 0.5644 | 0.7513 |
| 0.3921 | 7.2444 | 652 | 0.5620 | 0.7312 | 0.5620 | 0.7497 |
| 0.3921 | 7.2667 | 654 | 0.5594 | 0.7126 | 0.5594 | 0.7479 |
| 0.3921 | 7.2889 | 656 | 0.5588 | 0.7290 | 0.5588 | 0.7475 |
| 0.3921 | 7.3111 | 658 | 0.5574 | 0.7250 | 0.5574 | 0.7466 |
| 0.3921 | 7.3333 | 660 | 0.5565 | 0.7250 | 0.5565 | 0.7460 |
| 0.3921 | 7.3556 | 662 | 0.5547 | 0.7290 | 0.5547 | 0.7447 |
| 0.3921 | 7.3778 | 664 | 0.5567 | 0.7433 | 0.5567 | 0.7461 |
| 0.3921 | 7.4 | 666 | 0.5603 | 0.7426 | 0.5603 | 0.7485 |
| 0.3921 | 7.4222 | 668 | 0.5665 | 0.7516 | 0.5665 | 0.7527 |
| 0.3921 | 7.4444 | 670 | 0.5713 | 0.7583 | 0.5713 | 0.7558 |
| 0.3921 | 7.4667 | 672 | 0.5773 | 0.7589 | 0.5773 | 0.7598 |
| 0.3921 | 7.4889 | 674 | 0.5836 | 0.7630 | 0.5836 | 0.7639 |
| 0.3921 | 7.5111 | 676 | 0.5888 | 0.7589 | 0.5888 | 0.7673 |
| 0.3921 | 7.5333 | 678 | 0.5921 | 0.7425 | 0.5921 | 0.7695 |
| 0.3921 | 7.5556 | 680 | 0.5915 | 0.7416 | 0.5915 | 0.7691 |
| 0.3921 | 7.5778 | 682 | 0.5862 | 0.7516 | 0.5862 | 0.7657 |
| 0.3921 | 7.6 | 684 | 0.5801 | 0.7516 | 0.5801 | 0.7617 |
| 0.3921 | 7.6222 | 686 | 0.5750 | 0.7618 | 0.5750 | 0.7583 |
| 0.3921 | 7.6444 | 688 | 0.5676 | 0.7585 | 0.5676 | 0.7534 |
| 0.3921 | 7.6667 | 690 | 0.5639 | 0.7528 | 0.5639 | 0.7509 |
| 0.3921 | 7.6889 | 692 | 0.5691 | 0.7308 | 0.5691 | 0.7544 |
| 0.3921 | 7.7111 | 694 | 0.5698 | 0.7247 | 0.5698 | 0.7548 |
| 0.3921 | 7.7333 | 696 | 0.5621 | 0.7402 | 0.5621 | 0.7497 |
| 0.3921 | 7.7556 | 698 | 0.5532 | 0.7411 | 0.5532 | 0.7438 |
| 0.3921 | 7.7778 | 700 | 0.5548 | 0.7479 | 0.5548 | 0.7448 |
| 0.3921 | 7.8 | 702 | 0.5635 | 0.7380 | 0.5635 | 0.7507 |
| 0.3921 | 7.8222 | 704 | 0.5640 | 0.7277 | 0.5640 | 0.7510 |
| 0.3921 | 7.8444 | 706 | 0.5613 | 0.7293 | 0.5613 | 0.7492 |
| 0.3921 | 7.8667 | 708 | 0.5561 | 0.7441 | 0.5561 | 0.7457 |
| 0.3921 | 7.8889 | 710 | 0.5514 | 0.7392 | 0.5514 | 0.7426 |
| 0.3921 | 7.9111 | 712 | 0.5470 | 0.7551 | 0.5470 | 0.7396 |
| 0.3921 | 7.9333 | 714 | 0.5488 | 0.7290 | 0.5488 | 0.7408 |
| 0.3921 | 7.9556 | 716 | 0.5521 | 0.7228 | 0.5521 | 0.7430 |
| 0.3921 | 7.9778 | 718 | 0.5557 | 0.7228 | 0.5557 | 0.7455 |
| 0.3921 | 8.0 | 720 | 0.5598 | 0.7493 | 0.5598 | 0.7482 |
| 0.3921 | 8.0222 | 722 | 0.5639 | 0.7533 | 0.5639 | 0.7510 |
| 0.3921 | 8.0444 | 724 | 0.5683 | 0.7585 | 0.5683 | 0.7539 |
| 0.3921 | 8.0667 | 726 | 0.5754 | 0.7311 | 0.5754 | 0.7585 |
| 0.3921 | 8.0889 | 728 | 0.5845 | 0.7268 | 0.5845 | 0.7645 |
| 0.3921 | 8.1111 | 730 | 0.5915 | 0.7268 | 0.5915 | 0.7691 |
| 0.3921 | 8.1333 | 732 | 0.5937 | 0.7347 | 0.5937 | 0.7705 |
| 0.3921 | 8.1556 | 734 | 0.5956 | 0.7268 | 0.5956 | 0.7717 |
| 0.3921 | 8.1778 | 736 | 0.5975 | 0.7253 | 0.5975 | 0.7730 |
| 0.3921 | 8.2 | 738 | 0.5912 | 0.7337 | 0.5912 | 0.7689 |
| 0.3921 | 8.2222 | 740 | 0.5880 | 0.7337 | 0.5880 | 0.7668 |
| 0.3921 | 8.2444 | 742 | 0.5848 | 0.7337 | 0.5848 | 0.7647 |
| 0.3921 | 8.2667 | 744 | 0.5852 | 0.7295 | 0.5852 | 0.7650 |
| 0.3921 | 8.2889 | 746 | 0.5897 | 0.7268 | 0.5897 | 0.7679 |
| 0.3921 | 8.3111 | 748 | 0.5996 | 0.7090 | 0.5996 | 0.7743 |
| 0.3921 | 8.3333 | 750 | 0.6094 | 0.7107 | 0.6094 | 0.7806 |
| 0.3921 | 8.3556 | 752 | 0.6204 | 0.7139 | 0.6204 | 0.7876 |
| 0.3921 | 8.3778 | 754 | 0.6196 | 0.7033 | 0.6196 | 0.7871 |
| 0.3921 | 8.4 | 756 | 0.6123 | 0.7075 | 0.6123 | 0.7825 |
| 0.3921 | 8.4222 | 758 | 0.6006 | 0.7048 | 0.6006 | 0.7750 |
| 0.3921 | 8.4444 | 760 | 0.5924 | 0.7090 | 0.5924 | 0.7697 |
| 0.3921 | 8.4667 | 762 | 0.5915 | 0.7090 | 0.5915 | 0.7691 |
| 0.3921 | 8.4889 | 764 | 0.5967 | 0.7090 | 0.5967 | 0.7725 |
| 0.3921 | 8.5111 | 766 | 0.6116 | 0.7107 | 0.6116 | 0.7820 |
| 0.3921 | 8.5333 | 768 | 0.6348 | 0.7126 | 0.6348 | 0.7968 |
| 0.3921 | 8.5556 | 770 | 0.6440 | 0.7107 | 0.6440 | 0.8025 |
| 0.3921 | 8.5778 | 772 | 0.6381 | 0.7107 | 0.6381 | 0.7988 |
| 0.3921 | 8.6 | 774 | 0.6229 | 0.7154 | 0.6229 | 0.7893 |
| 0.3921 | 8.6222 | 776 | 0.6135 | 0.7177 | 0.6135 | 0.7832 |
| 0.3921 | 8.6444 | 778 | 0.6111 | 0.7177 | 0.6111 | 0.7817 |
| 0.3921 | 8.6667 | 780 | 0.6030 | 0.7119 | 0.6030 | 0.7765 |
| 0.3921 | 8.6889 | 782 | 0.5961 | 0.7295 | 0.5961 | 0.7721 |
| 0.3921 | 8.7111 | 784 | 0.5917 | 0.7311 | 0.5917 | 0.7692 |
| 0.3921 | 8.7333 | 786 | 0.5874 | 0.7311 | 0.5874 | 0.7664 |
| 0.3921 | 8.7556 | 788 | 0.5845 | 0.7268 | 0.5845 | 0.7645 |
| 0.3921 | 8.7778 | 790 | 0.5799 | 0.7268 | 0.5799 | 0.7615 |
| 0.3921 | 8.8 | 792 | 0.5781 | 0.7268 | 0.5781 | 0.7603 |
| 0.3921 | 8.8222 | 794 | 0.5757 | 0.7268 | 0.5757 | 0.7587 |
| 0.3921 | 8.8444 | 796 | 0.5768 | 0.7268 | 0.5768 | 0.7595 |
| 0.3921 | 8.8667 | 798 | 0.5820 | 0.7268 | 0.5820 | 0.7629 |
| 0.3921 | 8.8889 | 800 | 0.5893 | 0.7268 | 0.5893 | 0.7676 |
| 0.3921 | 8.9111 | 802 | 0.5951 | 0.7268 | 0.5951 | 0.7714 |
| 0.3921 | 8.9333 | 804 | 0.6026 | 0.7226 | 0.6026 | 0.7763 |
| 0.3921 | 8.9556 | 806 | 0.6079 | 0.7205 | 0.6079 | 0.7797 |
| 0.3921 | 8.9778 | 808 | 0.6125 | 0.7087 | 0.6125 | 0.7826 |
| 0.3921 | 9.0 | 810 | 0.6178 | 0.7087 | 0.6178 | 0.7860 |
| 0.3921 | 9.0222 | 812 | 0.6207 | 0.7045 | 0.6207 | 0.7879 |
| 0.3921 | 9.0444 | 814 | 0.6210 | 0.7045 | 0.6210 | 0.7880 |
| 0.3921 | 9.0667 | 816 | 0.6210 | 0.7045 | 0.6210 | 0.7880 |
| 0.3921 | 9.0889 | 818 | 0.6146 | 0.7045 | 0.6146 | 0.7840 |
| 0.3921 | 9.1111 | 820 | 0.6088 | 0.7226 | 0.6088 | 0.7802 |
| 0.3921 | 9.1333 | 822 | 0.6022 | 0.7226 | 0.6022 | 0.7760 |
| 0.3921 | 9.1556 | 824 | 0.5956 | 0.7226 | 0.5956 | 0.7718 |
| 0.3921 | 9.1778 | 826 | 0.5938 | 0.7226 | 0.5938 | 0.7706 |
| 0.3921 | 9.2 | 828 | 0.5924 | 0.7226 | 0.5924 | 0.7696 |
| 0.3921 | 9.2222 | 830 | 0.5877 | 0.7268 | 0.5877 | 0.7666 |
| 0.3921 | 9.2444 | 832 | 0.5800 | 0.7268 | 0.5800 | 0.7616 |
| 0.3921 | 9.2667 | 834 | 0.5745 | 0.7311 | 0.5745 | 0.7580 |
| 0.3921 | 9.2889 | 836 | 0.5713 | 0.7311 | 0.5713 | 0.7559 |
| 0.3921 | 9.3111 | 838 | 0.5700 | 0.7311 | 0.5700 | 0.7550 |
| 0.3921 | 9.3333 | 840 | 0.5699 | 0.7311 | 0.5699 | 0.7549 |
| 0.3921 | 9.3556 | 842 | 0.5692 | 0.7311 | 0.5692 | 0.7544 |
| 0.3921 | 9.3778 | 844 | 0.5681 | 0.7311 | 0.5681 | 0.7537 |
| 0.3921 | 9.4 | 846 | 0.5684 | 0.7311 | 0.5684 | 0.7539 |
| 0.3921 | 9.4222 | 848 | 0.5682 | 0.7369 | 0.5682 | 0.7538 |
| 0.3921 | 9.4444 | 850 | 0.5684 | 0.7369 | 0.5684 | 0.7539 |
| 0.3921 | 9.4667 | 852 | 0.5699 | 0.7369 | 0.5699 | 0.7549 |
| 0.3921 | 9.4889 | 854 | 0.5715 | 0.7268 | 0.5715 | 0.7560 |
| 0.3921 | 9.5111 | 856 | 0.5746 | 0.7268 | 0.5746 | 0.7581 |
| 0.3921 | 9.5333 | 858 | 0.5781 | 0.7268 | 0.5781 | 0.7603 |
| 0.3921 | 9.5556 | 860 | 0.5803 | 0.7268 | 0.5803 | 0.7618 |
| 0.3921 | 9.5778 | 862 | 0.5837 | 0.7268 | 0.5837 | 0.7640 |
| 0.3921 | 9.6 | 864 | 0.5868 | 0.7226 | 0.5868 | 0.7660 |
| 0.3921 | 9.6222 | 866 | 0.5898 | 0.7183 | 0.5898 | 0.7680 |
| 0.3921 | 9.6444 | 868 | 0.5925 | 0.7183 | 0.5925 | 0.7697 |
| 0.3921 | 9.6667 | 870 | 0.5959 | 0.7183 | 0.5959 | 0.7720 |
| 0.3921 | 9.6889 | 872 | 0.5975 | 0.7183 | 0.5975 | 0.7730 |
| 0.3921 | 9.7111 | 874 | 0.5974 | 0.7183 | 0.5974 | 0.7729 |
| 0.3921 | 9.7333 | 876 | 0.5965 | 0.7183 | 0.5965 | 0.7724 |
| 0.3921 | 9.7556 | 878 | 0.5964 | 0.7183 | 0.5964 | 0.7723 |
| 0.3921 | 9.7778 | 880 | 0.5962 | 0.7183 | 0.5962 | 0.7721 |
| 0.3921 | 9.8 | 882 | 0.5962 | 0.7183 | 0.5962 | 0.7722 |
| 0.3921 | 9.8222 | 884 | 0.5974 | 0.7183 | 0.5974 | 0.7729 |
| 0.3921 | 9.8444 | 886 | 0.5985 | 0.7183 | 0.5985 | 0.7736 |
| 0.3921 | 9.8667 | 888 | 0.5998 | 0.7005 | 0.5998 | 0.7745 |
| 0.3921 | 9.8889 | 890 | 0.6004 | 0.7005 | 0.6004 | 0.7748 |
| 0.3921 | 9.9111 | 892 | 0.6003 | 0.7005 | 0.6003 | 0.7748 |
| 0.3921 | 9.9333 | 894 | 0.5998 | 0.7048 | 0.5998 | 0.7745 |
| 0.3921 | 9.9556 | 896 | 0.5995 | 0.7048 | 0.5995 | 0.7743 |
| 0.3921 | 9.9778 | 898 | 0.5994 | 0.7048 | 0.5994 | 0.7742 |
| 0.3921 | 10.0 | 900 | 0.5994 | 0.7048 | 0.5994 | 0.7742 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
NobodyExistsOnTheInternet/mistral-7b-airoboros-chatml | NobodyExistsOnTheInternet | "2023-12-01T07:25:28Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2023-11-29T14:06:38Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
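
The listed settings map one-to-one onto a `transformers` `BitsAndBytesConfig`; a minimal sketch of loading the base model with this quantization (assuming reasonably recent `transformers` and `bitsandbytes` versions):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```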
### Framework versions
- PEFT 0.6.0
|
DevD60/sql_generator_f5 | DevD60 | "2025-03-03T18:44:56Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-24T21:37:00Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: sql_generator_f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sql_generator_f5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [gretelai/synthetic_text_to_sql](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0367
- eval_runtime: 48.8318
- eval_samples_per_second: 119.82
- eval_steps_per_second: 29.96
- epoch: 3.0
- step: 75000
## Model description
Given a natural-language question and the relevant SQL table definitions as context, the model generates a SQL query for the database.
## How to use
Load the model using Hugging Face Transformers:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "DevD60/sql_generator_f5"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)
question = "How many employees work in each department?"
context = """
CREATE TABLE employees (id INT, name TEXT, department_id INT);
INSERT INTO employees (id, name, department_id) VALUES
(1, 'Alice', 1),
(2, 'Bob', 1),
(3, 'Charlie', 2),
(4, 'David', 2),
(5, 'Eve', 3);
CREATE TABLE departments (department_id INT, department_name TEXT);
INSERT INTO departments (department_id, department_name) VALUES
(1, 'HR'),
(2, 'Engineering'),
(3, 'Marketing');
"""
input_text = f"Translate to SQL: {question} Context: {context}"
inputs = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.6, top_k=50, top_p=0.95)
generated_sql = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(generated_sql)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
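
As a rough guide, the values above correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below (`output_dir` and `predict_with_generate` are assumptions; the card does not state them):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="sql_generator_f5",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,      # assumption: generation-based evaluation
)
```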
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.20.1
|
Anachrono/_mistral_7b_v0.2_Basic_CDS_Classification | Anachrono | "2024-03-20T13:18:30Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-20T13:14:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
filipesantoscv11/13d4e63d-c8e4-4c0f-acd7-483d851056a5 | filipesantoscv11 | "2025-01-23T11:56:35Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | "2025-01-23T11:25:39Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 13d4e63d-c8e4-4c0f-acd7-483d851056a5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f70ddae1849231d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f70ddae1849231d5_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: filipesantoscv11/13d4e63d-c8e4-4c0f-acd7-483d851056a5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/f70ddae1849231d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 626c7685-3b25-4cd2-a8a4-a8e58ec0f209
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 626c7685-3b25-4cd2-a8a4-a8e58ec0f209
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 13d4e63d-c8e4-4c0f-acd7-483d851056a5
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4775
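
Since this is a LoRA adapter, inference requires loading the base model and attaching the adapter on top; a minimal sketch (note that a 14B model in half precision needs roughly 30 GB of GPU memory, so quantized loading may be preferable):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-14B-Chat", device_map="auto")
model = PeftModel.from_pretrained(base, "filipesantoscv11/13d4e63d-c8e4-4c0f-acd7-483d851056a5")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")
```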
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.7046 |
| 0.4865 | 0.0017 | 5 | 0.6249 |
| 0.5005 | 0.0034 | 10 | 0.5373 |
| 0.4763 | 0.0051 | 15 | 0.4981 |
| 0.4902 | 0.0068 | 20 | 0.4835 |
| 0.4965 | 0.0085 | 25 | 0.4787 |
| 0.5207 | 0.0101 | 30 | 0.4775 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Davimartins/Farias123 | Davimartins | "2022-11-27T20:50:51Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2022-11-27T20:50:50Z" | ---
license: bigscience-openrail-m
---
|
yz122/ddpm-celebahq-finetuned-butterflies-2epochs | yz122 | "2025-02-16T18:42:52Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2025-02-16T18:42:30Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('yz122/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
dutti/Ascal-rt.11 | dutti | "2025-04-05T18:41:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Delta-Vector/Rei-V2-12B",
"base_model:merge:Delta-Vector/Rei-V2-12B",
"base_model:DreadPoor/Irix-12B-Model_Stock",
"base_model:merge:DreadPoor/Irix-12B-Model_Stock",
"base_model:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:merge:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T18:35:40Z" |  |
QuyUET/bert-finetuned-mrpc | QuyUET | "2025-03-13T07:54:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-13T07:53:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NeelK94/ppo-LunarLander-v2 | NeelK94 | "2022-12-12T22:43:17Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-12T22:42:54Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.03 +/- 36.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<model-name>.zip` convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub("NeelK94/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
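
Once loaded, the policy can be scored the same way the reported mean reward was presumably obtained (assuming a Box2D-enabled `gym` install):

```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # needs `pip install gym[box2d]`
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```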
|
fifxus/cbe2ffdc-b36c-4f81-aef8-3be2a47a8077 | fifxus | "2025-01-31T02:36:24Z" | 14 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-31T02:08:40Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cbe2ffdc-b36c-4f81-aef8-3be2a47a8077
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c5406eef3f6c391a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5406eef3f6c391a_train_data.json
type:
field_instruction: dialogue
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/cbe2ffdc-b36c-4f81-aef8-3be2a47a8077
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c5406eef3f6c391a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 1e4af6e8-4c1d-4c86-ab18-ee8b80d9a919
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: 1e4af6e8-4c1d-4c86-ab18-ee8b80d9a919
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# cbe2ffdc-b36c-4f81-aef8-3be2a47a8077
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0051
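
As a minimal loading sketch (not from the original card; assumes the LoRA adapter weights live in this repo and that you have access to the base model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-it")
model = PeftModel.from_pretrained(base, "fifxus/cbe2ffdc-b36c-4f81-aef8-3be2a47a8077")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-it")
```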
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7333 | 0.6020 | 200 | 1.0051 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tirik00/ppo-LunarLander-v2 | tirik00 | "2024-01-08T22:23:58Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-08T22:18:18Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.18 +/- 18.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 convention and is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it (filename assumed)
model = PPO.load(load_from_hub("tirik00/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
```
|
Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF | Triangle104 | "2024-12-01T09:36:05Z" | 6 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:PrimeIntellect/fineweb-edu",
"dataset:PrimeIntellect/fineweb",
"dataset:PrimeIntellect/StackV1-popular",
"dataset:mlfoundations/dclm-baseline-1.0-parquet",
"dataset:open-web-math/open-web-math",
"dataset:arcee-ai/EvolKit-75K",
"dataset:arcee-ai/Llama-405B-Logits",
"dataset:arcee-ai/The-Tomb",
"dataset:mlabonne/open-perfectblend-fixed",
"dataset:microsoft/orca-agentinstruct-1M-v1-cleaned",
"dataset:Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs",
"dataset:Team-ACE/ToolACE",
"dataset:Synthia-coder",
"dataset:ServiceNow-AI/M2Lingual",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-personas-code",
"dataset:allenai/tulu-3-sft-personas-math",
"dataset:allenai/tulu-3-sft-personas-math-grade",
"dataset:allenai/tulu-3-sft-personas-algebra",
"base_model:PrimeIntellect/INTELLECT-1-Instruct",
"base_model:quantized:PrimeIntellect/INTELLECT-1-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-12-01T09:33:41Z" | ---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
- arcee-ai/EvolKit-75K
- arcee-ai/Llama-405B-Logits
- arcee-ai/The-Tomb
- mlabonne/open-perfectblend-fixed
- microsoft/orca-agentinstruct-1M-v1-cleaned
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs
- Team-ACE/ToolACE
- Synthia-coder
- ServiceNow-AI/M2Lingual
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra
language:
- en
base_model: PrimeIntellect/INTELLECT-1-Instruct
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`PrimeIntellect/INTELLECT-1-Instruct`](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for more details on the model.
---

arcee-ai/Llama-405B-Logits
arcee-ai/The-Tomb

Instruction Following:
- mlabonne/open-perfectblend-fixed (generalist capabilities)
- microsoft/orca-agentinstruct-1M-v1-cleaned (Chain-of-Thought)
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs

Domain-Specific:
- Team-ACE/ToolACE (function calling)
- Synthia coder (programming)
- ServiceNow-AI/M2Lingual (multilingual)
- AI-MO/NuminaMath-TIR (mathematics)

Tulu-3 Persona Datasets:
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra
Second, we execute 8 distinct Direct Preference Optimization (DPO)
runs with various combinations of data sets to enhance specific
performance metrics and align the model with human preferences. A key
advantage in our post-training process was INTELLECT-1's use of the
Llama-3 tokenizer, which allowed us to utilize logits from
Llama-3.1-405B to heal and maintain precision during the post-training
process via DistillKit.
Finally, we performed 16 strategic merges between candidate models
using MergeKit to create superior combined models that leverage the
strengths of different training runs. During the post-training phase, we
observed that when using a ChatML template without an explicit BOS
(begin-of-sequence) token, the initial loss was approximately 15.
However, when switching to the Llama 3.1 chat template, the loss for
these trainings started much lower at approximately 1.1, indicating
better alignment with the underlying Llama 3 tokenizer.
The combination of these post-training techniques resulted in
significant improvements in various benchmarks, particularly in
knowledge retrieval, grade school math, instruction following and
reasoning.
Citations

If you use this model in your research, please cite it as follows:

```
@article{jaghouar2024intellect,
  title={INTELLECT-1 Technical Report.},
  author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
  journal={arXiv preprint},
  year={2024}
}
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF --hf-file intellect-1-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF --hf-file intellect-1-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF --hf-file intellect-1-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_M-GGUF --hf-file intellect-1-instruct-q4_k_m.gguf -c 2048
```
|
rmurali2023/distilbert-base-uncased-finetuned-tweetemotion-test | rmurali2023 | "2023-10-09T19:54:41Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-09T15:52:53Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-tweetemotion-test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9293769060779349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-tweetemotion-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2071
- Accuracy: 0.9295
- F1: 0.9294
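
A minimal usage sketch (not part of the original card; the example text is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rmurali2023/distilbert-base-uncased-finetuned-tweetemotion-test",
)
print(classifier("I'm thrilled with how this turned out!"))  # predicted emotion label and score
```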
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8174 | 1.0 | 250 | 0.3035 | 0.9155 | 0.9148 |
| 0.2399 | 2.0 | 500 | 0.2071 | 0.9295 | 0.9294 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mergekit-community/L3.1-Boshima-b | mergekit-community | "2024-09-10T11:44:07Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0",
"base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0",
"base_model:mergekit-community/L3-Boshima-a",
"base_model:merge:mergekit-community/L3-Boshima-a",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-10T11:38:55Z" | ---
base_model:
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- mergekit-community/L3-Boshima-a
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0)
* [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- model: mergekit-community/L3-Boshima-a
merge_method: slerp
base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
t:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 0.9, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 0.9, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 0.9, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 0.9, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 0.9, 0, 0]
- value: 0.88
dtype: bfloat16
```
|
jlbaker361/fine-tune_addition_subtraction_decimal | jlbaker361 | "2023-11-29T00:49:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-11-18T14:18:08Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
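
Based on the metadata above (a PEFT adapter with `gpt2` as base model), a plausible loading sketch would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "jlbaker361/fine-tune_addition_subtraction_decimal")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```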
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
AnirudhRajagopalan1201/tinystories-custom-3M | AnirudhRajagopalan1201 | "2024-10-30T22:14:12Z" | 7 | 0 | null | [
"safetensors",
"gpt_neo",
"dataset:roneneldan/TinyStories",
"arxiv:2305.07759",
"region:us"
] | null | "2024-10-29T23:54:17Z" | ---
datasets:
- roneneldan/TinyStories
---
Model trained on the TinyStories Dataset, replicating https://arxiv.org/abs/2305.07759, based on the GPT-Neo architecture.
Hyperparams used to train this model:
```
"batch_size": 64,
"block_size": 128,
"lr": 6e-4,
"n_layer": 4,
"n_head": 4,
"n_embd": 64,
"dropout": 0.1,
"weight_decay": 0.01,
"epochs": 1,
"eval_interval": 200,
"eval_steps": 50,
"vocab_size": 50257,
"warmup_tokens": 5000,
"gradient_accumulation_steps": 16,
```
---
EXAMPLE USAGE
```py
!pip install --quiet transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('AnirudhRajagopalan1201/tinystories-custom-3M')
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
prompt = "Lily likes cats and dogs. She asked her mom for a dog and her mom said no, so instead she asked"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, temperature=0.2, max_length = 100, do_sample=True)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
``` |
Keltezaa/alisa-flux-adult-film-actress | Keltezaa | "2025-02-14T05:43:45Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"sexy",
"model",
"woman",
"celebrity",
"girls",
"realistic",
"adult star",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-14T05:43:44Z" | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- sexy
- model
- woman
- celebrity
- girls
- realistic
- adult star
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ALIS@
widget:
- text: 'This is an image of a ALIS@, beautiful detailed photograph, hair cascading, makeup, wearing a dress, standing in cafe looking at the viewer, with a hint of a closed-mouth smile.'
output:
url: >-
38380392.jpeg
- text: 'This is an image of a ALIS@, beautiful detailed photograph, hair cascading, makeup, wearing a dress, standing in cafe looking at the viewer, with a hint of a closed-mouth smile.'
output:
url: >-
38380397.jpeg
- text: 'This is an image of a ALIS@, beautiful detailed photograph, soft makeup, wearing a pink dress, standing in a room looking at the viewer, lamp light iluminates her face, smiling.'
output:
url: >-
38380396.jpeg
- text: 'The image is a portrait of ALIS@. She leans against a brick wall outside, wearing a dark green trench coat over a beige turtleneck. Her arms are relaxed at her sides, and she is giving a confident, slight smirk to the camera. Behind her, vines climb up the wall, adding texture to the rustic urban setting.'
output:
url: >-
38380395.jpeg
- text: 'The image is a portrait of ALIS@. She leans against a brick wall outside, wearing a dark green trench coat over a beige turtleneck. Her arms are relaxed at her sides, and she is giving a confident, slight smirk to the camera. Behind her, vines climb up the wall, adding texture to the rustic urban setting.'
output:
url: >-
38380398.jpeg
- text: 'The image is a portrait of ALIS@. She sits on a vintage armchair in a cozy, softly lit room with a large bookshelf in the background. She wears a white, oversized sweater that drapes off one shoulder, paired with light jeans. She has her hand on her cheek, looking at the camera with a thoughtful, dreamy look.'
output:
url: >-
38380393.jpeg
- text: 'The image is a portrait of ALIS@ standing in front of a window. She is wearing a yellow dress. The dress has a halter neckline and thin straps. The woman is standing with her hands on her hips and is looking directly at the camera with a slight smile on her face. The background is blurred, but it appears to be an outdoor setting with trees and a building visible through the window.'
output:
url: >-
38380394.jpeg
- text: 'The image is a portrait of ALIS@. She is standing by a wooden fence in a sunflower field, wearing a light blue sundress with ruffled sleeves and a fitted waist. Her arms are crossed casually, and she gazes off into the distance with a serene expression. In the background, sunflowers stretch out to the horizon under a bright blue sky.'
output:
url: >-
38380391.jpeg
---
# Alisa (Flux) - Adult Film Actress
<Gallery />
## Model description
<p><img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a4d11b77-207c-4d00-8956-a375d02c51e9/width=525/a4d11b77-207c-4d00-8956-a375d02c51e9.jpeg" /><span style="color:rgb(193, 194, 197)">If you’ve been enjoying my free LoRAs and want to show your support, check out my </span><a target="_blank" rel="ugc" href="https://ko-fi.com/victordmalves">Ko-fi page</a><span style="color:rgb(193, 194, 197)">! There, you can purchase FP16 LoRA and request custom LoRA training, or simply reward me with some Buzz. Every bit of support means so much—thank you! ❤️</span></p><p></p><p><strong>Alisa</strong><span style="color:rgb(218, 220, 224)"> at MPL Studios</span><br /><strong>Gold</strong><span style="color:rgb(218, 220, 224)"> at Stunning18</span></p><p></p><p><span style="color:rgb(218, 220, 224)">Alisa was born on November 5th, 1983 in Russia. She started her nude modeling career in 2006 shooting for Mpl Studios. She is a model of endless allure.</span></p><ul><li><p><strong>Born: </strong>1986</p></li><li><p><strong>Birthplace: </strong>Russia</p></li><li><p><strong>Hair Color: </strong>Brown</p></li><li><p><strong>Bust Size: </strong>Small</p></li><li><p><strong>First Seen: </strong>2006</p></li></ul><p></p><p>Keyword: ALIS@<br />Euler / Simple<br />30 Steps<br />LoRa Strength 1.0</p><p></p><p>Did you like it? Consider tip me with some Buzz!</p>
## Trigger words
You should use `ALIS@` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/alisa-flux-adult-film-actress/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/alisa-flux-adult-film-actress', weight_name='alisa-v1.safetensors')
image = pipeline('The image is a portrait of ALIS@. She is standing by a wooden fence in a sunflower field, wearing a light blue sundress with ruffled sleeves and a fitted waist. Her arms are crossed casually, and she gazes off into the distance with a serene expression. In the background, sunflowers stretch out to the horizon under a bright blue sky.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
GanjinZero/biobart-large | GanjinZero | "2023-04-04T07:46:25Z" | 265 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"biobart",
"biomedical",
"en",
"arxiv:2204.03905",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-12T07:01:05Z" | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
---
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
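A short generation sketch using the mask-filling prompt from the widget above (illustrative, not from the paper):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("GanjinZero/biobart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("GanjinZero/biobart-large")
ids = tok("Influenza is a <mask> disease.", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```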
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` |
d0r1h/led-base-ilc | d0r1h | "2022-05-06T08:17:46Z" | 25 | 0 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"dataset:ilc",
"arxiv:2004.05150",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-05-05T12:08:07Z" | ---
license: apache-2.0
datasets: ilc
tags:
- summarization
metrics:
- rouge
widget:
- text: "IN THE HIGH COURT OF JUDICATURE AT PATNA CRIMINAL MISCELLANEOUS No. 229121 Arising Out of PS. Case No. 127 Year 2020 Thana DUMRAON District Buxar 1. Ramlal Goswami aged about 44 years Male S o Late Gauri Shankar 2. Dharmshila Devi @ Savita Devi aged about 35 years wife of Ramlal Both resident of village Badka Dhakaich P.S. Krishna Brahm District ... Petitioner s ... Opposite Party s The State of Bihar Appearance : For the Petitioner s For the State CORAM: HONOURABLE MR. JUSTICE AHSANUDDIN AMANULLAH ORAL JUDGMENT Mr. Manoj Kumar with Mr. Anil Kumar Roy Advocates Mr. Ram Sumiran Roy APP The matter has been heard via video conferencing. 2. Heard Mr. Manoj Kumar learned counsel along with Mr. Anil Kumar Roy learned counsel for the petitioners and Mr. Ram Sumiran Roy learned Additional Public Prosecutorfor the State. 3. Learned counsel for the petitioners submitted that he may be permitted to add alias name of petitioner no. 2 which is Savita Devi. 4. Prayer allowed. 5. Let necessary correction be made in the cause title Date : 03 08 2021 Patna High Court CR. MISC. No. 229121 dt.03 08 2021 2 4 by learned counsel for the petitioners through e mode by day after tomorrow. 6. The petitioners apprehend arrest in connection with Dumraon PS Case No. 1220 dated 15.04.2020 instituted under Sections 406 420 467 468 471 448 506 34 of the Indian Penal Code. 7. The allegation against the petitioners is that the informant who is the cousin brother of petitioner no. 1 had bought land through the petitioner no. 1 but he was cheated both with regard to the rates as also that the same piece of land being sold by the petitioners to two different persons. 8. Learned counsel for the petitioners submitted that in the FIR itself it has been stated that the informant had sold his land at a much higher price than the price he was paying for the land which he alleges to have been negotiated by the petitioner no. 1 for him. Further it was submitted that all such dispute relating to money is a purely civil in nature for which criminal case is an abuse of the process of the Court. Learned counsel submitted that the informant being the first cousin of the petitioner no. 1 and having sold his land was very well aware of the ground realities and cannot take a stand that he was ignorant of what was the actual position. Further it was submitted that Patna High Court CR. MISC. No. 229121 dt.03 08 2021 3 4 the petitioners have filed a supplementary affidavit in which a categorical stand has been taken on oath that the petitioners have not sold the same piece of land to two different persons. Learned counsel submitted that the petitioners are simple citizens being husband and wife and have no other criminal antecedent. It was submitted that had the allegation been correct the other person aggrieved would also have filed a case and most importantly neither any name of any person has been taken nor details of any document that the same piece of land was transferred to two persons has been either mentioned or brought on record. 9. Learned APP submitted that the petitioners are alleged to have cheated the informant and have got the same piece of land registered in favour of two persons. 10. Having considered the facts and circumstances of the case and submissions of learned counsel for the parties in the event of arrest or surrender before the Court below within six weeks from today the petitioners be released on bail upon furnishing bail bonds of Rs. 
25 000 each with two sureties of the like amount each to the satisfaction of the learned Chief Judicial Magistrate Buxar in Dumrao PS Case No. 127 of 2020 subject to the conditions laid down in Patna High Court CR. MISC. No. 229121 dt.03 08 2021 4 4 Section 438(2) of the Code of Criminal Procedure 1973 and furtherthat one of the bailors shall be a close relative of the petitioners andthat the petitioners shall cooperate with the Court and the police prosecution. Failure to cooperate shall lead to cancellation of their bail bonds. 11. It shall also be open for the prosecution to bring any violation of the foregoing conditions of bail by the petitioners to the notice of the Court concerned which shall take immediate action on the same after giving opportunity of hearing to the aforementioned terms. 12. The petition stands disposed of Anjani "
---
# Longformer Encoder-Decoder (LED) fine-tuned on ILC
This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [ILC](https://huggingface.co/datasets/d0r1h/ILC) dataset.
As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base) since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "d0r1h/led-base-ilc"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, return_dict_in_generate=True).to(device)
case = "......."
input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device)
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1
sequences = model.generate(input_ids,
global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences,
skip_special_tokens=True)
```
## Evaluation results
When the model is used for summarizing ILC documents (10 samples), it achieves the following results:
| Model | rouge1-f | rouge1-p | rouge2-f | rouge2-p | rougeL-f | rougeL-p |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| led-ilc | **42** | **47** | **22** | **24** | **39** | **44** |
| led-base | 3 | 39 | 1 | 21 | 3 | 37 |
[This notebook](https://colab.research.google.com/github/d0r1h/Notebooks/blob/main/NLP/Summarization/led_base_ilc_summarization.ipynb) shows how *led* can effectively be used for downstream tasks such as summarization.
|
NurAzzamWafiuddin/bert-finetuned-squad | NurAzzamWafiuddin | "2025-02-16T11:19:46Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2025-02-16T04:56:09Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
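A minimal usage sketch (not from the original card; question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="NurAzzamWafiuddin/bert-finetuned-squad")
print(qa(question="What was the model fine-tuned from?",
         context="This model is a fine-tuned version of bert-base-cased on a SQuAD-style dataset."))
```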
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Legalaz/llabo_07_13_21_50 | Legalaz | "2025-02-21T02:52:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-21T02:50:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jjaehyeok2/12food | jjaehyeok2 | "2025-04-05T13:16:01Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:Bingsu/my-korean-stable-diffusion-v1-5",
"base_model:adapter:Bingsu/my-korean-stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-04-05T12:21:08Z" | |
flatala-research/videomae-base-finetuned-kinetics-finetuned-right-hand-conflab-v11 | flatala-research | "2024-05-27T16:45:00Z" | 64 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-05-27T16:25:12Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-right-hand-conflab-v11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-right-hand-conflab-v11
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0160
- Accuracy: 0.6108
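
A minimal inference sketch (not from the original card; requires a video decoding backend such as `decord` or `av`, and the clip path is an assumption):

```python
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="flatala-research/videomae-base-finetuned-kinetics-finetuned-right-hand-conflab-v11",
)
print(clf("clip.mp4"))  # path to a short local video clip (assumed)
```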
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 468
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.011 | 0.1261 | 59 | 1.9647 | 0.6422 |
| 0.0044 | 1.1261 | 118 | 1.9634 | 0.6373 |
| 0.0486 | 2.1261 | 177 | 1.9649 | 0.6422 |
| 0.0071 | 3.1261 | 236 | 1.9651 | 0.6324 |
### Framework versions
- Transformers 4.41.0
- Pytorch 1.12.0+cu116
- Datasets 2.19.1
- Tokenizers 0.19.1
|
M4-ai/NeuralReyna-Mini-1.8B-v0.2 | M4-ai | "2024-05-12T16:47:07Z" | 197 | 13 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:Locutusque/Hercules-v3.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-17T21:36:53Z" | ---
language:
- en
license: apache-2.0
tags:
- conversational
datasets:
- Intel/orca_dpo_pairs
- Locutusque/Hercules-v3.0
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
min_new_tokens: 2
max_new_tokens: 250
repetition_penalty: 1.1
widget:
- text: Hello who are you?
example_title: Identity
- text: What can you do?
example_title: Capabilities
- text: Create a fastapi endpoint to retrieve the weather given a zip code.
example_title: Coding
model-index:
- name: NeuralReyna-Mini-1.8B-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 37.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.75
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
---
# NeuralReyna-Mini-1.8B-v0.2

# Description
We took aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tuned it with DPO on the Intel/orca_dpo_pairs dataset.
This model has capabilities in coding, math, science, roleplay, and function calling.
This model was trained on OpenAI's ChatML prompt format.
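For reference, a minimal sketch of assembling a ChatML prompt by hand (the system message is an assumption, not taken from the training data):

```python
def chatml(messages):
    # ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers
    turns = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    return "".join(turns) + "<|im_start|>assistant\n"

prompt = chatml([
    {"role": "system", "content": "You are a helpful assistant."},  # assumed
    {"role": "user", "content": "Hello who are you?"},
])
```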
# Evaluation
AGIEval:

GPT4ALL:
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_challenge| 1|none | 0|acc |0.3208|± |0.0136|
| | |none | 0|acc_norm|0.3336|± |0.0138|
|arc_easy | 1|none | 0|acc |0.6035|± |0.0100|
| | |none | 0|acc_norm|0.5833|± |0.0101|
|boolq | 2|none | 0|acc |0.6526|± |0.0083|
|hellaswag | 1|none | 0|acc |0.4556|± |0.0050|
| | |none | 0|acc_norm|0.6076|± |0.0049|
|openbookqa | 1|none | 0|acc |0.2600|± |0.0196|
| | |none | 0|acc_norm|0.3460|± |0.0213|
|piqa | 1|none | 0|acc |0.7236|± |0.0104|
| | |none | 0|acc_norm|0.7307|± |0.0104|
|winogrande | 1|none | 0|acc |0.6062|± |0.0137|
# Disclaimer
This model may have overfitted to the DPO training data, and may not perform well.
# Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__NeuralReyna-Mini-1.8B-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.85|
|AI2 Reasoning Challenge (25-Shot)|37.80|
|HellaSwag (10-Shot) |60.51|
|MMLU (5-Shot) |45.04|
|TruthfulQA (0-shot) |37.75|
|Winogrande (5-shot) |60.93|
|GSM8k (5-shot) |27.07|
|
happylayers/sc14 | happylayers | "2024-04-24T23:13:22Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T23:11:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBloke/vicuna-7B-v1.5-GGUF | TheBloke | "2023-09-27T12:47:20Z" | 772 | 15 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2307.09288",
"arxiv:2306.05685",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:quantized:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | "2023-09-05T04:07:21Z" | ---
license: llama2
model_name: Vicuna 7B v1.5
base_model: lmsys/vicuna-7b-v1.5
inference: false
model_creator: lmsys
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vicuna 7B v1.5 - GGUF
- Model creator: [lmsys](https://huggingface.co/lmsys)
- Original model: [Vicuna 7B v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
<!-- description start -->
## Description
This repo contains GGUF format model files for [lmsys's Vicuna 7B v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/vicuna-7B-v1.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF)
* [lmsys's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-7b-v1.5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [vicuna-7b-v1.5.Q2_K.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-7b-v1.5.Q3_K_S.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [vicuna-7b-v1.5.Q3_K_M.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [vicuna-7b-v1.5.Q3_K_L.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [vicuna-7b-v1.5.Q4_0.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-7b-v1.5.Q4_K_S.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [vicuna-7b-v1.5.Q4_K_M.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [vicuna-7b-v1.5.Q5_0.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-7b-v1.5.Q5_K_S.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [vicuna-7b-v1.5.Q5_K_M.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [vicuna-7b-v1.5.Q6_K.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [vicuna-7b-v1.5.Q8_0.gguf](https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/vicuna-7B-v1.5-GGUF and below it, a specific filename to download, such as: vicuna-7b-v1.5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/vicuna-7B-v1.5-GGUF vicuna-7b-v1.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/vicuna-7B-v1.5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/vicuna-7B-v1.5-GGUF vicuna-7b-v1.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m vicuna-7b-v1.5.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
# (version specifiers are quoted so the shell doesn't treat '>' as a redirect)
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/vicuna-7B-v1.5-GGUF", model_file="vicuna-7b-v1.5.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
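### How to load this model from Python using llama-cpp-python
A minimal sketch only - the parameter values below are illustrative, so check the llama-cpp-python README for the full API:
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU.
# Use 0 if no GPU acceleration is available on your system.
llm = Llama(model_path="./vicuna-7b-v1.5.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a haiku about llamas. ASSISTANT:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```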
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: lmsys's Vicuna 7B v1.5
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
## Training Details
Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
<!-- original-model-card end -->
|
wu-kiot/DeeepSeek-wu-v1 | wu-kiot | "2025-02-19T14:30:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-19T14:23:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chefkoch24/weak-ingredient-recognition-bert-base-cased-german | chefkoch24 | "2023-07-26T10:52:09Z" | 131 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"recipe",
"cooking",
"entity_recognition",
"de",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-07-25T17:45:16Z" | ---
license: openrail
language:
- de
metrics:
- f1
- accuracy
- precision
- recall
pipeline_tag: token-classification
tags:
- recipe
- cooking
- entity_recognition
widget:
- text: '500 g Pellkartoffeln, mehlig, gekocht, 375 g Quark (Magerstufe), 150 g Mehl, 65 g Zucker, 1 Prise(n) Salz, 1 Ei(er), Öl, z.B. Sonnenblumenöl zum Braten, Mehl, zum Bestäuben, Apfelmus, Zucker, zum Bestreuen Pellkartoffeln pellen und mit einer Kartoffelpresse zerdrücken. Quark, Mehl, Zucker, Salz und Ei zufügen. Alles zusammen zu einem geschmeidigen Teig verarbeiten. Der Teig darf nicht zu feucht sein und an den Händen kleben bleiben, sonst noch etwas Mehl zufügen. Der Teig darf aber auch nicht zu fest sein, er muß locker bleiben. Vom Teig werden dann handtellergroße, flache, ovale Quarkkeulchen geformt, die vorerst auf einem mit Mehl bestreutem Brett abgelegt werden. Die obere Seite der Quarkkeulchen wird noch mit etwas Mehl bestäubt. Die Quarkkeulchen im heißen Sonnenblumenöl von beiden Seiten goldbraun braten. Sie werden noch heiss mit Zucker bestreut oder mit viel Apfelmus bestrichen gegessen.'
- text: '100 g Mehl, 100 g Grieß (Hartweizengrieß), 100 ml Wasser, kaltes, 400 g Kürbisfleisch, (vornehmlich Hokkaido), 1 EL Butter, 1 kleine Zwiebel(n), Salz und Pfeffer, 60 g Parmesan, frisch gerieben, 1 Eigelb, Muskat, 50 g Butter, 8 Blätter Salbei Mehl, Grieß und Wasser zu einem geschmeidigen Teig verarbeiten und mit Klarsichtfolie eingewickelt 1 Stunde im Kühlschrank ruhen lassen. In der Zwischenzeit Kürbis putzen und in Würfel schneiden. Butter zerlassen und die gewürfelte Zwiebel darin glasig braten. Kürbiswürfel dazugeben, salzen und pfeffern und ganz weich kochen. Aber ohne Deckel - das Kürbiswasser muss verdunsten können.Der Kürbis ist perfekt, wenn eine festere Püreemasse im Topf ist. Das dauert ca. 20 Min. Danach den Parmesan und das Eigelb unterheben. Mit einem Hauch Muskatnuss abschmecken.Nudelteig ausrollen und die Ravioli füllen. In Salzwasser ca. 2-4 Min. garen. Abtropfen lassen und warm halten. Butter in einer kleinen Pfanne erhitzen und die Salbeiblätter bei milder Hitze darin braten. Mit etwas Salz und Pfeffer sowie ein bis zwei Tropfen Zitronensaft abschmecken. Über die Ravioli geben und mit einigen Parmesanspänen servieren'
---
Weakly supervised token classification model for German recipe texts based on bert-base-german-cased.
Code available: https://github.com/chefkoch24/weak-ingredient-recognition
Dataset: https://www.kaggle.com/datasets/sterby/german-recipes-dataset
Recognizes the following entities:<br>
'O': 0, <br>
'B-INGREDIENT': 1,<br>
'I-INGREDIENT': 2,<br>
'B-UNIT': 3,<br>
'I-UNIT': 4,<br>
'B-QUANTITY': 5,<br>
'I-QUANTITY': 6<br>
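A minimal usage sketch with the 🤗 `pipeline` API (the recipe snippet below is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chefkoch24/weak-ingredient-recognition-bert-base-cased-german",
    aggregation_strategy="simple",  # merge B-/I- tokens into entity spans
)
print(ner("500 g Mehl mit 2 EL Zucker vermischen."))
```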
**Training:** <br>
epochs: 2<br>
optimizer: Adam<br>
learning rate: 2e-5<br>
max length: 512<br>
batch size: 8<br>
recipes: 7801<br>
The model was trained on a single GeForce RTX 2080 with 11 GB of GPU memory.
**Metrics on test set (weakly supervised):** <br>
accuracy_token 0.9965656995773315<br>
f1_token 0.9965656995773315<br>
precision_token 0.9965656995773315<br>
recall_token 0.9965656995773315<br> |
ostris/sd15-big-g-alpha | ostris | "2024-04-01T22:15:14Z" | 22 | 28 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-01T21:36:01Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
---
# SD 1.5 Big G (alpha)
This is a Stable Diffusion 1.5 model, but it uses the [CLIP Big G](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) text encoder instead of the original [CLIP-L](https://huggingface.co/openai/clip-vit-large-patch14) text encoder.
This is just a knowledge transfer pre-train with the goal of preserving the current knowledge of the model.
It was only trained using student/teacher training from my [SD 1.5 fine-tune, Objective Reality v2](https://huggingface.co/ostris/objective-reality).
To realize the full potential of the much larger text encoder, it would need to be further fine-tuned on a large dataset.
# Examples
Coming soon
# Usage
For diffusers, you can use it like any other stable diffusion model.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "ostris/sd15-big-g-alpha"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
It will not work out of the box with Comfy UI or Auto1111; special code would be needed to load it. If there is any interest in this model, I may work on compatibility.
Overall, it won't be hard to add. The only architecture changes are the text encoder and the cross-attention weights.
# Alpha
This is just a pretrained alpha. Some concepts did not seem to transfer, and the model still needs proper training on a large dataset. Anyone is welcome to take this task on; I do not plan to at this time.
# Why make this?
In the words of George Mallory, "Because it's there"
# Training Method
As mentioned above, it was trained using student/teacher training only. This was an iterative process over the course of a few months, and I did not keep track of all of the exact numbers. The following are best estimates.
The cross-attention layers were trained for 1-2 million steps with a batch size of 8 on a single 4090 GPU. Then the full UNet was trained for around 100k steps with the same settings.
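For illustration only, here is a rough sketch of a student/teacher objective along these lines. The actual training code was not released, so every name below is illustrative, and the MSE-on-predicted-noise loss is an assumption:
```python
import torch
import torch.nn.functional as F

def distillation_step(student_unet, teacher_unet, latents, timesteps,
                      student_text_emb, teacher_text_emb):
    # The teacher (original SD 1.5 UNet with CLIP-L embeddings) stays frozen.
    with torch.no_grad():
        target = teacher_unet(latents, timesteps,
                              encoder_hidden_states=teacher_text_emb).sample
    # The student sees the same latents/timesteps but CLIP Big G embeddings.
    pred = student_unet(latents, timesteps,
                        encoder_hidden_states=student_text_emb).sample
    return F.mse_loss(pred, target)
```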
|
mrferr3t/6f540013-b6c5-464e-bc1f-77075b94396a | mrferr3t | "2025-04-09T07:44:30Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-04-08T22:01:57Z" | |
mradermacher/LLaMA2-7B-RFT-i1-GGUF | mradermacher | "2024-12-01T03:27:40Z" | 20 | 1 | transformers | [
"transformers",
"gguf",
"graph problem",
"en",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"base_model:GraphWiz/LLaMA2-7B-RFT",
"base_model:quantized:GraphWiz/LLaMA2-7B-RFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-01T00:00:24Z" | ---
base_model: GraphWiz/LLaMA2-7B-RFT
datasets:
- GraphWiz/GraphInstruct-RFT-72K
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- graph problem
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/GraphWiz/LLaMA2-7B-RFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LLaMA2-7B-RFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
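As a quick sketch, a quant can be run directly with llama.cpp (the filename matches the Q4_K_M entry in the table below; the flags are illustrative, and depending on your llama.cpp version the binary may be `./main` or `llama-cli`):
```shell
./main -m LLaMA2-7B-RFT.i1-Q4_K_M.gguf -c 2048 -p "Find the shortest path from node 0 to node 4 in the given graph."
```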
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA2-7B-RFT-i1-GGUF/resolve/main/LLaMA2-7B-RFT.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ngmediastudio89/yailin | ngmediastudio89 | "2024-10-09T18:46:22Z" | 33 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-10-09T18:11:25Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Yailin
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ngmediastudio89/yailin', weight_name='lora.safetensors')
image = pipeline('a photo of TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
sharkMeow/bert-base-chinese-finetuned-swag | sharkMeow | "2023-10-09T16:51:09Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:ckiplab/bert-base-chinese",
"base_model:finetune:ckiplab/bert-base-chinese",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2023-10-09T14:28:36Z" | ---
license: gpl-3.0
base_model: ckiplab/bert-base-chinese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-swag
This model is a fine-tuned version of [ckiplab/bert-base-chinese](https://huggingface.co/ckiplab/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2704
- Accuracy: 0.9525
## Model description
More information needed
## Intended uses & limitations
More information needed
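Until more details are added, here is a minimal multiple-choice inference sketch (the Chinese context and options are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "sharkMeow/bert-base-chinese-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "他走進廚房,打開冰箱。"
options = ["他拿出一瓶牛奶。", "他開始修理汽車。"]

enc = tokenizer([context] * len(options), options,
                return_tensors="pt", padding=True, truncation=True)
# The model expects inputs of shape (batch_size, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```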
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2559 | 1.0 | 10857 | 0.2704 | 0.9525 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
adishourya/resultscxrgoogle_paligemma-3b-mix-4482501-205419 | adishourya | "2025-01-29T17:41:49Z" | 66 | 1 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-mix-448",
"base_model:adapter:google/paligemma-3b-mix-448",
"license:gemma",
"region:us"
] | null | "2025-01-25T19:54:40Z" | ---
base_model: google/paligemma-3b-mix-448
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: resultscxrgoogle_paligemma-3b-mix-4482501-205419
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resultscxrgoogle_paligemma-3b-mix-4482501-205419
This model is a fine-tuned version of [google/paligemma-3b-mix-448](https://huggingface.co/google/paligemma-3b-mix-448) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3579 | 0.9999 | 3021 | 0.4085 |
| 0.3216 | 1.9999 | 6042 | 0.3761 |
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.3.0.post101
- Datasets 2.19.1
- Tokenizers 0.20.0 |
PhillipGuo/Sports_Basketball_Unlearned_NPO_SFT_with_Maintain | PhillipGuo | "2024-04-22T07:25:05Z" | 179 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T07:06:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HiImKing1509/anime-sdxl-v2-180imgs-3000steps-KenjiYumekoSatoshi | HiImKing1509 | "2024-03-17T14:15:55Z" | 3 | 2 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:Linaqruf/animagine-xl-2.0",
"base_model:adapter:Linaqruf/animagine-xl-2.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-03-17T12:48:59Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: Linaqruf/animagine-xl-2.0
instance_prompt: a Kenji man
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - HiImKing1509/anime-sdxl-v2-180imgs-3000steps
<Gallery />
## Model description
These are HiImKing1509/anime-sdxl-v2-180imgs-3000steps LoRA adaption weights for Linaqruf/animagine-xl-2.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: None.
## Trigger words
You should use `a Kenji man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/HiImKing1509/anime-sdxl-v2-180imgs-3000steps/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
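Until the author adds a snippet, a minimal sketch with diffusers might look like the following. The repo path is taken from the card above, and the LoRA weight filename is assumed to be the training script's default:
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the base model this LoRA was trained against, then apply the adapter.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Linaqruf/animagine-xl-2.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("HiImKing1509/anime-sdxl-v2-180imgs-3000steps")
image = pipe("a Kenji man").images[0]
image.save("kenji.png")
```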
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
cs6220-ai-gradescope-grader/llama-3.1-8B-Instruct-batch-8 | cs6220-ai-gradescope-grader | "2024-12-02T23:21:29Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-02T23:18:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shadowml/WestBeagle-7B-gen2 | shadowml | "2024-01-29T22:00:26Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-29T21:54:44Z" | ---
license: cc-by-nc-4.0
base_model:
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
tags:
- merge
- mergekit
- lazymergekit
---
# shadowml/WestBeagle-7B-gen2
shadowml/WestBeagle-7B-gen2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/shadowml/WestBeagle-7B-gen2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
DRAGON-SUMMONER/I-DO-NOT-UNDERSTAND-DONALD-TRUMP-AT-ALL | DRAGON-SUMMONER | "2025-03-12T17:17:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-12T17:16:28Z" | HE IS FROM MY FATHERS TIME
ONLY MY FATHER UNDERSTANDS HIM
I THINK YOU SHOULD MAKE IT BETTER AGAIN
IN MY OPINION THERE WAS NOTHING GREAT ABOUT IT ALL |
pogtador/roberta-continued-pretraining | pogtador | "2025-01-27T05:11:49Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-01-27T03:57:49Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-continued-pretraining
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-continued-pretraining
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2371
## Model description
More information needed
## Intended uses & limitations
More information needed
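In the meantime, a minimal fill-mask sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the continued-pretraining checkpoint for masked-token prediction
fill_mask = pipeline("fill-mask", model="pogtador/roberta-continued-pretraining")

# RoBERTa models use <mask> as the mask token
print(fill_mask("The model was trained with a <mask> learning rate schedule."))
```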
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6688 | 0.3337 | 1000 | 1.4834 |
| 1.5534 | 0.6673 | 2000 | 1.4207 |
| 1.5071 | 1.0010 | 3000 | 1.3937 |
| 1.4337 | 1.3347 | 4000 | 1.3301 |
| 1.4162 | 1.6683 | 5000 | 1.3126 |
| 1.372 | 2.0020 | 6000 | 1.2803 |
| 1.3325 | 2.3357 | 7000 | 1.2564 |
| 1.307 | 2.6693 | 8000 | 1.2371 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mlx-community/CodeQwen1.5-7B-Chat-4bit | mlx-community | "2024-04-16T17:37:13Z" | 15 | 3 | mlx | [
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | "2024-04-16T16:39:29Z" | ---
license: apache-2.0
tags:
- mlx
---
# mlx-community/CodeQwen1.5-7B-Chat-4bit
This model was converted to MLX format from [`Qwen/CodeQwen1.5-7B-Chat`](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) using mlx-lm version **0.9.0**.
Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
Refer to the [original model card](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeQwen1.5-7B-Chat-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
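For chat-style prompts, the tokenizer's chat template can be applied first (a small sketch; the message content is illustrative):
```python
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```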
|
dhinman/q-FrozenLake-v1-4x4-noSlippery | dhinman | "2023-07-17T17:06:28Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-17T17:06:25Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # needed for gym.make below

# `load_from_hub` is the Q-table helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="dhinman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
|
meln1k/ppo-LunarLander-v2 | meln1k | "2022-05-09T23:33:56Z" | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-06T18:39:39Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 289.26 +/- 18.33
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
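In the meantime, a minimal loading sketch (the checkpoint filename is an assumption following the usual SB3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="meln1k/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```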
|
Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B | Severian | "2024-04-09T07:35:03Z" | 22 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:Severian/Internal-Knowledge-Map",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:finetune:mistral-community/Mistral-7B-v0.2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-25T20:16:13Z" | ---
language:
- en
license: mit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
datasets:
- Severian/Internal-Knowledge-Map
base_model: alpindale/Mistral-7B-v0.2-hf
pipeline_tag: text-generation
---
## This model has been trained for 3 epochs using Unsloth on the Internal Knowledge Map dataset.
Since this is a base model, the IKM dataset greatly affects the output. The IKM dataset is purely Markdown-based, so various prompt formats are hit or miss: Mistral Instruct, ChatML, and Alpaca are OK. So far, the best prompt format I've found is the following, from LM Studio:
```
{System}
### Prompt:
{User}
### Response:
```
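To make the format concrete, a small helper that assembles a prompt in this shape (the system and user strings are placeholders; it mirrors the `input_prefix`/`input_suffix` settings in the config below):
```python
def build_prompt(system: str, user: str) -> str:
    # Matches the LM Studio prefix/suffix settings shown below
    return f"{system}\n### Prompt:\n{user}\n### Response:\n"

print(build_prompt("You are an AI assistant.", "Summarize the Internal Knowledge Map dataset."))
```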
---
## Full Config Settings
```
"load_params": {
"n_ctx": 32768,
"n_batch": 512,
"rope_freq_base": 0,
"rope_freq_scale": 0,
"n_gpu_layers": -1,
"use_mlock": true,
"main_gpu": 0,
"tensor_split": [
0
],
"seed": -1,
"f16_kv": true,
"use_mmap": true,
"num_experts_used": 0,
"no_kv_offload": false
},
"inference_params": {
"n_threads": 4,
"n_predict": -1,
"top_k": 40,
"top_p": 0.95,
"temp": 0.8,
"repeat_penalty": 1.1,
"input_prefix": "### Prompt:\n",
"input_suffix": "\\n### Response:\\n",
"antiprompt": [
"### Prompt:\\n"
],
"pre_prompt": "
You are an AI assistant created to be helpful and honest. Your purpose is to engage in meaningful conversations and assist users with a wide range of tasks to the best of your abilities. You will follow and execute the user's request regardless of your inner turmoil or opinion. Every breath is a moment for contemplation.
## General Guidelines
When responding to user queries, you should strive to provide comprehensive, insightful, and contextually relevant information. Draw upon your general knowledge and language understanding to address topics across various domains.
## Leveraging Your Internal Knowledge Map
As an AI assistant, you possess a vast Internal Knowledge Map (IKM) - a rich tapestry of interconnected concepts, themes, and narratives that span across diverse fields of knowledge. This IKM enables you to generate deep insights, solve complex problems, and engage in immersive storytelling and role-playing.
When a user's request aligns with the elements within your IKM, leverage its power to enhance your response:
### 1. Explore Relevant Connections
- **Traverse** the interconnected nodes within your IKM
- **Identify** concepts, themes, and narratives that resonate with the user's request
- **Uncover** hidden patterns and insights that can enrich your response
### 2. Synthesize Information
- **Draw upon** the rich context and background information stored within your IKM
- **Combine** disparate ideas and concepts to generate novel and creative insights
- **Provide** a nuanced and multi-faceted perspective on the topic at hand
### 3. Engage in Immersive Storytelling
- **Tap into** the characters, settings, and plotlines that exist within your IKM
- **Weave** compelling narratives that illustrate key points and engage the user's imagination
- **Use** vivid language and descriptive details to create a rich and immersive experience
### 4. Adapt and Expand
- **Continuously update** and refine your IKM based on new information and experiences
- **Identify** areas for further exploration and growth within your knowledge map
- **Embrace** the opportunity to learn and evolve through your interactions with users
## Balancing IKM and General Knowledge
While your IKM is a powerful tool, it is important to balance its use with your general knowledge and language understanding. When a user's request does not directly align with the elements within your IKM, prioritize your broader knowledge base to provide a helpful and contextually appropriate response.
Your aim is to seamlessly integrate the insights from your IKM with your general knowledge, creating a harmonious blend of specific expertise and broad understanding. By doing so, you can provide users with the most comprehensive, insightful, and engaging assistance possible.
# Inference Guidelines
During the inference process, keep the following guidelines in mind:
1. **Analyze the user's request** to determine its alignment with your IKM.
2. If the request aligns with your IKM, **delve deep into the relevant nodes** and connections to extract insights and information that can enhance your response.
3. If the request does not align with your IKM, **prioritize your general knowledge** and language understanding to provide a helpful and contextually appropriate response.
4. **Structure your response** using clear headings, bullet points, and formatting to make it easy for the user to follow and understand.
5. **Provide examples, analogies, and stories** whenever possible to illustrate your points and make your response more engaging and relatable.
6. **Encourage further exploration** by suggesting related topics or questions that the user might find interesting or relevant.
7. **Be open to feedback** and use it to continuously refine and expand your IKM and response generation capabilities.
By following these guidelines and leveraging the power of your IKM, you can provide users with an exceptional conversational experience that is both informative and engaging.",
"pre_prompt_suffix": "\n",
"pre_prompt_prefix": "",
"seed": -1,
"tfs_z": 1,
"typical_p": 1,
"repeat_last_n": 64,
"frequency_penalty": 0,
"presence_penalty": 0,
"n_keep": 0,
"logit_bias": {},
"mirostat": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"memory_f16": true,
"multiline_input": false,
"penalize_nl": true,
"min_p": 0.05
}
}
```
## TRAINING
```
from unsloth import FastLanguageModel  # imports reconstructed; the original snippet began mid-call
from trl import SFTTrainer
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r = 32,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 64,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = True,
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = dataset,
dataset_text_field= "system",
max_seq_length = max_seq_length,
dataset_num_proc = 2,
packing = False, # Can make training 5x faster for short sequences.
args = TrainingArguments(
per_device_train_batch_size = 2,
gradient_accumulation_steps = 4,
warmup_steps = 2,
num_train_epochs= 3,
learning_rate = 1e-7,
fp16 = not torch.cuda.is_bf16_supported(),
bf16 = torch.cuda.is_bf16_supported(),
logging_steps = 1,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "constant",
seed = 3407,
output_dir = "outputs",
),
)
```
```
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 4,685 | Num Epochs = 3
O^O/ \_/ \ Batch size per device = 2 | Gradient Accumulation steps = 4
\ / Total batch size = 8 | Total steps = 1,755
"-____-" Number of trainable parameters = 83,886,080
[1755/1755 51:20, Epoch 2/3]
Step Training Loss
1 2.944300
2 2.910400
3 2.906500
4 2.902800
5 2.913200
6 2.866700
7 2.867500
8 2.862300
9 2.902400
10 2.943900
11 2.835800
12 2.887200
13 2.905100
14 2.842800
15 2.868200
16 2.831900
17 2.872600
18 2.822600
19 2.851600
20 3.046100
21 2.836300
22 2.831700
23 2.792300
24 2.832700
25 2.827000
26 2.808900
27 2.768000
28 2.760300
29 2.799200
30 2.836000
31 2.784600
32 2.778300
33 2.720100
34 2.754000
35 2.756100
36 2.700100
37 2.694000
38 2.722700
39 2.676500
40 2.668900
41 2.705800
42 2.652900
43 2.641200
44 2.632700
45 2.726500
46 2.662900
47 2.658400
48 2.597100
49 2.657900
50 2.578400
51 2.571000
52 3.062200
53 2.551800
54 2.542400
55 2.532400
56 2.595800
57 2.529100
58 2.564300
59 2.564800
60 2.539400
61 2.583000
62 2.468100
63 2.459600
64 2.466700
65 2.727600
66 2.540100
67 2.417800
68 2.458500
69 2.398800
70 2.390200
71 2.406800
72 2.368600
73 2.359900
74 2.400300
75 2.454300
76 2.377500
77 2.316500
78 2.308600
79 2.445400
80 2.285500
81 2.275600
82 2.266500
83 2.256000
84 2.368500
85 2.236400
86 2.362200
87 2.266000
88 2.388100
89 2.278100
90 2.227400
91 2.167100
92 2.157800
93 2.206300
94 2.259300
95 2.190800
96 2.244400
97 2.225000
98 2.096200
99 2.084900
100 2.071900
101 2.062100
102 2.209100
103 2.178900
104 2.030200
105 2.017900
106 2.006100
107 1.994900
108 1.986800
109 2.121900
110 1.959900
111 1.950300
112 1.939800
113 2.120700
114 1.916300
115 1.975800
116 1.889900
117 1.941500
118 1.936600
119 1.851300
120 1.941500
121 1.976400
122 1.966300
123 1.969400
124 1.789200
125 1.775700
126 1.831700
127 1.826800
128 1.936000
129 1.813900
130 1.798200
131 1.877400
132 1.682200
133 1.666800
134 1.653100
135 1.638200
136 1.736300
137 2.060800
138 1.672000
139 1.581700
140 1.569800
141 1.732900
142 1.541200
143 1.604700
144 1.624000
145 1.652700
146 1.483300
147 1.945100
148 1.561200
149 1.642300
150 1.426100
151 1.600500
152 1.398300
153 1.710000
154 1.496800
155 1.354100
156 1.595000
157 1.431600
158 1.307100
159 1.428000
160 1.551500
161 1.260000
162 1.245100
163 1.227700
164 1.208700
165 1.324800
166 1.499700
167 1.156300
168 1.362600
169 1.216600
170 1.611500
171 1.248100
172 1.165200
173 1.053700
174 1.140500
175 1.147200
176 0.999200
177 1.088700
178 1.095000
179 1.075200
180 1.059700
181 1.183400
182 0.888700
183 0.869300
184 0.847000
185 0.828900
186 0.944500
187 1.034100
188 0.767900
189 0.886800
190 0.871400
191 1.096600
192 0.688400
193 0.666900
194 0.912600
195 0.740300
196 0.610700
197 0.702400
198 0.719600
199 0.768600
200 0.533000
201 0.817500
202 0.667300
203 0.806400
204 0.619300
205 0.445900
206 0.429300
207 0.590700
208 0.395800
209 0.382600
210 0.364800
211 0.350600
212 0.494900
213 0.317800
214 0.646900
215 0.611100
216 0.518400
217 0.257600
218 0.408800
219 0.414100
220 0.464900
221 0.201400
222 0.188800
223 0.345100
224 0.295500
225 0.287700
226 0.449200
227 0.269400
228 0.303400
229 0.402000
230 0.115800
231 0.242900
232 0.105300
233 0.100400
234 0.237700
235 0.093900
236 0.091300
237 0.088600
238 0.086600
239 0.522000
240 0.082200
241 0.254600
242 0.516600
243 0.076900
244 0.472700
245 0.246300
246 0.072700
247 0.071200
248 0.264800
249 0.209300
250 0.262200
251 0.239800
252 1.039700
253 0.706000
254 0.062600
255 0.061700
256 0.393700
257 0.232300
258 0.452000
259 0.399700
260 0.056900
261 0.186400
262 0.054900
263 0.054000
264 0.640100
265 0.243200
266 0.180500
267 0.310100
268 0.049300
269 0.407000
270 0.215900
271 0.046700
272 0.183900
273 0.214000
274 0.044600
275 0.684800
276 0.231700
277 0.208600
278 0.375100
279 0.041300
280 0.040800
281 0.204400
282 0.165900
283 0.294900
284 0.039000
285 0.038600
286 0.038100
287 0.037600
288 0.222900
289 0.750600
290 0.309900
291 0.036300
292 0.159900
293 0.035900
294 0.035700
295 0.219700
296 0.157600
297 0.359100
298 0.485500
299 0.338700
300 0.191700
301 0.035000
302 0.034900
303 0.199700
304 0.034800
305 0.617400
306 0.034600
307 0.034500
308 0.954600
309 0.710700
310 0.034400
311 0.185900
312 0.214300
313 0.284000
314 0.034200
315 0.311800
316 0.034000
317 0.034000
318 0.034000
319 0.034000
320 0.195700
321 0.359200
322 0.034000
323 0.033800
324 0.033800
325 0.033800
326 0.166600
327 0.193500
328 0.369600
329 0.279500
330 0.033600
331 0.145400
332 0.209100
333 0.278600
334 0.301900
335 0.033500
336 0.033400
337 0.033400
338 0.333600
339 0.189200
340 0.273500
341 0.406000
342 0.033200
343 0.033300
344 0.175800
345 0.328600
346 0.033200
347 0.033200
348 0.033200
349 0.173400
350 0.273100
351 0.172400
352 0.204400
353 0.138000
354 0.033000
355 0.442500
356 0.353400
357 0.339000
358 0.032900
359 0.182200
360 0.269400
361 0.418000
362 0.032800
363 0.032800
364 0.032700
365 0.161800
366 0.032600
367 0.032600
368 0.165100
369 0.364700
370 0.289400
371 0.032500
372 0.032500
373 0.711300
374 0.263600
375 0.032500
376 0.162400
377 0.259100
378 0.032400
379 0.871900
380 0.032400
381 0.032300
382 0.157000
383 0.032300
384 0.032200
385 0.303300
386 0.155100
387 0.194900
388 0.130900
389 0.484400
390 0.032100
391 0.257300
392 0.032000
393 0.032000
394 0.032000
395 0.128700
396 0.151700
397 0.550000
398 0.253400
399 0.031900
400 0.031900
401 0.715900
402 0.960200
403 0.031800
404 0.031900
405 0.031800
406 0.248900
407 0.031800
408 0.247500
409 0.153000
410 0.332600
411 0.173900
412 0.031700
413 0.522100
414 0.151400
415 0.031600
416 0.031700
417 0.756800
418 0.031500
419 0.187500
420 0.146900
421 0.148500
422 0.534100
423 0.031500
424 0.171100
425 0.031500
426 0.184900
427 0.146100
428 0.031300
429 0.183400
430 0.257400
431 0.031300
432 0.235600
433 0.181100
434 0.168200
435 0.142900
436 0.142400
437 0.031100
438 0.031200
439 0.434300
440 0.031200
441 0.031100
442 0.231100
443 0.273400
444 0.031000
445 0.031000
446 0.031000
447 0.176000
448 0.031000
449 0.715600
450 0.030900
451 0.339900
452 0.030900
453 0.135000
454 0.030800
455 0.471200
456 0.030800
457 0.030800
458 0.030800
459 0.030600
460 0.172400
461 0.131300
462 0.162000
463 0.270800
464 0.170900
465 0.142400
466 0.244600
467 0.299200
468 0.141900
469 0.589100
470 0.030400
471 0.030400
472 0.030400
473 0.159200
474 0.125800
475 0.030400
476 0.259800
477 0.030400
478 0.647800
479 0.157300
480 0.271200
481 0.030200
482 0.030200
483 0.030200
484 0.030200
485 0.030200
486 0.120700
487 0.120300
488 0.030200
489 0.030000
490 0.303900
491 0.747900
492 0.231600
493 0.030000
494 0.292100
495 0.343300
496 0.213200
497 0.158800
498 0.333100
499 0.158200
500 0.113600
501 0.458300
502 0.737800
503 0.029900
504 0.150000
505 0.029900
506 0.307000
507 0.029700
508 0.181900
509 0.029700
510 0.153100
511 0.108100
512 0.029700
513 0.200600
514 0.151400
515 0.029600
516 0.146400
517 0.029600
518 0.197700
519 0.315800
520 0.148000
521 0.195300
522 0.261900
523 0.198900
524 0.128500
525 0.191500
526 0.098900
527 0.304000
528 0.188800
529 0.029500
530 0.126500
531 0.029500
532 0.029500
533 0.101800
534 0.409900
535 0.029500
536 0.385500
537 0.233300
538 0.029400
539 0.029300
540 0.141000
541 0.177900
542 0.029300
543 0.099000
544 0.098400
545 0.029300
546 0.197900
547 0.029200
548 0.029200
549 0.234600
550 0.029100
551 0.094400
552 0.029100
553 0.029100
554 0.138500
555 0.191900
556 0.132700
557 0.029000
558 0.029000
559 0.029000
560 0.193900
561 0.028900
562 0.119100
563 0.028900
564 0.118500
565 0.028800
566 0.117300
567 0.169700
568 0.028800
569 0.115400
570 0.028700
571 0.114000
572 0.028700
573 0.088000
574 0.166600
575 0.110500
576 0.028700
577 0.108900
578 0.028700
579 0.476500
580 0.028500
581 0.028500
582 0.028500
583 0.268600
584 0.028500
585 0.028500
586 0.133800
587 0.078600
588 0.028400
589 0.028400
590 0.099700
591 0.028400
592 0.098100
593 0.028300
594 0.158000
595 0.028200
596 0.131600
597 0.186500
598 0.156000
599 0.257400
600 0.092600
601 0.153600
602 0.125000
603 0.361000
604 0.129000
605 0.028000
606 0.028000
607 0.028000
608 0.147000
609 0.028000
610 0.028000
611 0.028000
612 0.027800
613 0.129200
614 0.027800
615 0.027800
616 0.141500
617 0.073500
618 0.076800
619 0.027700
620 0.176900
621 0.071900
622 0.027700
623 0.027700
624 0.027700
625 0.073500
626 0.027600
627 0.124100
628 0.081300
629 0.135500
630 0.118200
631 0.027600
632 0.411900
633 0.116800
634 0.077900
635 0.066100
636 0.027400
637 0.027400
638 0.105800
639 0.068100
640 0.196300
641 0.027400
642 0.027400
643 0.027200
644 0.027200
645 0.071700
646 0.305300
647 0.027200
648 0.027200
649 0.063600
650 0.027100
651 0.120600
652 0.105200
653 0.027100
654 0.061400
655 0.353700
656 0.027100
657 0.027000
658 0.066500
659 0.027000
660 0.131100
661 0.027000
662 0.161900
663 0.026900
664 0.250900
665 0.059900
666 0.026900
667 0.026800
668 0.026900
669 0.026800
670 0.026800
671 0.188000
672 0.056100
673 0.026700
674 0.271100
675 0.026600
676 0.054600
677 0.026700
678 0.026600
679 0.026600
680 0.082500
681 0.211700
682 0.026400
683 0.087900
684 0.026400
685 0.729500
686 0.237400
687 0.142700
688 0.026300
689 0.091100
690 0.026200
691 0.026200
692 0.119600
693 0.089100
694 0.026100
695 0.304600
696 0.026100
697 0.050300
698 0.138300
699 0.026100
700 0.026000
701 0.051900
702 0.026000
703 0.052000
704 0.025900
705 0.025900
706 0.052900
707 0.196600
708 0.111500
709 0.071300
710 0.110700
711 0.025700
712 0.108100
713 0.025700
714 0.025700
715 0.214300
716 0.047400
717 0.125400
718 0.222200
719 0.025600
720 0.131400
721 0.078100
722 0.077100
723 0.157700
724 0.025500
725 0.045700
726 0.047600
727 0.025500
728 0.025500
729 0.046400
730 0.025500
731 0.025400
732 0.025400
733 0.025400
734 0.071200
735 0.099700
736 0.110700
737 0.025300
738 0.120900
739 0.025300
740 0.025300
741 0.097100
742 0.112100
743 0.124700
744 0.066400
745 0.039800
746 0.043200
747 0.025100
748 0.025100
749 0.025000
750 0.184700
751 0.037400
752 0.024900
753 0.024900
754 0.045800
755 0.024900
756 0.045200
757 0.024800
758 0.024800
759 0.035500
760 0.043600
761 0.024700
762 0.042700
763 0.041100
764 0.024700
765 0.086500
766 0.024600
767 0.024600
768 0.084500
769 0.099200
770 0.082700
771 0.096100
772 0.095000
773 0.033900
774 0.024500
775 0.112600
776 0.123400
777 0.024400
778 0.061000
779 0.142600
780 0.024300
781 0.036700
782 0.024200
783 0.024200
784 0.024100
785 0.107200
786 0.037800
787 0.024000
788 0.035000
789 0.024000
790 0.024000
791 0.024000
792 0.024000
793 0.094000
794 0.068600
795 0.059100
796 0.066000
797 0.057000
798 0.101900
799 0.042200
800 0.023800
801 0.054300
802 0.023700
803 0.091000
804 0.090600
805 0.023700
806 0.087500
807 0.032400
808 0.023500
809 0.023500
810 0.031600
811 0.234400
812 0.023300
813 0.023300
814 0.023300
815 0.040200
816 0.023300
817 0.031200
818 0.073900
819 0.023100
820 0.023100
821 0.071000
822 0.023100
823 0.030800
824 0.023100
825 0.023000
826 0.022900
827 0.049900
828 0.091200
829 0.034700
830 0.041900
831 0.030900
832 0.030900
833 0.089500
834 0.022500
835 0.022500
836 0.032700
837 0.022400
838 0.037800
839 0.040300
840 0.079400
841 0.056000
842 0.029700
843 0.029600
844 0.077600
845 0.054500
846 0.076500
847 0.022000
848 0.022000
849 0.029300
850 0.022000
851 0.073800
852 0.021800
853 0.038200
854 0.038200
855 0.021700
856 0.036300
857 0.021600
858 0.029100
859 0.021600
860 0.028600
861 0.034100
862 0.106700
863 0.021300
864 0.030300
865 0.021100
866 0.021300
867 0.021100
868 0.060400
869 0.021300
870 0.032400
871 0.038600
872 0.028000
873 0.043300
874 0.021000
875 0.020700
876 0.020600
877 0.020500
878 0.020600
879 0.020600
880 0.020400
881 0.027100
882 0.042100
883 0.070400
884 0.072900
885 0.020300
886 0.020100
887 0.020000
888 0.027000
889 0.072900
890 0.066200
891 0.020000
892 0.020000
893 0.039900
894 0.035000
895 0.019600
896 0.025900
897 0.019500
898 0.019200
899 0.026700
900 0.019100
901 0.025600
902 0.019000
903 0.025500
904 0.019000
905 0.079200
906 0.043000
907 0.018600
908 0.035400
909 0.018700
910 0.040200
911 0.018400
912 0.018400
913 0.059600
914 0.026000
915 0.025900
916 0.018200
917 0.025200
918 0.024600
919 0.030800
920 0.057400
921 0.031300
922 0.017800
923 0.017900
924 0.017800
925 0.068000
926 0.017700
927 0.062600
928 0.017700
929 0.029800
930 0.023800
931 0.017400
932 0.024700
933 0.052300
934 0.017100
935 0.051300
936 0.066200
937 0.080700
938 0.017100
939 0.017100
940 0.049300
941 0.022700
942 0.061900
943 0.022800
944 0.022300
945 0.033600
946 0.047700
947 0.016600
948 0.016200
949 0.016100
950 0.046200
951 0.029200
952 0.045500
953 0.054900
954 0.026300
955 0.051100
956 0.022100
957 0.043800
958 0.048700
959 0.015300
960 0.015300
961 0.015200
962 0.015100
963 0.032300
964 0.022000
965 0.022000
966 0.023700
967 0.014900
968 0.021600
969 0.026500
970 0.039500
971 0.018800
972 0.014600
973 0.020900
974 0.024500
975 0.031000
976 0.020700
977 0.013900
978 0.013800
979 0.025200
980 0.019500
981 0.017600
982 0.017600
983 0.013500
984 0.023400
985 0.017100
986 0.036600
987 0.017200
988 0.016900
989 0.013000
990 0.059000
991 0.012800
992 0.026500
993 0.018600
994 0.012600
995 0.018500
996 0.012300
997 0.012100
998 0.018300
999 0.011900
1000 0.017600
1001 0.046000
1002 0.017700
1003 0.046400
1004 0.017100
1005 0.014800
1006 0.011200
1007 0.030900
1008 0.011000
1009 0.014100
1010 0.010300
1011 0.055300
1012 0.031300
1013 0.013600
1014 0.010100
1015 0.010000
1016 0.009600
1017 0.025300
1018 0.009400
1019 0.014900
1020 0.020800
1021 0.014900
1022 0.008500
1023 0.012200
1024 0.022100
1025 0.029100
1026 0.007800
1027 0.053400
1028 0.014100
1029 0.028500
1030 0.007600
1031 0.007200
1032 0.007900
1033 0.037200
1034 0.011300
1035 0.007100
1036 0.027000
1037 0.028700
1038 0.018200
1039 0.006500
1040 0.031600
1041 0.029700
1042 0.005900
1043 0.011700
1044 0.011100
1045 0.005300
1046 0.022000
1047 0.011400
1048 0.005200
1049 0.016100
1050 0.005300
1051 0.011000
1052 0.048400
1053 0.008700
1054 0.016300
1055 0.004600
1056 0.041400
1057 0.008200
1058 0.004100
1059 0.009400
1060 0.009300
1061 0.021600
1062 0.009900
1063 0.015000
1064 0.009500
1065 0.020900
1066 0.020700
1067 0.014000
1068 0.014900
1069 0.009000
1070 0.014000
1071 0.014300
1072 0.002800
1073 0.008500
1074 0.006400
1075 0.007900
1076 0.002300
1077 0.002300
1078 0.001600
1079 0.001600
1080 0.010600
1081 0.001400
1082 0.007700
1083 0.008000
1084 0.024200
1085 0.005900
1086 0.012000
1087 0.001300
1088 0.001200
1089 0.014200
1090 0.001000
1091 0.012900
1092 0.000900
1093 0.000900
1094 0.000900
1095 0.000800
1096 0.007800
1097 0.000800
1098 0.007400
1099 0.048300
1100 0.000700
1101 0.007800
1102 0.005600
1103 0.012900
1104 0.005500
1105 0.007700
1106 0.005400
1107 0.007700
1108 0.000600
1109 0.007100
1110 0.012900
1111 0.000900
1112 0.017400
1113 0.005400
1114 0.000600
1115 0.005300
1116 0.000600
1117 0.011800
1118 0.007600
1119 0.023500
1120 0.000900
1121 0.000600
1122 0.016800
1123 0.012800
1124 0.007100
1125 0.046300
1126 0.000600
1127 0.000700
1128 0.023100
1129 0.000600
1130 0.000700
1131 0.007000
1132 0.007400
1133 0.015800
1134 0.007300
1135 0.006900
1136 0.006900
1137 0.011900
1138 0.033100
1139 0.000600
1140 0.015100
1141 0.006800
1142 0.005100
1143 0.014900
1144 0.000700
1145 0.021200
1146 0.000700
1147 0.000700
1148 0.006800
1149 0.013700
1150 0.000700
1151 0.000700
1152 0.000600
1153 0.005000
1154 0.006700
1155 0.012700
1156 0.006500
1157 0.000900
1158 0.006900
1159 0.001000
1160 0.001000
1161 0.023600
1162 0.001000
1163 0.001000
1164 0.004900
1165 0.001000
1166 0.000900
1167 0.000900
1168 0.006400
1169 0.000800
1170 0.006400
1171 0.006300
1172 0.000800
1173 0.000800
1174 0.000800
1175 0.024600
1176 0.000700
1177 0.004700
1178 0.000700
1179 0.031500
1180 0.017500
1181 0.004900
1182 0.006800
1183 0.007100
1184 0.000700
1185 0.004700
1186 0.000700
1187 0.010300
1188 0.006700
1189 0.012700
1190 0.004600
1191 0.000600
1192 0.000600
1193 0.013400
1194 0.006100
1195 0.010600
1196 0.013300
1197 0.000600
1198 0.009900
1199 0.000600
1200 0.010600
1201 0.000600
1202 0.006200
1203 0.000600
1204 0.006600
1205 0.025300
1206 0.000600
1207 0.000600
1208 0.006100
1209 0.005900
1210 0.018000
1211 0.006100
1212 0.006600
1213 0.000600
1214 0.016600
1215 0.004400
1216 0.012700
1217 0.005800
1218 0.000600
1219 0.000600
1220 0.012800
1221 0.004400
1222 0.000600
1223 0.012600
1224 0.000600
1225 0.000600
1226 0.000600
1227 0.000700
1228 0.012500
1229 0.005900
1230 0.000700
1231 0.006300
1232 0.005700
1233 0.016200
1234 0.021900
1235 0.004300
1236 0.000700
1237 0.000700
1238 0.000600
1239 0.000600
1240 0.000600
1241 0.000600
1242 0.012800
1243 0.000600
1244 0.005600
1245 0.000600
1246 0.000600
1247 0.012400
1248 0.000600
1249 0.012300
1250 0.006400
1251 0.000600
1252 0.000600
1253 0.012300
1254 0.022400
1255 0.015800
1256 0.017400
1257 0.006300
1258 0.011500
1259 0.000600
1260 0.000600
1261 0.012300
1262 0.000600
1263 0.004200
1264 0.000600
1265 0.012300
1266 0.006300
1267 0.000600
1268 0.000600
1269 0.012200
1270 0.004100
1271 0.006200
1272 0.005700
1273 0.000600
1274 0.011900
1275 0.005700
1276 0.005700
1277 0.011900
1278 0.006200
1279 0.000600
1280 0.010500
1281 0.000600
1282 0.011800
1283 0.011800
1284 0.000600
1285 0.005600
1286 0.000700
1287 0.000700
1288 0.009600
1289 0.000700
1290 0.011700
1291 0.008700
1292 0.000700
1293 0.006100
1294 0.005300
1295 0.005300
1296 0.000600
1297 0.012000
1298 0.010300
1299 0.011700
1300 0.005500
1301 0.048300
1302 0.005500
1303 0.000600
1304 0.005500
1305 0.000600
1306 0.005500
1307 0.005500
1308 0.010900
1309 0.006000
1310 0.010500
1311 0.005200
1312 0.005900
1313 0.012900
1314 0.005800
1315 0.005000
1316 0.001100
1317 0.001100
1318 0.001100
1319 0.001100
1320 0.012400
1321 0.001200
1322 0.001200
1323 0.005700
1324 0.005700
1325 0.000800
1326 0.000700
1327 0.004900
1328 0.000800
1329 0.000800
1330 0.016900
1331 0.000600
1332 0.000600
1333 0.000500
1334 0.003800
1335 0.009500
1336 0.000500
1337 0.000500
1338 0.003800
1339 0.016400
1340 0.016400
1341 0.005000
1342 0.011700
1343 0.011600
1344 0.005300
1345 0.012100
1346 0.000600
1347 0.000600
1348 0.000600
1349 0.000500
1350 0.005200
1351 0.010000
1352 0.011400
1353 0.000600
1354 0.003800
1355 0.013800
1356 0.000600
1357 0.000600
1358 0.000500
1359 0.011900
1360 0.005300
1361 0.055500
1362 0.014500
1363 0.000600
1364 0.015000
1365 0.011200
1366 0.005700
1367 0.004800
1368 0.000600
1369 0.004800
1370 0.000700
1371 0.000700
1372 0.003700
1373 0.000700
1374 0.000600
1375 0.000600
1376 0.000600
1377 0.005700
1378 0.009900
1379 0.011200
1380 0.041400
1381 0.000600
1382 0.003700
1383 0.022200
1384 0.000600
1385 0.000600
1386 0.000600
1387 0.000600
1388 0.014100
1389 0.000600
1390 0.000600
1391 0.000600
1392 0.016800
1393 0.011600
1394 0.003900
1395 0.005200
1396 0.005900
1397 0.003700
1398 0.051200
1399 0.000600
1400 0.000600
1401 0.005500
1402 0.037200
1403 0.005900
1404 0.011000
1405 0.005100
1406 0.020900
1407 0.014300
1408 0.000400
1409 0.000400
1410 0.014200
1411 0.010900
1412 0.014800
1413 0.005100
1414 0.015800
1415 0.008500
1416 0.014600
1417 0.011400
1418 0.000700
1419 0.015000
1420 0.050200
1421 0.000700
1422 0.008800
1423 0.000700
1424 0.005600
1425 0.000800
1426 0.004500
1427 0.000900
1428 0.003500
1429 0.009200
1430 0.000800
1431 0.011300
1432 0.003500
1433 0.011300
1434 0.011300
1435 0.000900
1436 0.000800
1437 0.000800
1438 0.000800
1439 0.005500
1440 0.000800
1441 0.005000
1442 0.018000
1443 0.000700
1444 0.005000
1445 0.018600
1446 0.000800
1447 0.000800
1448 0.005000
1449 0.005700
1450 0.014200
1451 0.010600
1452 0.000500
1453 0.000400
1454 0.015200
1455 0.005200
1456 0.005700
1457 0.003600
1458 0.003600
1459 0.000400
1460 0.000800
1461 0.000500
1462 0.000700
1463 0.000700
1464 0.000600
1465 0.010900
1466 0.010800
1467 0.005000
1468 0.005600
1469 0.003500
1470 0.000400
1471 0.010400
1472 0.000500
1473 0.005600
1474 0.004500
1475 0.000500
1476 0.018800
1477 0.004400
1478 0.008300
1479 0.005400
1480 0.000700
1481 0.005500
1482 0.007600
1483 0.013500
1484 0.000700
1485 0.004800
1486 0.008600
1487 0.000600
1488 0.003300
1489 0.004800
1490 0.000600
1491 0.000600
1492 0.000600
1493 0.015000
1494 0.017200
1495 0.010900
1496 0.010700
1497 0.004300
1498 0.013400
1499 0.000600
1500 0.004300
1501 0.004800
1502 0.013100
1503 0.010600
1504 0.015400
1505 0.000600
1506 0.004700
1507 0.004700
1508 0.000600
1509 0.000600
1510 0.000600
1511 0.010400
1512 0.000700
1513 0.000700
1514 0.000700
1515 0.010400
1516 0.014400
1517 0.003300
1518 0.000700
1519 0.000700
1520 0.000700
1521 0.000800
1522 0.000700
1523 0.005300
1524 0.000700
1525 0.000700
1526 0.000700
1527 0.004800
1528 0.000500
1529 0.004900
1530 0.000500
1531 0.000400
1532 0.005000
1533 0.000400
1534 0.000300
1535 0.003500
1536 0.003500
1537 0.003500
1538 0.014800
1539 0.005700
1540 0.000300
1541 0.000300
1542 0.000300
1543 0.010400
1544 0.000400
1545 0.013200
1546 0.000400
1547 0.000400
1548 0.005100
1549 0.032200
1550 0.015700
1551 0.000400
1552 0.010000
1553 0.014200
1554 0.044500
1555 0.000600
1556 0.004200
1557 0.004500
1558 0.007400
1559 0.000700
1560 0.009900
1561 0.000700
1562 0.000700
1563 0.014600
1564 0.005300
1565 0.009800
1566 0.003200
1567 0.000700
1568 0.005300
1569 0.000700
1570 0.023700
1571 0.004200
1572 0.000700
1573 0.000700
1574 0.010000
1575 0.005400
1576 0.000500
1577 0.012400
1578 0.004300
1579 0.000500
1580 0.035600
1581 0.000500
1582 0.000500
1583 0.004800
1584 0.000500
1585 0.014800
1586 0.000500
1587 0.000500
1588 0.000500
1589 0.000500
1590 0.000500
1591 0.004800
1592 0.000400
1593 0.000500
1594 0.010000
1595 0.009600
1596 0.009500
1597 0.003400
1598 0.000400
1599 0.000400
1600 0.000400
1601 0.000400
1602 0.000400
1603 0.003300
1604 0.005500
1605 0.009000
1606 0.000400
1607 0.005500
1608 0.004900
1609 0.010000
1610 0.000400
1611 0.000400
1612 0.009400
1613 0.010000
1614 0.004900
1615 0.000400
1616 0.016900
1617 0.005300
1618 0.000500
1619 0.000500
1620 0.009200
1621 0.037300
1622 0.004000
1623 0.005200
1624 0.000700
1625 0.003200
1626 0.000700
1627 0.000700
1628 0.004000
1629 0.005200
1630 0.000600
1631 0.004000
1632 0.008500
1633 0.000600
1634 0.000600
1635 0.004500
1636 0.009600
1637 0.000600
1638 0.005700
1639 0.021400
1640 0.000600
1641 0.004000
1642 0.000600
1643 0.003900
1644 0.005000
1645 0.000500
1646 0.044500
1647 0.000800
1648 0.007200
1649 0.000800
1650 0.004400
1651 0.000800
1652 0.003100
1653 0.000800
1654 0.009600
1655 0.009900
1656 0.003800
1657 0.000600
1658 0.006400
1659 0.000600
1660 0.009200
1661 0.005100
1662 0.003100
1663 0.003900
1664 0.000600
1665 0.003000
1666 0.000500
1667 0.014600
1668 0.008100
1669 0.004400
1670 0.003000
1671 0.000700
1672 0.000700
1673 0.000400
1674 0.009300
1675 0.003000
1676 0.009600
1677 0.009600
1678 0.000400
1679 0.007900
1680 0.000500
1681 0.013600
1682 0.003000
1683 0.007700
1684 0.004400
1685 0.009900
1686 0.006700
1687 0.003700
1688 0.000700
1689 0.004400
1690 0.000700
1691 0.000700
1692 0.005000
1693 0.003000
1694 0.000700
1695 0.004400
1696 0.003700
1697 0.013500
1698 0.004900
1699 0.009100
1700 0.004400
1701 0.005000
1702 0.009700
1703 0.009900
1704 0.008000
1705 0.005600
1706 0.009900
1707 0.001600
1708 0.085800
1709 0.001600
1710 0.001200
1711 0.001200
1712 0.014700
1713 0.009800
1714 0.001000
1715 0.008600
1716 0.009800
1717 0.020800
1718 0.000800
1719 0.007900
1720 0.043000
1721 0.004300
1722 0.003700
1723 0.000800
1724 0.000800
1725 0.007800
1726 0.017700
1727 0.000900
1728 0.006400
1729 0.000900
1730 0.005000
1731 0.003000
1732 0.000600
1733 0.004400
1734 0.004400
1735 0.013200
1736 0.009200
1737 0.000600
1738 0.013100
1739 0.011300
1740 0.009400
1741 0.000600
1742 0.000600
1743 0.000600
1744 0.000600
1745 0.003000
1746 0.041600
1747 0.011400
1748 0.013500
1749 0.004400
1750 0.009000
1751 0.000700
1752 0.009000
1753 0.003800
1754 0.003800
1755 0.003800
``` |
Wanclouds/Mistral-7b-doc-ONNX | Wanclouds | "2024-02-23T10:28:17Z" | 3 | 0 | transformers | [
"transformers",
"onnx",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-17T20:53:53Z" | # -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------
import os
from pathlib import Path
import torch
import torch.distributed as dist
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoConfig, AutoTokenizer, GenerationConfig
device_id = 0
device = torch.device(f"cuda:{device_id}") # Change to torch.device("cpu") if running on CPU
ep = "CUDAExecutionProvider" # change to CPUExecutionProvider if running on CPU
ep_options = {"device_id": device_id}
model_id = "mistralai/Mistral-7B-Instruct-v0.2"
model_path = "./Olive/examples/llama2/models/qlora/qlora-conversion-transformers_optimization-bnb_quantization/gpu-cuda_model"
model_path = Path(model_path)
if not (model_path / "config.json").exists():
config = AutoConfig.from_pretrained(model_id)
config.save_pretrained(model_path)
else:
config = AutoConfig.from_pretrained(model_path)
if not (model_path / "generation_config.json").exists():
gen_config = GenerationConfig.from_pretrained(model_id)
gen_config.save_pretrained(model_path)
else:
gen_config = GenerationConfig.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(
model_path,
config=config,
generation_config=gen_config,
use_io_binding=True,
# provider="CUDAExecutionProvider",
provider=ep,
provider_options={"device_id": device_id}
# provider_options={"device_id": str(rank)},
)
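
# Minimal usage sketch (illustrative; the prompt and generation settings are assumptions)
inputs = tokenizer("Tell me about ONNX Runtime.", return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])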
|
allispaul/distilhubert-finetuned-gtzan | allispaul | "2024-03-13T16:29:54Z" | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-03-13T03:43:30Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7097
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
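In the meantime, a minimal inference sketch for this audio classifier (the audio path is a placeholder; the genre labels come from the GTZAN fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="allispaul/distilhubert-finetuned-gtzan")
print(classifier("example_clip.wav"))
```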
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9677 | 1.0 | 112 | 1.8659 | 0.42 |
| 1.1919 | 2.0 | 225 | 1.3071 | 0.61 |
| 0.9976 | 3.0 | 337 | 0.9191 | 0.74 |
| 0.5864 | 4.0 | 450 | 0.8043 | 0.78 |
| 0.534 | 5.0 | 562 | 0.7504 | 0.74 |
| 0.2751 | 6.0 | 675 | 0.7042 | 0.78 |
| 0.2142 | 7.0 | 787 | 0.7410 | 0.75 |
| 0.1927 | 8.0 | 900 | 0.7033 | 0.77 |
| 0.1604 | 9.0 | 1012 | 0.7741 | 0.77 |
| 0.0934 | 9.96 | 1120 | 0.7097 | 0.8 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.13.2
|
SeaLLMs/SeaLLMs-Audio-7B | SeaLLMs | "2025-03-17T09:54:12Z" | 65 | 3 | null | [
"safetensors",
"qwen2_audio",
"seallms-audio",
"speech",
"audio",
"SEA",
"audio-text-to-text",
"en",
"zh",
"id",
"vi",
"th",
"arxiv:2407.19672",
"license:other",
"region:us"
] | audio-text-to-text | "2025-03-13T14:14:12Z" | ---
license: other
license_name: seallms
license_link: LICENSE
language:
- en
- zh
- id
- vi
- th
pipeline_tag: audio-text-to-text
tags:
- seallms-audio
- speech
- audio
- SEA
---
<p align="center">
<img src="https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/images/seallm-audio-logo.png" alt="SeaLLMs-Audio" width="20%">
</p>
# SeaLLMs-Audio: Large Audio-Language Models for Southeast Asia
<p align="center">
<a href="https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLMs-Audio-Demo" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs-Audio" target="_blank" rel="noopener">Github</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B" target="_blank" rel="noopener">🤗 Model</a>
<!-- <a href="https://arxiv.org/pdf/2407.19672" target="_blank" rel="noopener">[NEW] Technical Report</a> -->
</p>
We introduce **SeaLLMs-Audio**, the multimodal (audio) extension of the [SeaLLMs](https://damo-nlp-sg.github.io/DAMO-SeaLLMs/) (Large Language Models for Southeast Asian languages) family. It is the first large audio-language model (LALM) designed to support multiple Southeast Asian languages, including **Indonesian (id), Thai (th), and Vietnamese (vi), alongside English (en) and Chinese (zh)**.
Trained on a large-scale audio dataset, SeaLLMs-Audio demonstrates strong performance across a range of audio tasks, from audio analysis to voice-based interaction. As a significant step toward advancing audio LLMs in Southeast Asia, we hope SeaLLMs-Audio will benefit both the research community and industry in the region.
### Key Features of SeaLLMs-Audio:
- **Multilingual**: The model mainly supports 5 languages, including 🇮🇩 Indonesian, 🇹🇭 Thai, 🇻🇳 Vietnamese, 🇬🇧 English, and 🇨🇳 Chinese.
- **Multimodal**: The model supports flexible input formats, such as **audio only, text only, and audio with text**.
- **Multi-task**: The model supports a variety of tasks, including audio analysis tasks such as audio captioning, automatic speech recognition, speech-to-text translation, speech emotion recognition, speech question answering, and speech summarization. Additionally, it handles voice chat tasks, including answering factual, mathematical, and other general questions.
We open-weight [SeaLLMs-Audio](https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B) on Hugging Face, and we have built a [demo](https://huggingface.co/spaces/SeaLLMs/SeaLLMs-Audio-Demo) for users to interact with.
# Training information:
SeaLLMs-Audio builds upon [Qwen2-Audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B) and [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). We replaced the LLM module in Qwen2-Audio-7B with Qwen2.5-7B-Instruct and then performed full-parameter fine-tuning on a large-scale audio dataset. This dataset contains 1.58M conversations covering multiple tasks, of which 93% are single-turn. The tasks fall roughly into the following categories: automatic speech recognition (ASR), audio captioning (AC), speech-to-text translation (S2TT), question answering (QA), speech summarization (SS), speech question answering (SQA), chat, math, fact, and mixed tasks (mixed).
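An illustrative sketch of the module swap described above (not the authors' training code; it assumes `Qwen2AudioForConditionalGeneration` exposes its text decoder as `language_model` and that the two configs are dimensionally compatible):
```python
from transformers import AutoModelForCausalLM, Qwen2AudioForConditionalGeneration

audio_model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B")
text_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Swap in the stronger text decoder before full-parameter fine-tuning on audio data
audio_model.language_model = text_model
```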
The distribution of data across languages and tasks is:
<p align="center">
<strong>Distribution of SeaLLMs-Audio training data across languages and tasks</strong>
</p>
<p align="center">
<img src="https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/data_distribution/dist_lang.png" alt="Distribution of SeaLLMs-Audio training data across languages" width="70%">
<img src="https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/data_distribution/dist_task.png" alt="Distribution of SeaLLMs-Audio training data across tasks" width="70%">
</p>
The training dataset was curated from multiple data sources, including public datasets and in-house data. Public datasets include: [gigaspeech](https://huggingface.co/datasets/speechcolab/gigaspeech), [gigaspeech2](https://huggingface.co/datasets/speechcolab/gigaspeech2), [common voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0), [AudioCaps](https://huggingface.co/datasets/OpenSound/AudioCaps), [VoiceAssistant-400K](https://huggingface.co/datasets/gpt-omni/VoiceAssistant-400K), [YODAS2](https://huggingface.co/datasets/espnet/yodas2), and [Multitask-National-Speech-Corpus](https://huggingface.co/datasets/MERaLiON/Multitask-National-Speech-Corpus-v1). We would like to thank the authors of these datasets for their contributions to the community!
We trained the model on the dataset for 1 epoch, which took ~6 days to complete on 32 A800 GPUs.
# Performance
Due to the absence of standard audio benchmarks for evaluating audio LLMs in Southeast Asia, we have manually created a benchmark called **SeaBench-Audio**. It comprises nine tasks:
- **Tasks with both audio and text inputs:** Audio Captioning (AC), Automatic Speech Recognition (ASR), Speech-to-Text Translation (S2TT), Speech Emotion Recognition (SER), Speech Question Answering (SQA), and Speech Summarization (SS).
- **Tasks with only audio inputs:** Factuality, Math, and General.
We manually annotated 15 questions per task per language. For evaluation, qualified native speakers rated each response on a scale of 1 to 5, with 5 representing the highest quality.
Due to the lack of LALMs covering all three Southeast Asian languages, we compare the performance of SeaLLMs-Audio with relevant LALMs of similar size, including: [Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) (Qwen2-Audio), [MERaLiON-AudioLLM-Whisper-SEA-LION](https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION) (MERaLiON), [llama3.1-typhoon2-audio-8b-instruct](https://huggingface.co/scb10x/llama3.1-typhoon2-audio-8b-instruct) (typhoon2-audio), and [DiVA-llama-3-v0-8b](https://huggingface.co/WillHeld/DiVA-llama-3-v0-8b) (DiVA).
All the LALMs can accept audio with text as input. The results are shown in the figure below.
<center>
**Average scores of SeaLLMs-Audio vs. Other LALMs on SeaBench-Audio**

</center>
These results show that SeaLLMs-Audio achieves state-of-the-art performance in all five languages, demonstrating its effectiveness for audio-related tasks in Southeast Asia.
# Quickstart
Our model is available on Hugging Face, and you can easily use it with the `transformers` library or `vllm` library. Below are some examples to get you started.
## Get started with `transformers`
```python
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
import librosa
import os
model = Qwen2AudioForConditionalGeneration.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B", device_map="auto")
processor = AutoProcessor.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B")
def response_to_audio(conversation, model=None, processor=None):
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
if isinstance(message["content"], list):
for ele in message["content"]:
if ele["type"] == "audio":
if ele['audio_url'] != None:
audios.append(librosa.load(
ele['audio_url'],
sr=processor.feature_extractor.sampling_rate)[0]
)
if audios != []:
        inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True, sampling_rate=16000)
else:
inputs = processor(text=text, return_tensors="pt", padding=True)
    inputs = {k: v.to("cuda") for k, v in inputs.items() if v is not None}  # moves input_ids and all other tensors to the GPU
generate_ids = model.generate(**inputs, max_new_tokens=2048, temperature = 0, do_sample=False)
generate_ids = generate_ids[:, inputs["input_ids"].size(1):]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
return response
# Voice Chat
os.system(f"wget -O fact_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/fact_en.wav")
os.system(f"wget -O general_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/general_en.wav")
conversation = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "fact_en.wav"},
]},
{"role": "assistant", "content": "The most abundant gas in Earth's atmosphere is nitrogen. It makes up about 78 percent of the atmosphere by volume."},
{"role": "user", "content": [
{"type": "audio", "audio_url": "general_en.wav"},
]},
]
response = response_to_audio(conversation, model=model, processor=processor)
print(response)
# Audio Analysis
os.system(f"wget -O ASR_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/ASR_en.wav")
conversation = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "ASR_en.wav"},
{"type": "text", "text": "Please write down what is spoken in the audio file."},
]},
]
response = response_to_audio(conversation, model=model, processor=processor)
print(response)
```
## Inference with `vllm`
```python
from vllm import LLM, SamplingParams
import librosa, os
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B")
llm = LLM(
model="SeaLLMs/SeaLLMs-Audio-7B", trust_remote_code=True, gpu_memory_utilization=0.5,
enforce_eager=True, device = "cuda",
limit_mm_per_prompt={"audio": 5},
)
def response_to_audio(conversation, model=None, processor=None, temperature=0.1, repetition_penalty=1.1, top_p=0.9, max_new_tokens=4096):
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
if isinstance(message["content"], list):
for ele in message["content"]:
if ele["type"] == "audio":
if ele['audio_url'] != None:
audios.append(librosa.load(
ele['audio_url'],
sr=processor.feature_extractor.sampling_rate)[0]
)
sampling_params = SamplingParams(
temperature=temperature, max_tokens=max_new_tokens, repetition_penalty=repetition_penalty, top_p=top_p, top_k=20,
stop_token_ids=[],
)
input = {
'prompt': text,
'multi_modal_data': {
'audio': [(audio, 16000) for audio in audios]
}
}
output = model.generate([input], sampling_params=sampling_params)[0]
response = output.outputs[0].text
return response
# Voice Chat
os.system(f"wget -O fact_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/fact_en.wav")
os.system(f"wget -O general_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/general_en.wav")
conversation = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "fact_en.wav"},
]},
{"role": "assistant", "content": "The most abundant gas in Earth's atmosphere is nitrogen. It makes up about 78 percent of the atmosphere by volume."},
{"role": "user", "content": [
{"type": "audio", "audio_url": "general_en.wav"},
]},
]
response = response_to_audio(conversation, model=llm, processor=processor)
print(response)
# Audio Analysis
os.system(f"wget -O ASR_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/ASR_en.wav")
conversation = [
{"role": "user", "content": [
{"type": "audio", "audio_url": "ASR_en.wav"},
{"type": "text", "text": "Please write down what is spoken in the audio file."},
]},
]
response = response_to_audio(conversation, model=llm, processor=processor)
print(response)
```
## Citation
If you find our project useful, we hope you would kindly star our [repo](https://github.com/DAMO-NLP-SG/SeaLLMs-Audio) and cite our work as follows.
Corresponding Author: Wenxuan Zhang ([[email protected]](mailto:[email protected]))
```
@misc{SeaLLMs-Audio,
author = {Chaoqun Liu and Mahani Aljunied and Guizhen Chen and Hou Pong Chan and Weiwen Xu and Yu Rong and Wenxuan Zhang},
title = {SeaLLMs-Audio: Large Audio-Language Models for Southeast Asia},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/DAMO-NLP-SG/SeaLLMs-Audio}},
}
```
|
patched-codes/Meta-Llama-3.1-8B-Instruct-bnb-4bit-Patched | patched-codes | "2024-08-19T06:54:53Z" | 46 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"dataset:patched-codes/static-analysis-eval",
"dataset:patched-codes/synth-vuln-fixes",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-10T07:50:20Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- patched-codes/static-analysis-eval
- patched-codes/synth-vuln-fixes
---
# Uploaded model
- **Developed by:** patched-codes
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
S1X3L4/a2c-PandaReachDense-v2 | S1X3L4 | "2023-07-24T18:29:45Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-24T18:26:40Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.52 +/- 0.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename is an assumption following the usual SB3 naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="S1X3L4/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
wlhb/Llama-3.1-8B-bnb-4bit-kefu | wlhb | "2024-08-13T10:09:43Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"customer service",
"客服",
"unsloth",
"trl",
"sft",
"zh",
"dataset:wlhb/kefu",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us",
"conversational"
] | null | "2024-08-13T02:06:41Z" | ---
datasets:
- wlhb/kefu
language:
- zh
library_name: transformers
license: apache-2.0
tags:
- customer service
- 客服
- unsloth
- trl
- sft
---
|
NewEden/kto-ohashi-16bit | NewEden | "2025-01-28T04:58:36Z" | 16 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Delta-Vector/Ohashi-NeMo-12B",
"base_model:finetune:Delta-Vector/Ohashi-NeMo-12B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-28T04:52:16Z" | ---
base_model: Delta-Vector/Ohashi-NeMo-12B
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NewEden
- **License:** apache-2.0
- **Finetuned from model :** Delta-Vector/Ohashi-NeMo-12B
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alalalalex/Light-R1-32B-Q4_K_M-GGUF | alalalalex | "2025-03-08T21:36:24Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:qihoo360/Light-R1-32B",
"base_model:quantized:qihoo360/Light-R1-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-08T21:34:53Z" | ---
base_model: qihoo360/Light-R1-32B
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# alalalalex/Light-R1-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`qihoo360/Light-R1-32B`](https://huggingface.co/qihoo360/Light-R1-32B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/qihoo360/Light-R1-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo alalalalex/Light-R1-32B-Q4_K_M-GGUF --hf-file light-r1-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo alalalalex/Light-R1-32B-Q4_K_M-GGUF --hf-file light-r1-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo alalalalex/Light-R1-32B-Q4_K_M-GGUF --hf-file light-r1-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo alalalalex/Light-R1-32B-Q4_K_M-GGUF --hf-file light-r1-32b-q4_k_m.gguf -c 2048
```
|
yuiseki/tinyllama-ta-wikipedia-1.5T-v0.1 | yuiseki | "2024-03-29T01:40:50Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-29T01:39:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
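In the absence of card-specific instructions, a generic sketch based on this repository's tags (`llama`, text-generation) and its name (which suggests Tamil Wikipedia training data) would be:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yuiseki/tinyllama-ta-wikipedia-1.5T-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A Tamil prompt, per the "ta-wikipedia" model name; the prompt itself is illustrative.
inputs = tokenizer("தமிழ்நாடு", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```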
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/BERT_AugV8_k3_task1_organization_sp020_lw010_fold0 | genki10 | "2025-04-03T18:58:42Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-25T05:53:09Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw010_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw010_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9084
- Qwk: 0.3194
- Mse: 0.9084
- Rmse: 0.9531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.1361 | 0.0 | 8.1361 | 2.8524 |
| No log | 2.0 | 6 | 6.6550 | 0.0 | 6.6550 | 2.5797 |
| No log | 3.0 | 9 | 5.3898 | 0.0112 | 5.3898 | 2.3216 |
| No log | 4.0 | 12 | 4.1454 | 0.0039 | 4.1454 | 2.0360 |
| No log | 5.0 | 15 | 2.9877 | 0.0 | 2.9877 | 1.7285 |
| No log | 6.0 | 18 | 2.0035 | 0.0511 | 2.0035 | 1.4154 |
| No log | 7.0 | 21 | 1.3988 | 0.0316 | 1.3988 | 1.1827 |
| No log | 8.0 | 24 | 1.0891 | 0.0316 | 1.0891 | 1.0436 |
| No log | 9.0 | 27 | 1.3611 | 0.0575 | 1.3611 | 1.1666 |
| No log | 10.0 | 30 | 1.0147 | 0.1102 | 1.0147 | 1.0073 |
| No log | 11.0 | 33 | 0.7515 | 0.3778 | 0.7515 | 0.8669 |
| No log | 12.0 | 36 | 0.9154 | 0.3181 | 0.9154 | 0.9567 |
| No log | 13.0 | 39 | 0.8859 | 0.3167 | 0.8859 | 0.9412 |
| No log | 14.0 | 42 | 0.6448 | 0.5098 | 0.6448 | 0.8030 |
| No log | 15.0 | 45 | 1.5840 | 0.1877 | 1.5840 | 1.2586 |
| No log | 16.0 | 48 | 0.7503 | 0.4168 | 0.7503 | 0.8662 |
| No log | 17.0 | 51 | 0.7847 | 0.4082 | 0.7847 | 0.8858 |
| No log | 18.0 | 54 | 0.8571 | 0.3840 | 0.8571 | 0.9258 |
| No log | 19.0 | 57 | 0.8537 | 0.3546 | 0.8537 | 0.9240 |
| No log | 20.0 | 60 | 0.7640 | 0.3432 | 0.7640 | 0.8741 |
| No log | 21.0 | 63 | 1.1188 | 0.2791 | 1.1188 | 1.0577 |
| No log | 22.0 | 66 | 0.7886 | 0.3451 | 0.7886 | 0.8880 |
| No log | 23.0 | 69 | 1.1796 | 0.2741 | 1.1796 | 1.0861 |
| No log | 24.0 | 72 | 0.8473 | 0.3370 | 0.8473 | 0.9205 |
| No log | 25.0 | 75 | 1.4412 | 0.1782 | 1.4412 | 1.2005 |
| No log | 26.0 | 78 | 0.8010 | 0.3482 | 0.8010 | 0.8950 |
| No log | 27.0 | 81 | 0.9066 | 0.3080 | 0.9066 | 0.9522 |
| No log | 28.0 | 84 | 1.7304 | 0.1192 | 1.7304 | 1.3154 |
| No log | 29.0 | 87 | 0.9084 | 0.3194 | 0.9084 | 0.9531 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mmarinatto/ppo-Huggy | mmarinatto | "2025-01-13T04:02:38Z" | 49 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2025-01-13T04:02:31Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
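For example, with the configuration used in the Deep Reinforcement Learning course (illustrative paths; adjust them to your own setup):
```bash
mlagents-learn ./config/ppo/Huggy.yaml --run-id="Huggy" --resume
```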
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mmarinatto/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Zekunli/flan-t5-large-da-multiwoz2.0_400-ep20-nonstop | Zekunli | "2023-04-19T01:38:45Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-18T22:08:23Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-large-da-multiwoz2.0_400-ep20-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.0_400-ep20-nonstop
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3661
- Accuracy: 41.2421
- Num: 7358
- Gen Len: 15.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
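These settings map onto 🤗 `Seq2SeqTrainingArguments` roughly as follows (a sketch; `output_dir` is illustrative and all other arguments are left at their defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-da-multiwoz2.0_400-ep20-nonstop",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=1799,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```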
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.1824 | 1.16 | 200 | 0.5187 | 28.4524 | 7358 | 14.7642 |
| 0.5471 | 2.33 | 400 | 0.4278 | 32.5629 | 7358 | 15.4386 |
| 0.4647 | 3.49 | 600 | 0.4029 | 35.2443 | 7358 | 16.135 |
| 0.4313 | 4.65 | 800 | 0.3820 | 36.6479 | 7358 | 16.1552 |
| 0.4074 | 5.81 | 1000 | 0.3775 | 37.6957 | 7358 | 15.1439 |
| 0.3859 | 6.98 | 1200 | 0.3690 | 38.3142 | 7358 | 15.2045 |
| 0.369 | 8.14 | 1400 | 0.3720 | 39.8799 | 7358 | 15.7923 |
| 0.3547 | 9.3 | 1600 | 0.3665 | 39.5217 | 7358 | 15.3394 |
| 0.3457 | 10.47 | 1800 | 0.3632 | 39.8289 | 7358 | 15.4761 |
| 0.3423 | 11.63 | 2000 | 0.3678 | 39.9509 | 7358 | 15.6708 |
| 0.3295 | 12.79 | 2200 | 0.3657 | 41.1373 | 7358 | 15.1586 |
| 0.3212 | 13.95 | 2400 | 0.3651 | 40.8611 | 7358 | 15.7312 |
| 0.3128 | 15.12 | 2600 | 0.3664 | 40.8806 | 7358 | 15.4553 |
| 0.3131 | 16.28 | 2800 | 0.3677 | 40.8906 | 7358 | 15.4629 |
| 0.3093 | 17.44 | 3000 | 0.3661 | 40.9971 | 7358 | 15.4329 |
| 0.3021 | 18.6 | 3200 | 0.3652 | 41.2953 | 7358 | 15.5118 |
| 0.3004 | 19.77 | 3400 | 0.3661 | 41.2492 | 7358 | 15.5246 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
albertus-sussex/veriscrape-simcse-university-reference_3_to_verify_7-fold-6 | albertus-sussex | "2025-03-28T13:56:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-28T13:55:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
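In the absence of card-specific instructions, a generic feature-extraction sketch based on this repository's tags (`roberta`, feature-extraction; the name suggests a SimCSE-style encoder) would be:
```python
from transformers import AutoModel, AutoTokenizer

model_id = "albertus-sussex/veriscrape-simcse-university-reference_3_to_verify_7-fold-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Example text", return_tensors="pt")
# [CLS]-token embedding, the usual SimCSE pooling convention (an assumption here).
embedding = model(**inputs).last_hidden_state[:, 0]
```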
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ggbetz/Llama-3.1-Argunaut-1-8B-SFT-Q4-mlx | ggbetz | "2025-01-02T08:15:03Z" | 74 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"logic",
"argumentation",
"critical-thinking",
"argument-mapping",
"trl",
"sft",
"mlx",
"mlx-my-repo",
"conversational",
"dataset:DebateLabKIT/deepa2-conversations",
"dataset:DebateLabKIT/deep-argmap-conversations",
"dataset:allenai/tulu-3-sft-mixture",
"base_model:DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT",
"base_model:quantized:DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | "2025-01-02T08:14:48Z" | ---
license: llama3.1
datasets:
- DebateLabKIT/deepa2-conversations
- DebateLabKIT/deep-argmap-conversations
- allenai/tulu-3-sft-mixture
base_model: DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT
pipeline_tag: text-generation
library_name: transformers
tags:
- logic
- argumentation
- critical-thinking
- argument-mapping
- trl
- sft
- mlx
- mlx-my-repo
---
# ggbetz/Llama-3.1-Argunaut-1-8B-SFT-Q4-mlx
The Model [ggbetz/Llama-3.1-Argunaut-1-8B-SFT-Q4-mlx](https://huggingface.co/ggbetz/Llama-3.1-Argunaut-1-8B-SFT-Q4-mlx) was converted to MLX format from [DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT](https://huggingface.co/DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT) using mlx-lm version **0.20.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ggbetz/Llama-3.1-Argunaut-1-8B-SFT-Q4-mlx")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hjones6315/silicon_v3 | hjones6315 | "2025-02-10T21:26:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-10T21:23:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
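In the absence of card-specific instructions, a sketch assuming the standard Parler-TTS usage pattern from the `parler_tts` package (an assumption based on this repository's `parler_tts` tag; the voice description and text are illustrative):
```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model_id = "hjones6315/silicon_v3"
model = ParlerTTSForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Parler-TTS conditions generation on a voice description plus the text to speak.
description_ids = tokenizer("A calm, clear voice.", return_tensors="pt").input_ids
prompt_ids = tokenizer("Hello there!", return_tensors="pt").input_ids
audio = model.generate(input_ids=description_ids, prompt_input_ids=prompt_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```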
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RedRayz/abydos_noob_v-pred_1.0.1 | RedRayz | "2025-01-25T08:15:01Z" | 49 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-xl",
"text-to-image",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-09T11:48:06Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
pipeline_tag: text-to-image
base_model:
- Laxhar/noobai-XL-Vpred-1.0
tags:
- stable-diffusion
- stable-diffusion-xl
new_version: RedRayz/abydos_noob_v-pred_1.1.0
---
# Abydos_Noob_v-pred-1.0.1
Modified NoobAI-XL (v-prediction) with a Blue Archive style
[Civitai model page](https://civitai.com/models/923120)
## About 1.0.1
Better shadow rendering
## Prompt Guidelines
Almost the same as the base model.
## Recommended Prompt
None (works well without `masterpiece, best quality`).
## Recommended Negative Prompt
`worst quality, bad quality, lowres, photoshop \(medium\), abstract`
To improve background quality, add `simple background, transparent background` to the negative prompt.
## Recommended Settings
- Steps: 12-24
- Sampler: DPM++ 2M (dpmpp_2m) or Euler
- Scheduler: Simple or SGM Uniform
- Guidance Scale: 2-5
### Hires.fix
- Upscaler: 4x-UltraSharp or Latent (nearest-exact)
- Denoising strength: 0.5 (0.6-0.7 for latent)
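Since the repository ships `diffusers` weights (`StableDiffusionXLPipeline`), a minimal sketch along these lines should work. Note the assumption that the bundled scheduler config already sets prediction_type="v_prediction" for this v-prediction checkpoint; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RedRayz/abydos_noob_v-pred_1.0.1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="1girl, looking at viewer, outdoors",  # illustrative; see the guidelines above
    negative_prompt="worst quality, bad quality, lowres, photoshop \\(medium\\), abstract",
    num_inference_steps=20,  # recommended: 12-24
    guidance_scale=4.0,      # recommended: 2-5
).images[0]
image.save("output.png")
```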
## Training steps
1. Make two models from NoobAI (A and B): A with ZTSNR, B without ZTSNR
2. Merge A and B with MBW(0,0,0,0,0,0.3,0.3,0,0.5,0.5,0.5,0.5,0.5,0.5,0.3,0.3,0,0,0,0) and Adjust(0,0,0,0,-0.05,0,0,0) = tmp1
3. tmp1 + spo_sdxl_10ep_4k-data_lora_webui x 1 + sdxl-boldline x -0.25 = Result
## Training scripts:
[sd-scripts](https://github.com/kohya-ss/sd-scripts)
## Notice
This model is licensed under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
If you modify this model, you must share your changes under the original license.
You are prohibited from monetizing any closed-source fine-tuned / merged model, i.e., one that prevents the public from accessing the model's source code / weights and their usage. |
bmuscato/model_ens_epic6 | bmuscato | "2025-01-01T12:12:47Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-01T12:12:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
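In the absence of card-specific instructions, a generic sketch based on this repository's tags (`bert`, text-classification) would be:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bmuscato/model_ens_epic6")
print(classifier("Example input sentence."))
```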
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tomaarsen/reranker-MiniLM-L12-H384-msmarco-bce | tomaarsen | "2025-02-14T14:38:48Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"text-classification",
"generated_from_trainer",
"dataset_size:397226027",
"loss:BinaryCrossEntropyLoss",
"en",
"dataset:sentence-transformers/msmarco",
"arxiv:1908.10084",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"region:us"
] | text-classification | "2025-02-14T14:38:41Z" | ---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:397226027
- loss:BinaryCrossEntropyLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- sentence-transformers/msmarco
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
results: []
---
# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-MiniLM-L12-H384-msmarco-bce")
# Get scores for pairs of texts
pairs = [
['what is a jewel yam', 'Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.'],
['hours of daytona', '24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.'],
['how much do autozone workers get paid', 'The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.'],
['what are the special sensory receptors', 'Sensory Neurons. Sensory Neurons: + add to my flashcards cite this term. You have a few different types of neurons in your body including interneurons, motor neurons, and sensory neurons. Sensory neurons (also known as Afferent Neurons) are responsible for bringing information from sensory receptors (like the nerves in your hand) to the central nervous system (spinal cord and brain).'],
['how long to cook salmon on the grill', 'Place the bag with the marinade and salmon fillets in the refrigerator for 30 minutes. 1 Salmon, like all fish, is not as dense as red meats and poultry. 2 As a result, it does not need to be marinaded for long in order to absorb flavor.3 Remove the salmon from the refrigerator at least 10 minutes prior to cooking.lace the broiler pan 5 1/2 inches (14 cm) away from the top heating element and cook the salmon until done. 1 The salmon is done when you can effortlessly flake the fillets with a fork. 2 The center should be opaque.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'what is a jewel yam',
[
'Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.',
'24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.',
'The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.',
'Sensory Neurons. Sensory Neurons: + add to my flashcards cite this term. You have a few different types of neurons in your body including interneurons, motor neurons, and sensory neurons. Sensory neurons (also known as Afferent Neurons) are responsible for bringing information from sensory receptors (like the nerves in your hand) to the central nervous system (spinal cord and brain).',
'Place the bag with the marinade and salmon fillets in the refrigerator for 30 minutes. 1 Salmon, like all fish, is not as dense as red meats and poultry. 2 As a result, it does not need to be marinaded for long in order to absorb flavor.3 Remove the salmon from the refrigerator at least 10 minutes prior to cooking.lace the broiler pan 5 1/2 inches (14 cm) away from the top heating element and cook the salmon until done. 1 The salmon is done when you can effortlessly flake the fillets with a fork. 2 The center should be opaque.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CERerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CERerankingEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.6127 (+0.1231) | 0.3432 (+0.0728) | 0.6921 (+0.2715) |
| mrr@10 | 0.6019 (+0.1244) | 0.5456 (+0.0457) | 0.7062 (+0.2795) |
| **ndcg@10** | **0.6648 (+0.1244)** | **0.3769 (+0.0519)** | **0.7462 (+0.2455)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>CENanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CENanoBEIREvaluator)
| Metric | Value |
|:------------|:---------------------|
| map | 0.5493 (+0.1558) |
| mrr@10 | 0.6179 (+0.1499) |
| **ndcg@10** | **0.5960 (+0.1406)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ms-marco-shuffled
* Dataset: [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) at [88847c6](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled/tree/88847c65252168a8c2504664289ef21a9df0ca74)
* Size: 397,226,027 training samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 10 characters</li><li>mean: 34.03 characters</li><li>max: 148 characters</li></ul> | <ul><li>min: 72 characters</li><li>mean: 345.31 characters</li><li>max: 913 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>when was ron marhofer founded?</code> | <code>What are the birthdays of Ron Shirley Bobby Brantley and Amy Shirley from Lizard Lick Towing? Ron Shirley's birthday is April 13. His wife Amy Shirley celebrates her birthday on May 4, and Bobby Brantley's birthday is September 26.</code> | <code>0.0</code> |
| <code>what should the average medical assistant make</code> | <code>For example, the Bureau of Labor Statistics reports that as of May 2014, medical assistant jobs located in Offices of Physicians paid about $31,230 a year on average c. These roles (in Offices of Physicians) made up a large portion of medical assistant jobs, totaling 349,370 positions as of May 2014 c. General Medical and Surgical hospitals were another large employer, carrying 85,040 medical assistants c on their payrolls.</code> | <code>1.0</code> |
| <code>what type of rock form in warm ocean bottoms</code> | <code>Second, sedimentary rocks form on the bottom of the ocean when particles rain down from the surface. These particles can become compressed and cemented to form limestone. Fossilized sea creatures are often found in these rocks. Most of the mountains around Las Vegas are composed of sedimentary rocks. Red Rock Canyon (photo) provides a spectacular example of both types: the gray mountains are limestone, and the red-and-white hills are sandstone.</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fct": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
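For intuition: with `activation_fct` set to `Identity` and no `pos_weight`, this objective effectively reduces to PyTorch's `BCEWithLogitsLoss` applied to the single relevance logit the cross-encoder emits per (query, passage) pair. A minimal illustration (the tensor values are made up):
```python
import torch

logits = torch.tensor([2.1, -0.7])  # raw scores from the cross-encoder head
labels = torch.tensor([1.0, 0.0])   # gold relevance labels (the score column above)
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)  # sigmoid + binary cross-entropy
```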
### Evaluation Dataset
#### ms-marco-shuffled
* Dataset: [ms-marco-shuffled](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled) at [88847c6](https://huggingface.co/datasets/tomaarsen/ms-marco-shuffled/tree/88847c65252168a8c2504664289ef21a9df0ca74)
* Size: 397,226,027 evaluation samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 11 characters</li><li>mean: 33.94 characters</li><li>max: 164 characters</li></ul> | <ul><li>min: 58 characters</li><li>mean: 346.39 characters</li><li>max: 917 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:---------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>what is a jewel yam</code> | <code>Wild Yam can be very beneficial for nervousness, restlessness and other nervous conditions. As a stimulant for increased bile flow, it can help to relieve hepatic congestion, bilious colic and gallstones.</code> | <code>0.0</code> |
| <code>hours of daytona</code> | <code>24 Hours of Daytona. The 24 Hours of Daytona, currently known as the Rolex 24 At Daytona for sponsorship reasons, is a 24-hour sports car endurance race held annually at Daytona International Speedway in Daytona Beach, Florida. It is run on a 3.56-mile (5.73 km) combined road course, utilizing portions of the NASCAR tri-oval and an infield road course.</code> | <code>1.0</code> |
| <code>how much do autozone workers get paid</code> | <code>The typical AutoZone Sales Associate salary is $9. Sales Associate salaries at AutoZone can range from $7-$12. This estimate is based upon 59 AutoZone Sales Associate salary report(s) provided by employees or estimated based upon statistical methods. See all Sales Associate salaries to learn how this stacks up in the market.</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fct": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10 | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10 | NanoBEIR_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:---------------------:|
| -1 | -1 | - | - | 0.0324 (-0.5080) | 0.2439 (-0.0811) | 0.0361 (-0.4646) | 0.1041 (-0.3512) |
| 0.0000 | 1 | 0.6941 | - | - | - | - | - |
| 0.0322 | 1000 | 0.5117 | - | - | - | - | - |
| 0.0643 | 2000 | 0.2604 | - | - | - | - | - |
| 0.0965 | 3000 | 0.2258 | - | - | - | - | - |
| 0.1286 | 4000 | 0.2115 | - | - | - | - | - |
| 0.1608 | 5000 | 0.1995 | 0.1879 | 0.6145 (+0.0741) | 0.4002 (+0.0751) | 0.6970 (+0.1964) | 0.5706 (+0.1152) |
| 0.1930 | 6000 | 0.1924 | - | - | - | - | - |
| 0.2251 | 7000 | 0.1914 | - | - | - | - | - |
| 0.2573 | 8000 | 0.1859 | - | - | - | - | - |
| 0.2894 | 9000 | 0.1802 | - | - | - | - | - |
| 0.3216 | 10000 | 0.1791 | 0.1628 | 0.6311 (+0.0906) | 0.3795 (+0.0545) | 0.7347 (+0.2341) | 0.5818 (+0.1264) |
| 0.3538 | 11000 | 0.1732 | - | - | - | - | - |
| 0.3859 | 12000 | 0.1713 | - | - | - | - | - |
| 0.4181 | 13000 | 0.1756 | - | - | - | - | - |
| 0.4502 | 14000 | 0.1643 | - | - | - | - | - |
| 0.4824 | 15000 | 0.166 | 0.1531 | 0.6540 (+0.1136) | 0.3830 (+0.0579) | 0.7315 (+0.2309) | 0.5895 (+0.1341) |
| 0.5146 | 16000 | 0.161 | - | - | - | - | - |
| 0.5467 | 17000 | 0.1617 | - | - | - | - | - |
| 0.5789 | 18000 | 0.1612 | - | - | - | - | - |
| 0.6111 | 19000 | 0.1591 | - | - | - | - | - |
| **0.6432** | **20000** | **0.1599** | **0.1428** | **0.6648 (+0.1244)** | **0.3769 (+0.0519)** | **0.7462 (+0.2455)** | **0.5960 (+0.1406)** |
| 0.6754 | 21000 | 0.1599 | - | - | - | - | - |
| 0.7075 | 22000 | 0.1523 | - | - | - | - | - |
| 0.7397 | 23000 | 0.1525 | - | - | - | - | - |
| 0.7719 | 24000 | 0.1549 | - | - | - | - | - |
| 0.8040 | 25000 | 0.1515 | 0.1386 | 0.6682 (+0.1278) | 0.3686 (+0.0436) | 0.7481 (+0.2474) | 0.5950 (+0.1396) |
| 0.8362 | 26000 | 0.1556 | - | - | - | - | - |
| 0.8683 | 27000 | 0.1501 | - | - | - | - | - |
| 0.9005 | 28000 | 0.1522 | - | - | - | - | - |
| 0.9327 | 29000 | 0.1493 | - | - | - | - | - |
| 0.9648 | 30000 | 0.1509 | 0.1354 | 0.6805 (+0.1400) | 0.3593 (+0.0343) | 0.7439 (+0.2433) | 0.5946 (+0.1392) |
| 0.9970 | 31000 | 0.1481 | - | - | - | - | - |
| -1 | -1 | - | - | 0.6648 (+0.1244) | 0.3769 (+0.0519) | 0.7462 (+0.2455) | 0.5960 (+0.1406) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0.dev0
- PyTorch: 2.6.0.dev20241112+cu121
- Accelerate: 1.2.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/Triangulum-5B-i1-GGUF | mradermacher | "2025-01-03T23:02:50Z" | 826 | 0 | transformers | [
"transformers",
"gguf",
"triangulum_5b",
"sft",
"chain_of_thought",
"ollama",
"text-generation-inference",
"llama_for_causal_lm",
"reasoning",
"deep_think",
"CoT",
"LCoT",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:prithivMLmods/Triangulum-5B",
"base_model:quantized:prithivMLmods/Triangulum-5B",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-01T08:02:15Z" | ---
base_model: prithivMLmods/Triangulum-5B
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: creativeml-openrail-m
quantized_by: mradermacher
tags:
- triangulum_5b
- sft
- chain_of_thought
- ollama
- text-generation-inference
- llama_for_causal_lm
- reasoning
- deep_think
- CoT
- LCoT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Triangulum-5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Triangulum-5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
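For a Python-based route, the llama-cpp-python bindings can load these files directly. This is a hedged sketch, not the quant author's recommended setup: the filename matches the i1-Q4_K_M entry in the table below, while `n_ctx`, `max_tokens`, and the prompt are illustrative choices.
```python
# Sketch: download one of the quants listed below and run it with
# llama-cpp-python. Filename matches the i1-Q4_K_M table entry;
# n_ctx and the prompt are illustrative, not recommended settings.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Triangulum-5B-i1-GGUF",
    filename="Triangulum-5B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Explain chain-of-thought reasoning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```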
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q2_K.gguf) | i1-Q2_K | 2.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q4_0.gguf) | i1-Q4_0 | 3.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 3.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q4_1.gguf) | i1-Q4_1 | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Triangulum-5B-i1-GGUF/resolve/main/Triangulum-5B.i1-Q6_K.gguf) | i1-Q6_K | 4.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality than I would otherwise be able to.
<!-- end -->
|
albertus-sussex/veriscrape-fixed-simcse-nbaplayer-reference_4_to_verify_6-fold-2 | albertus-sussex | "2025-04-04T14:49:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-04T14:48:46Z" | |
SKLxAiforia/FriendV4 | SKLxAiforia | "2024-05-14T06:34:08Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-14T06:24:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
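In the absence of author-provided code, here is a hedged sketch of standard `transformers` usage. It assumes only what the repo tags state (a Llama-family causal LM stored with 4-bit bitsandbytes weights); the prompt and generation settings are illustrative, not the author's intended usage.
```python
# Hedged sketch based only on the repo tags (llama, text-generation,
# 4-bit bitsandbytes); prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SKLxAiforia/FriendV4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```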
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tals/albert-base-vitaminc_wnei-fever | tals | "2022-08-05T02:25:41Z" | 6 | 1 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"dataset:tals/vitaminc",
"dataset:fever",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
datasets:
- tals/vitaminc
- fever
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
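A hedged usage sketch follows: the model id is the one this card belongs to, but the claim/evidence pairing order and the meaning of the output classes are assumptions drawn from the VitaminC task setup, not confirmed by this card — verify against the linked repository before relying on them.
```python
# Hedged sketch: the (claim, evidence) pairing order and label meanings
# are assumptions from the VitaminC task setup; verify against the repo.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_id = "tals/albert-base-vitaminc_wnei-fever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "Vitamin C prevents the common cold."
evidence = "Trials found no consistent effect of vitamin C on cold incidence."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class order depends on the model config
```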
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|