modelId (string, 5 to 138 chars) | author (string, 2 to 42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-15 06:29:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 426 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-15 06:29:46) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
KimByeongSu/gpt-neo-125m-cs-finetuning-10000-2 | KimByeongSu | "2024-03-26T09:06:52Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-26T09:02:39Z" | ---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125m-cs-finetuning-10000-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-cs-finetuning-10000-2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3213
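A minimal usage sketch (assumed, not part of the original card) with the standard text-generation pipeline:
```python
# Minimal sketch: load the fine-tuned GPT-Neo checkpoint with the Transformers
# text-generation pipeline (assumed usage; prompt is illustrative only).
from transformers import pipeline

generator = pipeline("text-generation", model="KimByeongSu/gpt-neo-125m-cs-finetuning-10000-2")
print(generator("The operating system schedules processes by", max_new_tokens=40)[0]["generated_text"])
```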
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 130 | 3.3903 |
| No log | 2.0 | 260 | 3.3344 |
| No log | 3.0 | 390 | 3.3213 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
|
Thivin/distilbert-base-uncased-finetuned-ner | Thivin | "2022-11-18T10:51:19Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-11-18T09:10:34Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3100
- Precision: 0.9309
- Recall: 0.9435
- F1: 0.9371
- Accuracy: 0.9294
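A minimal usage sketch (assumed, not part of the original card) with the token-classification pipeline:
```python
# Minimal sketch: run the fine-tuned NER model with the token-classification
# pipeline (assumed usage; the example sentence is illustrative only).
from transformers import pipeline

ner = pipeline("token-classification", model="Thivin/distilbert-base-uncased-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```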
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 234 | 0.2362 | 0.9356 | 0.9484 | 0.9420 | 0.9335 |
| No log | 2.0 | 468 | 0.2854 | 0.9303 | 0.9425 | 0.9363 | 0.9282 |
| 0.2119 | 3.0 | 702 | 0.3100 | 0.9309 | 0.9435 | 0.9371 | 0.9294 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
slakshmi/my-pet-dog-xzg | slakshmi | "2023-08-11T15:21:22Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-11T15:16:16Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by slakshmi following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE94
Sample pictures of this concept:
|
testphase73/whisper-large-ur | testphase73 | "2023-04-10T06:36:57Z" | 77 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-04-07T06:28:37Z" | ---
language:
- ur
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Urdu - Bilal
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ur
split: test
args: 'config: ur, split: test'
metrics:
- name: Wer
type: wer
value: 75.2080188861798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Urdu - Bilal
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7874
- Wer: 75.2080
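A minimal usage sketch (assumed, not part of the original card; `sample_ur.wav` is a placeholder file name):
```python
# Minimal sketch: transcribe an Urdu audio file with the automatic-speech-recognition
# pipeline (assumed usage; replace "sample_ur.wav" with your own audio file).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="testphase73/whisper-large-ur")
print(asr("sample_ur.wav")["text"])
```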
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8408 | 0.27 | 1000 | 0.9658 | 168.8842 |
| 0.8024 | 0.54 | 2000 | 0.8615 | 79.9063 |
| 0.735 | 0.81 | 3000 | 0.8074 | 84.7556 |
| 0.692 | 1.08 | 4000 | 0.7874 | 75.2080 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
nvidia/stt_de_conformer_transducer_large | nvidia | "2025-02-27T13:02:50Z" | 34 | 6 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"de",
"dataset:VoxPopuli",
"dataset:multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2005.08100",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2022-06-28T02:45:53Z" | ---
language:
- de
library_name: nemo
datasets:
- VoxPopuli
- multilingual_librispeech
- mozilla-foundation/common_voice_7_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_de_conformer_transducer_large
results:
- task:
type: Automatic Speech Recognition
name: speech-recognition
dataset:
name: common-voice-7-0
type: mozilla-foundation/common_voice_7_0
config: de
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 4.93
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: german
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 3.85
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Vox Populi
type: polinaeterna/voxpopuli
args:
language: de
metrics:
- name: Test WER
type: wer
value: 5.7
---
# NVIDIA Conformer-Transducer Large (de)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech into the lower-case German alphabet, along with spaces.
It is a "large" version of the Conformer-Transducer model (around 120M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_de_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
output = asr_model.transcribe(['2086-149220-0033.wav'])
print(output[0].text)
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_de_conformer_transducer_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding instead of CTC Loss. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used to train the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of German speech:
- VoxPopuli (DE): 200-hour subset
- Multilingual LibriSpeech (MLS DE): 1,500-hour subset
- Mozilla Common Voice (v7.0)
Note: older versions of the model may have been trained on a smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | MCV7.0 dev | MCV7.0 test | MLS dev | MLS test | Voxpopuli dev | Voxpopuli test |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|----------------|
| 1.6.0 | SentencePiece Unigram | 1024 | 4.40 | 4.93 | 3.22 | 3.85 | 11.04 | 8.85 |
## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
adamo1139/Yi-1.5-34B-32K-AEZAKMI-1706 | adamo1139 | "2024-06-18T22:58:23Z" | 40 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T23:29:41Z" | ---
license: apache-2.0
---
Not too happy with it overall, since it still likes to reply that it's made by OpenAI, but at least most of the time the output seems reasonable and natural, and it is also somewhat uncensored. I don't like the new Yi 1.5; it's slopped too much.
I think some people are waiting for Yi 1.5 34B finetunes, so that's why I am releasing it. |
Jonjew/DarrylHannah1980 | Jonjew | "2025-03-08T18:59:27Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-08T18:59:22Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:Darryl_Hannah_Flux:1.2> This is a beautiful photograph of a woman,
blonde hair cascading over her shoulders. She is wearing a boatneck dress,
Standing in a cafe. Looking at the viewer. Smile.
output:
url: images/00011-42604771.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Darryl Hannah - 1980
<Gallery />
## Model description
FROM https://civitai.com/models/1080303/darryl-hannah-198090s-flux
Strength 1
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/DarrylHannah1980/tree/main) them in the Files & versions tab.
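A minimal sketch of applying this LoRA with diffusers (assumed usage, not part of the original card; it presupposes access to the gated FLUX.1-dev base model and a capable GPU):
```python
# Minimal sketch: load FLUX.1-dev and attach this LoRA with diffusers (assumed usage).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Jonjew/DarrylHannah1980")
image = pipe("photograph of a woman with blonde hair standing in a cafe, smiling", num_inference_steps=28).images[0]
image.save("out.png")
```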
|
Flo2000/RevRealistic | Flo2000 | "2023-11-04T16:16:55Z" | 0 | 0 | null | [
"text2text-generation",
"region:us"
] | text2text-generation | "2023-11-04T16:15:35Z" | ---
pipeline_tag: text2text-generation
--- |
shazab/videomae-base-finetuned-ucf_crime2 | shazab | "2023-05-30T11:21:43Z" | 63 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-05-30T07:02:15Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf_crime2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf_crime2
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8463
- Accuracy: 0.5200
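A minimal usage sketch (assumed, not from the original card; it requires a video decoding backend such as decord or PyAV, and `clip.mp4` is a placeholder file name):
```python
# Minimal sketch: classify a video clip with the video-classification pipeline
# (assumed usage; replace "clip.mp4" with your own video file).
from transformers import pipeline

classifier = pipeline("video-classification", model="shazab/videomae-base-finetuned-ucf_crime2")
print(classifier("clip.mp4"))
```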
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2700
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.475 | 0.05 | 135 | 0.9935 | 0.6004 |
| 1.44 | 1.05 | 270 | 1.4196 | 0.4274 |
| 1.1084 | 2.05 | 405 | 0.9135 | 0.6737 |
| 0.8732 | 3.05 | 540 | 1.1984 | 0.5479 |
| 1.4184 | 4.05 | 675 | 1.3373 | 0.4926 |
| 1.1355 | 5.05 | 810 | 0.9888 | 0.6148 |
| 0.4522 | 6.05 | 945 | 1.0745 | 0.5694 |
| 0.7754 | 7.05 | 1080 | 1.5848 | 0.5330 |
| 1.1235 | 8.05 | 1215 | 1.3688 | 0.5753 |
| 1.611 | 9.05 | 1350 | 0.6958 | 0.7694 |
| 0.5714 | 10.05 | 1485 | 0.8027 | 0.7542 |
| 0.716 | 11.05 | 1620 | 1.3503 | 0.6782 |
| 0.6642 | 12.05 | 1755 | 1.0798 | 0.6957 |
| 0.8451 | 13.05 | 1890 | 1.2328 | 0.7479 |
| 0.6157 | 14.05 | 2025 | 1.9403 | 0.5762 |
| 0.3358 | 15.05 | 2160 | 1.3435 | 0.6939 |
| 0.5394 | 16.05 | 2295 | 1.2524 | 0.7056 |
| 0.3334 | 17.05 | 2430 | 1.1190 | 0.7645 |
| 0.3513 | 18.05 | 2565 | 1.2137 | 0.7461 |
| 0.2531 | 19.05 | 2700 | 1.2131 | 0.7362 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
UrukHan/t5-russian-summarization | UrukHan | "2023-04-05T10:11:59Z" | 156,833 | 18 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:UrukHan/wav2vec2-russian",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-04-02T18:09:27Z" | ---
tags:
- generated_from_trainer
datasets: UrukHan/wav2vec2-russian
widget:
- text: Запад после начала российской специальной операции по демилитаризации Украины
ввел несколько раундов новых экономических санкций. В Кремле новые ограничения
назвали серьезными, но отметили, что Россия готовилась к ним заранее.
model-index:
- name: t5-russian-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
---
# t5-russian-summarization
---
A model for correcting text recognized from audio. The output of my speech recognition model, https://huggingface.co/UrukHan/wav2vec2-russian, can be fed into this model. Tested on a random video from YouTube.
<table border="0">
<tr>
<td><b style="font-size:30px">Input</b></td>
<td><b style="font-size:30px">Output</b></td>
</tr>
<tr>
<td>Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.</td>
<td>Запад ввел новые санкции против России</td>
</tr>
</table>
#
---
Training datasets:
UrukHan/t5-russian-summarization : https://huggingface.co/datasets/UrukHan/t5-russian-summarization
---
# Running inference; a working example with comments is available in Colab: https://colab.research.google.com/drive/1ame2va9_NflYqy4RZ07HYmQ0moJYy7w2?usp=sharing :
#
```python
# Install the transformers library
!pip install transformers
# Import libraries
from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast
# Set the name of the chosen model from the hub
MODEL_NAME = 'UrukHan/t5-russian-summarization'
MAX_INPUT = 256
# Load the model and tokenizer
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
# Input data (an array of phrases or a single text)
input_sequences = ['Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.'] # a single phrase can also be used: input_sequences = 'сеглдыя хорош ден'
task_prefix = "Spell correct: " # Tokenize the data
if type(input_sequences) != list: input_sequences = [input_sequences]
encoded = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=MAX_INPUT,
    truncation=True,
    return_tensors="pt",
)
predicts = model.generate(**encoded) # Generate predictions
tokenizer.batch_decode(predicts, skip_special_tokens=True) # Decode the output
```
#
---
# A notebook set up for running training and saving the model to your own repository on the Hugging Face Hub:
#https://colab.research.google.com/drive/1H4IoasDqa2TEjGivVDp-4Pdpm0oxrCWd?usp=sharing
#
```python
# Install libraries
!pip install datasets
!apt install git-lfs
!pip install transformers
!pip install sentencepiece
!pip install rouge_score
# Import libraries
import numpy as np
from datasets import Dataset
import tensorflow as tf
import nltk
from transformers import T5TokenizerFast, Seq2SeqTrainingArguments, Seq2SeqTrainer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
import torch
from transformers.optimization import Adafactor, AdafactorSchedule
from datasets import load_dataset, load_metric
# Load parameters
raw_datasets = load_dataset("xsum")
metric = load_metric("rouge")
nltk.download('punkt')
# Enter your Hugging Face Hub token
from huggingface_hub import notebook_login
notebook_login()
# Define parameters
REPO = "t5-russian-summarization" # Enter the repository name
MODEL_NAME = "UrukHan/t5-russian-summarization" # Enter the name of the chosen model from the hub
MAX_INPUT = 256 # Enter the maximum input length in tokens (length of input phrases in words; roughly half a word per token)
MAX_OUTPUT = 64 # Enter the maximum prediction length in tokens (can be reduced for summarization or other tasks with shorter outputs)
BATCH_SIZE = 8
DATASET = 'UrukHan/t5-russian-summarization' # Enter the dataset name
# Load the dataset (using other data types is described below)
data = load_dataset(DATASET)
# Load the model and tokenizer
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.config.max_length = MAX_OUTPUT # defaults to 20, so otherwise output sequences are truncated in all models
# Optional: comment this out after the first push to your repository
tokenizer.push_to_hub(REPO)
train = data['train']
test = data['test'].train_test_split(0.02)['test'] # Shrink the test split so that computing eval loss between epochs does not take too long
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) #return_tensors="tf"
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Rouge expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    # Add mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    return {k: round(v, 4) for k, v in result.items()}
training_args = Seq2SeqTrainingArguments(
output_dir = REPO,
#overwrite_output_dir=True,
evaluation_strategy='steps',
#learning_rate=2e-5,
eval_steps=5000,
save_steps=5000,
num_train_epochs=1,
predict_with_generate=True,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
fp16=True,
save_total_limit=2,
#generation_max_length=256,
#generation_num_beams=4,
weight_decay=0.005,
#logging_dir='logs',
push_to_hub=True,
)
# Choose the optimizer manually. T5 in its original architecture uses the Adafactor optimizer
optimizer = Adafactor(
model.parameters(),
lr=1e-5,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False,
)
lr_scheduler = AdafactorSchedule(optimizer)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset = train,
eval_dataset = test,
optimizers = (optimizer, lr_scheduler),
tokenizer = tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
trainer.push_to_hub()
```
#
---
# Example of converting arrays for this network
#
```python
import pandas as pd  # pandas is needed for building the Dataset below
input_data = ['Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.']
output_data = ['Запад ввел новые санкции против России']
# Tokenize the input data
task_prefix = "Spell correct: "
input_sequences = input_data
encoding = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=MAX_INPUT,
    truncation=True,
    return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask
# Tokenize the output data
target_encoding = tokenizer(output_data, padding="longest", max_length=MAX_OUTPUT, truncation=True)
labels = target_encoding.input_ids
# replace padding token id's of the labels by -100
labels = torch.tensor(labels)
labels[labels == tokenizer.pad_token_id] = -100
# Convert our data into the Dataset format
data = Dataset.from_pandas(pd.DataFrame({'input_ids': list(np.array(input_ids)), 'attention_mask': list(np.array(attention_mask)), 'labels': list(np.array(labels))}))
data = data.train_test_split(0.02)
# and feed these into our trainer: train_dataset = data['train'], eval_dataset = data['test']
```
|
bguisard/rl_course_vizdoom_health_gathering_supreme | bguisard | "2023-03-04T21:54:27Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-04T20:56:45Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 18.75 +/- 3.84
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r bguisard/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.8.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.8.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
no3/june-wd-1.3-beta2 | no3 | "2023-01-27T14:36:02Z" | 7 | 1 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-12-07T06:59:22Z" | ---
license: creativeml-openrail-m
---
### June from [Obituary - A Grave Beginning](https://invidious.weblibre.org/watch?v=0l940bPkV1o) on [WD](https://huggingface.co/hakurei/waifu-diffusion) via Dreambooth
#### model by no3
This is a waifu-diffusion v1.3 model fine-tuned on the June concept, taught to waifu-diffusion v1.3 with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks_june**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts).
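A minimal local sketch of the diffusers route mentioned above (assumed usage, not part of the original card; it presupposes a CUDA GPU and that the repository loads as a standard StableDiffusionPipeline):
```python
# Minimal sketch: generate an image of the concept with diffusers, using the
# instance prompt "sks_june" (assumed usage; prompt is illustrative only).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("no3/june-wd-1.3-beta2", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait of sks_june, highly detailed").images[0]
image.save("june.png")
```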
### note
If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that uses .ckpt files, just download the ckpt file here for your convenience. **Just click on "june-wd-1.3-beta2.ckpt"**
[june-wd-1.3-beta2.ckpt](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/june-wd-1.3-beta2.ckpt)
If you have issues or questions feel free to visit the Community Tab and start discussion about it.
Here are images used for training this concept:




















 |
luohy/L8-it | luohy | "2025-03-12T20:34:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T20:30:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
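In the absence of an official snippet, a minimal sketch (assuming the standard chat-style Transformers API for a Llama text-generation model) might look like this:
```python
# Minimal sketch (assumed usage): load the model and run a short chat-style generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("luohy/L8-it")
model = AutoModelForCausalLM.from_pretrained("luohy/L8-it", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```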
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vinyzeira/est | vinyzeira | "2025-04-09T21:08:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-09T21:08:56Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
mradermacher/Multimash3-12B-slerp-GGUF | mradermacher | "2024-05-22T19:23:25Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/Multimerge-12B-MoE",
"TomGrc/FusionNet_7Bx2_MoE_v0.1",
"en",
"base_model:allknowingroger/Multimash3-12B-slerp",
"base_model:quantized:allknowingroger/Multimash3-12B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-22T18:37:58Z" | ---
base_model: allknowingroger/Multimash3-12B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/Multimerge-12B-MoE
- TomGrc/FusionNet_7Bx2_MoE_v0.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/allknowingroger/Multimash3-12B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
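As a minimal sketch of local usage (assuming `llama-cpp-python` is installed and that one of the quant files below, here Q4_K_M, has already been downloaded):
```python
# Minimal sketch: run a downloaded GGUF quant locally with llama-cpp-python
# (assumed usage; the model_path must match the file you downloaded).
from llama_cpp import Llama

llm = Llama(model_path="Multimash3-12B-slerp.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a model merge is in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```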
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Samantha-1.11-70b-i1-GGUF | mradermacher | "2025-03-15T06:45:20Z" | 471 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ehartford/samantha-data",
"base_model:cognitivecomputations/Samantha-1.11-70b",
"base_model:quantized:cognitivecomputations/Samantha-1.11-70b",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-15T02:52:18Z" | ---
base_model: cognitivecomputations/Samantha-1.11-70b
datasets:
- ehartford/samantha-data
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.11-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Samantha-1.11-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
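For the two-part Q6_K files listed below, a minimal Python sketch of the concatenation step (assuming both part files have already been downloaded into the working directory):
```python
# Minimal sketch: join a multi-part GGUF download into a single file
# (assumed file names, matching the two-part Q6_K quant in the table below).
import shutil

parts = [
    "Samantha-1.11-70b.i1-Q6_K.gguf.part1of2",
    "Samantha-1.11-70b.i1-Q6_K.gguf.part2of2",
]
with open("Samantha-1.11-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part into the combined file
```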
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 23.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_1.gguf) | i1-Q4_1 | 43.3 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
WillHeld/DiVA-llama-3-token-align-8b | WillHeld | "2024-10-08T22:41:36Z" | 6 | 0 | null | [
"safetensors",
"diva",
"custom_code",
"arxiv:2410.02678",
"region:us"
] | null | "2024-08-27T06:36:50Z" | # Model Card for Diva Llama 3
<!-- Provide a quick summary of what the model is/does. [Optional] -->
This is an ablation of our Distilled Voice Assistant (DiVA) model which can handle speech and text as inputs. This ablation is trained using only token-alignment loss as described in the ablations here: https://huggingface.co/papers/2410.02678
Weights and Biases Run: https://wandb.ai/i18nlp/DiVA%20Training%20Runs/runs/4t0mvbcd?nw=nwuserheld
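The repository ships custom modeling code (the `diva` architecture tag), so loading it presumably goes through `trust_remote_code`; a minimal, unverified sketch of just the loading step:
```python
# Minimal sketch (assumed usage): the repo defines a custom "diva" architecture,
# so trust_remote_code is required; the exact inference call may differ.
from transformers import AutoModel

model = AutoModel.from_pretrained("WillHeld/DiVA-llama-3-token-align-8b", trust_remote_code=True)
```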
## Citation
This is the token-alignment only model from https://huggingface.co/papers/2410.02678
**BibTeX:**
```
@misc{DiVA,
title={{D}istilling an {E}nd-to-{E}nd {V}oice {A}ssistant {W}ithout {I}nstruction {T}raining {D}ata},
author={William Held and Ella Li and Michael Ryan and Weiyan Shi and Yanzhe Zhang and Diyi Yang},
year={2024},
eprint={2410.02678},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.02678},
}
```
## Table of Contents
- [Model Card for DiVA Llama 3](#model-card-for-DiVA-Llama-3)
- [Citation](#citation)
- [Table of Contents](#table-of-contents)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Model Card Contact](#model-card-contact)
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on the [CommonVoice](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1) corpus.
### Training Procedure
This model was trained for 7k gradient steps with a batch size of 512 recordings and a linearly decaying learning rate from 5e-5 to zero, with a linear warmup of 70 steps.
### Environmental Impact
- **Hardware Type:** V4-32 TPU
- **Hours used:** 8 Hours
- **Cloud Provider:** Google Cloud.
- **Compute Region:** US Central C
### Hardware
This model was trained on a v4 TPU on Google Cloud.
### Software
This model was trained with [Levanter](https://github.com/stanford-crfm/levanter)
## Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Will Held
## Model Card Contact
[email protected]
|
robiulawaldev/110df085-588f-4c70-b01d-1111475281a6 | robiulawaldev | "2025-02-01T18:24:27Z" | 22 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-01T18:15:26Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 110df085-588f-4c70-b01d-1111475281a6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c24ac0a103d8d0ec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c24ac0a103d8d0ec_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/110df085-588f-4c70-b01d-1111475281a6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c24ac0a103d8d0ec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6b3e5022-7d53-4da2-9cdb-16c61a46a191
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6b3e5022-7d53-4da2-9cdb-16c61a46a191
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 110df085-588f-4c70-b01d-1111475281a6
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
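A minimal loading sketch (assumed, not part of the original card), attaching the LoRA adapter to its base model with PEFT:
```python
# Minimal sketch (assumed usage): load the adapter on top of its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Theta-Llama-3-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "robiulawaldev/110df085-588f-4c70-b01d-1111475281a6")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Theta-Llama-3-8B")
```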
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.1740 |
| 0.6706 | 0.0245 | 50 | 0.7045 |
| 0.6236 | 0.0490 | 100 | 0.7018 |
| 0.6783 | 0.0735 | 150 | 0.6844 |
| 0.6583 | 0.0980 | 200 | 0.6733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf | RichardErkhov | "2025-03-13T16:55:17Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-13T16:41:01Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-0.5b-anghabench-32kcw - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/qwen2.5-0.5b-anghabench-32kcw/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-0.5b-anghabench-32kcw.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q2_K.gguf) | Q2_K | 0.32GB |
| [qwen2.5-0.5b-anghabench-32kcw.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [qwen2.5-0.5b-anghabench-32kcw.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [qwen2.5-0.5b-anghabench-32kcw.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q3_K.gguf) | Q3_K | 0.33GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [qwen2.5-0.5b-anghabench-32kcw.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q4_0.gguf) | Q4_0 | 0.33GB |
| [qwen2.5-0.5b-anghabench-32kcw.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q4_K.gguf) | Q4_K | 0.37GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q4_1.gguf) | Q4_1 | 0.35GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q5_0.gguf) | Q5_0 | 0.37GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q5_K.gguf) | Q5_K | 0.39GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q5_1.gguf) | Q5_1 | 0.39GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q6_K.gguf) | Q6_K | 0.47GB |
| [qwen2.5-0.5b-anghabench-32kcw.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf/blob/main/qwen2.5-0.5b-anghabench-32kcw.Q8_0.gguf) | Q8_0 | 0.49GB |
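As a hedged example (the binary name, quant choice, and prompt are assumptions, not instructions from the original model), a downloaded file can be run locally with llama.cpp:

```shell
# Fetch one quant from this repo and run it with llama.cpp's CLI
huggingface-cli download RichardErkhov/ahmedheakl_-_qwen2.5-0.5b-anghabench-32kcw-gguf \
  qwen2.5-0.5b-anghabench-32kcw.Q4_K_M.gguf --local-dir .
llama-cli -m qwen2.5-0.5b-anghabench-32kcw.Q4_K_M.gguf -p "int add(int a, int b) {" -n 256
```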
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-anghabench-32kcw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-anghabench-32kcw
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) on the anghabench dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0034 | 0.4091 | 25000 | 0.0028 |
| 0.0017 | 0.8181 | 50000 | 0.0016 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
LHRuig/alexismngmsx | LHRuig | "2025-02-20T05:37:56Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-20T05:37:22Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alexismngmsx
---
# alexismngmsx
<Gallery />
## Model description
alexismngmsx lora
## Trigger words
You should use `alexismngmsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alexismngmsx/tree/main) them in the Files & versions tab.
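A minimal sketch of loading these LoRA weights with `diffusers` (the weight filename and sampling settings are assumptions; adjust them to the actual file in this repository):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (requires access to its weights on the Hub)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA weights from this repository (filename is an assumption)
pipe.load_lora_weights("LHRuig/alexismngmsx", weight_name="alexismngmsx.safetensors")

# Include the trigger word in the prompt
image = pipe("alexismngmsx wearing a suit, studio portrait", num_inference_steps=28).images[0]
image.save("suit.png")
```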
|
darthhexx/Phi-3-medium-128k-instruct-awq | darthhexx | "2024-07-10T08:29:58Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"multilingual",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-10T08:22:24Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Medium-128K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family; the Medium version comes in two variants, [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which denote the context length (in tokens) that each can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require :
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3-Medium-128k-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Tokenizer
Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
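A minimal sketch of what such an extension could look like (the token names here are purely illustrative and are not tokens shipped with the model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-medium-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Register illustrative special tokens for a downstream task (hypothetical names)
tokenizer.add_special_tokens({"additional_special_tokens": ["<|tool_call|>", "<|tool_result|>"]})

# Keep the embedding matrix in sync with the tokenizer (the model reserves up to 32064 entries)
model.resize_token_embeddings(len(tokenizer))
```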
### Chat Format
Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, it can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_id = "microsoft/Phi-3-medium-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128k tokens
* GPUs: 512 H100-80G
* Training time: 42 days
* Training data: 4.8T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: The model weight is released on May 21, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks
We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
|Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6|
|MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0|
|BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7|
|ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7|
|HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3|
|ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6|
|ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8|
|BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3|
|CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7|
|MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7|
|OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4|
|PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1|
|Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7|
|TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2|
|WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7|
|TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3|
|GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2|
|HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9|
|MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7|
|Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2|
We take a closer look at different categories across 80 public benchmark datasets at the table below:
|Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
| Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 |
| Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 |
| Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 |
| Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 |
| Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 |
| Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 |
| Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 |
| Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)
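For GPUs without flash-attention support, one possible fallback (a sketch, not an instruction from this card) is to request the eager attention implementation when loading the model:

```python
from transformers import AutoModelForCausalLM

# Load without flash attention, e.g. on older GPU architectures
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # avoid the flash-attention hardware requirement
)
```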
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Medium models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of devices (CPU, GPU, and mobile).
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
Shezus/finetuning-sentiment-model-5000-samples | Shezus | "2023-07-03T23:03:36Z" | 99 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-03T22:54:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-5000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.903
- name: F1
type: f1
value: 0.902902902902903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- Accuracy: 0.903
- F1: 0.9029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lesso11/b29585bd-0b4f-41a1-9bcf-240bb6a55365 | lesso11 | "2025-03-09T16:26:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-09T15:44:23Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b29585bd-0b4f-41a1-9bcf-240bb6a55365
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5459a8788029b49c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5459a8788029b49c_train_data.json
type:
field_input: sent_1
field_instruction: original_l1
field_output: sent_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso11/b29585bd-0b4f-41a1-9bcf-240bb6a55365
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000211
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/5459a8788029b49c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 110
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4ff3bc45-7250-410b-9a91-04399ff26318
wandb_project: 11a
wandb_run: your_name
wandb_runid: 4ff3bc45-7250-410b-9a91-04399ff26318
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b29585bd-0b4f-41a1-9bcf-240bb6a55365
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000211
- train_batch_size: 4
- eval_batch_size: 4
- seed: 110
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.9322 |
| 1.2781 | 0.2462 | 500 | 1.3043 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zelk12/MT-Merge7-gemma-2-9B | zelk12 | "2025-03-03T14:37:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge7-NC-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge7-NC-gemma-2-9B",
"base_model:zelk12/MT-Merge7-UW-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge7-UW-gemma-2-9B",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-03T14:18:24Z" | ---
base_model:
- zelk12/MT-Merge7-UW-gemma-2-9B
- zelk12/MT-Merge7-NC-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
license: gemma
pipeline_tag: text-generation
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
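For reference, spherical linear interpolation between two weight vectors $p_0$ and $p_1$ with interpolation factor $t$ (here $t = 0.25$, see the configuration below; mergekit applies this roughly per weight tensor) follows the standard formula:

$$
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, p_0 + \frac{\sin(t\,\theta)}{\sin\theta}\, p_1,
\qquad \theta = \arccos\!\left(\frac{p_0 \cdot p_1}{\lVert p_0\rVert\,\lVert p_1\rVert}\right)
$$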
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge7-UW-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge7-UW-gemma-2-9B)
* [zelk12/MT-Merge7-NC-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge7-NC-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge7-UW-gemma-2-9B
- model: zelk12/MT-Merge7-NC-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Merge7-UW-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
``` |
tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF | tensorblock | "2024-12-28T18:31:50Z" | 20 | 0 | transformers | [
"transformers",
"gguf",
"code",
"TensorBlock",
"GGUF",
"en",
"base_model:WebraftAI/synapsellm-7b-mistral-v0.4-preview2",
"base_model:quantized:WebraftAI/synapsellm-7b-mistral-v0.4-preview2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-28T17:47:05Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
- TensorBlock
- GGUF
base_model: WebraftAI/synapsellm-7b-mistral-v0.4-preview2
model-index:
- name: synapsellm-7b-mistral-v0.4-preview2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 74.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## WebraftAI/synapsellm-7b-mistral-v0.4-preview2 - GGUF
This repo contains GGUF format model files for [WebraftAI/synapsellm-7b-mistral-v0.4-preview2](https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
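As a sketch (the binary name, quant choice, and flags are assumptions), the prompt template can be passed directly to llama.cpp's CLI:

```shell
llama-cli -m synapsellm-7b-mistral-v0.4-preview2-Q4_K_M.gguf \
  -p "<s>[INST] Write a Python function that reverses a string. [/INST]" -n 256
```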
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [synapsellm-7b-mistral-v0.4-preview2-Q2_K.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [synapsellm-7b-mistral-v0.4-preview2-Q3_K_S.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [synapsellm-7b-mistral-v0.4-preview2-Q3_K_M.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [synapsellm-7b-mistral-v0.4-preview2-Q3_K_L.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [synapsellm-7b-mistral-v0.4-preview2-Q4_0.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [synapsellm-7b-mistral-v0.4-preview2-Q4_K_S.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [synapsellm-7b-mistral-v0.4-preview2-Q4_K_M.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [synapsellm-7b-mistral-v0.4-preview2-Q5_0.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [synapsellm-7b-mistral-v0.4-preview2-Q5_K_S.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [synapsellm-7b-mistral-v0.4-preview2-Q5_K_M.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [synapsellm-7b-mistral-v0.4-preview2-Q6_K.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [synapsellm-7b-mistral-v0.4-preview2-Q8_0.gguf](https://huggingface.co/tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF/blob/main/synapsellm-7b-mistral-v0.4-preview2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF --include "synapsellm-7b-mistral-v0.4-preview2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/synapsellm-7b-mistral-v0.4-preview2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
FounderOfHuggingface/gpt2_lora_r64_dbpedia_14_t300_e5_member_shadow40 | FounderOfHuggingface | "2023-12-05T01:20:35Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-05T01:20:31Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF | tolgadev | "2024-02-13T18:28:03Z" | 228 | 14 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"trendyol",
"llama-2",
"turkish",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-02-12T08:27:44Z" | ---
model_name: Trendyol-LLM-7b-chat-v0.1
model_creator: Trendyol
base_model: Trendyol/Trendyol-LLM-7b-chat-v0.1
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
tags:
- trendyol
- llama-2
- turkish
quantized_by: tolgadev
---
## Trendyol-LLM-7b-chat-v0.1-GGUF models
----
## Description
This repo contains all types of GGUF formatted model files for [Trendyol-LLM-7b-chat-v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1).
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
## Quantized LLM models and methods
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Trendyol-LLM-7b-chat-v0.1.Q2_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q2_K.gguf) | Q2_K | 2 | 2.59 GB| 4.88 GB | smallest, significant quality loss - not recommended for most purposes |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.01 GB| 5.56 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.36 GB| 5.91 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_L.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.66 GB| 6.20 GB | small, substantial quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q4_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_0.gguf) | Q4_0 | 4 | 3.9 GB| 6.45 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Trendyol-LLM-7b-chat-v0.1.Q4_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.93 GB| 6.48 GB | small, greater quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q4_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.15 GB| 6.69 GB | medium, balanced quality - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q5_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_0.gguf) | Q5_0 | 5 | 4.73 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Trendyol-LLM-7b-chat-v0.1.Q5_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.75 GB| 7.27 GB | large, low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q5_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.86 GB| 7.40 GB | large, very low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q6_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q6_K.gguf) | Q6_K | 6 | 5.61 GB| 8.15 GB | very large, extremely low quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q8_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q8_0.gguf) | Q8_0 | 8 | 7.27 GB| 9.81 GB | very large, extremely low quality loss - not recommended |
The names of the quantization methods follow the naming convention: "q" + the number of bits + the variant used (detailed below). Here is a list of all the models and their corresponding use cases, based on model cards made by [TheBloke](https://huggingface.co/TheBloke/):
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
**TheBloke recommends using Q5_K_M** as it preserves most of the model's performance.
Alternatively, you can use Q4_K_M if you want to save some memory.
In general, K_M versions are better than K_S versions.
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
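For manual downloads, a single quant can also be fetched with the Hugging Face CLI (a sketch; pick whichever quant file you need from the table above):

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF \
  trendyol-llm-7b-chat-v0.1.Q5_K_M.gguf --local-dir .
```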
## Special thanks to [TheBloke on Huggingface](https://huggingface.co/TheBloke) and [Maxime Labonne on Github](https://github.com/mlabonne/llm-course)
-----
## Model Details
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
# **Trendyol LLM**
Trendyol LLM is a generative model that is based on LLaMa2 7B model. This is the repository for the chat model.
## Model Details
**Model Developers** Trendyol
**Variations** base and chat variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on LLaMa2 7b) that uses an optimized transformer architecture. The chat version is fine-tuned on 180K instruction sets using LoRA with the following trainable settings:
- **lr**=1e-4
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **modules_to_save**=embed_tokens,lm_head
- **lora_dropout**=0.05
- **fp16**=True
- **max_seq_length**=1024
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, LlamaTokenizer, pipeline
model_id = "Trendyol/Trendyol-LLM-7b-chat-v0.1"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.1
)
DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"
TEMPLATE = (
"[INST] <<SYS>>\n"
"{system_prompt}\n"
"<</SYS>>\n\n"
"{instruction} [/INST]"
)
def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt})
def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
prompt = generate_prompt(user_query, sys_prompt)
outputs = pipe(prompt,
**sampling_params
)
return outputs[0]["generated_text"].split("[/INST]")[-1]
user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences. |
KUN810/lora_of_Benares_from_Honkai_Ipmact_3rd | KUN810 | "2023-08-07T06:26:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-08-07T05:28:09Z" | 崩坏3贝纳勒斯的lora,由于图源较少因此有些过拟合。
例图别为本lora效果和配合细节增强(add_detail.safetensors)的效果。

 |
msiudek/astroPT_euclid_VIS_model | msiudek | "2025-03-19T10:18:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-11T16:06:46Z" |
# Euclid astroPT: a Foundation Model for Astronomy
Here we have the model files for the astroPT project trained on Euclid VIS images. The code to run inference
with these models can be found here:
[https://github.com/smith42/astropt](https://github.com/smith42/astropt)
|
kenrogers/gte-ft-yt | kenrogers | "2025-02-23T08:52:45Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:84",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:finetune:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-23T08:48:51Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:84
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
widget:
- source_sentence: "1. What advancements in technology are mentioned as contributing\
\ to faster inference times in applications? \n2. In what scenarios does the\
\ context suggest that response latency is less of a concern for users?"
sentences:
- your take on this yeah I mean so no not better uh it's definitely different it's
definitely uh you know do it's trying to do a different thing which is dope I
would say like at the end of the day uh they're they're using the same process
but they're they're they're finding different ways to uh to take advantage of
that process uh the recurrent depth is more of an architecture change right it's
more of a let's actually get this reasoning inherent to the to the model we're
going to train it to be very good at this recurrent task we're going to train
it to do this this accordion thing that it does very well right uh versus coconut
which is like let's adapt and add to existing uh architecture right to to get
this uh this kind of reasoning flavor that that coconut winds to get or winds
up getting so it's it's it's a it's the same process two different approaches
though where they're coming at it from two different angles uh I would say like
current depth is uh is interesting because it's
- right we kind of got to go a little bit more into the blackbox we gota go back
beyond the unknown yeah it happens but it's it's it's it's the the timing is right
and uh with companies like you know Nvidia with companies the other accelerators
that are that are coming out they're super good at inference Gro and S all these
other peeps right uh we're getting real fast at inference and so the spending
that time you know becomes less and less impactful to the user experience but
more importantly uh you know we have a lot of applications LMS aren't good for
yet where we don't care about response latency like research like uh PhD level
math where it's like it doesn't matter if it takes a day yeah because that means
it didn't take some some other person a day right like that's the that's the the
we're at this time the models are capable enough that we can think about problems
that we can't just do ourselves faster it the whole the whole you know ecosystem
is set up for this to be the right
- today that's right that's right so so reasoning is some right now because our
models are System One machines right this is the this is the they're not reasoners
they're they're uh they're they're just they they just do they just do they just
do right uh we need some way to stretch them into this reasoning domain and the
way that we do that is through some kind of test time computer some kind of test
time scaling things that you know it's interesting to think about but something
like an agent right is an example or expression of test time compute right we're
we're we're using the agent to leverage more compute to do cooler things right
so these kinds of systems are also test time compute uh very broad definition
you love agents are also reasoning right that's right agents are reason there
you go but the idea is that we we need some way to stretch the system one machine
to a system two machine and the way that we know how to do that right now is is
through these time compute methods
- source_sentence: '1. What are the two main approaches being demonstrated in the
context of reasoning and latent space?
2. How does the new coconut Library fit into the discussion of test time compute
scaling?'
sentences:
- going to be in latent space we're going to be in embedding space we're going to
be in the space where we can do math and stuff and importantly we can kind of
think that we're putting in this big old sequence you know especially if you think
of these long context LMS we're just jamming context in there and then we're popping
out one to one single token okay so really at the end of the day you can kind
of think of this as we're kind of doing this compression okay we're taking all
of this POS possibility space and all this crazy and then we're just like one
token we just want one so it's kind of interesting to to think off the bat that
llms in this sense are kind of giant compression algorithms we are condensing
all of that information into one of Let's just call it 500,000 different tokens
that we might have there are many different sizes of possible vocabulary but let's
pick a pick a big number that is on the order of magnitude of something we might
see hundreds of thousands here down
- to the most upvoted questions at the end of the sesh if you want to jump in on
YouTube or on LinkedIn live and throw a comment in live please do during the discussions
and join us in investigating this really cool new space all right with that let's
go ahead and hop right into it guys today we're talking about reasoning in continuous
latent space all right so we want to kind of wrap our head around all of these
key words and this is a really really cool idea when we can finally start to grock
it so I hope you guys are feeling as excited about it as I am by the end of the
session ideally after this hour you spend with us you're going to understand reasoning
in continuous Laten space including the continuous Chain of Thought or coconut
and recurrent depth approaches we want to discuss the impact of this kind of approach
on test time compute scaling some of the working hypotheses and some of the things
people are interested in in looking out there on the llm edge for as we continue
to
- impact of this kind of approach on test time compute scaling some of the working
hypotheses and some of the things people are interested in in looking out there
on the llm edge for as we continue to see the field progress I want to demonstrate
both approaches and check out the new coconut Library as well so how we're going
to go through this is we're going to essentially introduce this idea of reasoning
and latent space then we're going to talk about the scaling part of this before
we dig into the specific approaches and we get the demo on both approaches by
the end so it should be a lot of fun today let's go ahead and dig in reasoning
in latent space let's root ourselves first in some definitions when we talk about
reasoning we're talking about the action of thinking about something and it's
kind of funny in a logical way if you look up logic it uses the word reason and
there we are caught in a loop but reasoning is about thinking latent space is
about using a representation of our
- source_sentence: '1. What is the main idea behind recurrent depth as described in
the context?
2. How do the scaling tools mentioned in the context interact with each other?'
sentences:
- well it let's go back to our gpt2 style diagram and think about this the input
embeddings here are where we're essentially looping back to so what we do is we
kind of loop back before we generate the next token right back to this embedded
space and and I'm basically GNA run through again before I give you the next token
I'm going to keep chewing on it I'm going to keep thinking about it and this could
be you know in gbt2 this was 12 different decoder block Stacks you can imagine
a lot of different configurations and ways to do this but essentially what are
we doing we're avoiding that compression by staying in the latent space okay we're
avoiding that compression because of course when we do the actual prediction of
the next token you know this is my little Transformer here this is from The Illustrated
Transformer that also has an encoder and a decoder stack but the point here is
to look at the next token prediction to realize this is the GPT style decoder
stack and we are having an
- it's kind of funny in a logical way if you look up logic it uses the word reason
and there we are caught in a loop but reasoning is about thinking latent space
is about using a representation of our data that sort of captures the essential
features of it we can think of latent space as embedding space or the space of
math and numbers in other words it's just not the space of words and natural language
let's think about how this manifests in a Transformer architecture here I'm showing
a GPT style architecture from the gpt2 paper what we want to think about is we
want to put a sequence in and we want to get some next token prediction out when
we put the sequence in we're in the space of natural language when we get the
next token out we're in the space of natural language betwix in between we're
going to be in latent space we're going to be in embedding space we're going to
be in the space where we can do math and stuff and importantly we can kind of
think that we're putting in this big
- the thing yeah yeah okay okay so so recurrent depth in short I mean is like you
think about a single token and then you let the sequence go like that's what I
thought was interesting and and maybe that's not exactly right but that was my
understanding yeah I so that's not that's not yet not yet what's happening but
the idea is that this is complimentary uh so the idea is that we have this these
these and they call it out in the paper which is why we're bringing it up right
uh but the the idea is that we have this uh complimentary uh Suite of scaling
tools that shouldn't interfere with each other right that should allow us to uh
to to go forward unimpeded and that's and that's the idea right so uh we can marry
these methods together they're not yet married together right so we we still uh
we still decode one output right we're not de decoding like a token at a time
but you could certainly put this in a loop where it's going to think a lot about
the next stage of thinking right so we
- source_sentence: '1. What is the significance of having thousands of points in Laden
space compared to being limited to 500,000 tokens in token space?
2. How does the ability to represent every floating point number in each element
of the dimension embedding contribute to the expressiveness of the model?'
sentences:
- chains of thought and this is where this idea of test time compute came up and
this was a paper from Google in August last year called scaling test time compute
you know it's basically taking that scaling paper originally and saying well now
we have this sort of other axis to scale on and again this is the idea that we're
anthropomorphizing a little bit but humans tend to think longer on difficult problems
maybe we should let machines do that and when we think of test time Compu it's
just time spent thinking you know and so if we we think about kind of how we can
leverage this we've seen some of these things come out in recent weeks and recent
months we talked about deep seek R1 just last week and you know this is the same
idea it thinks before it answers and this is again just sort of the next step
in the evolution of what we've got going on here and we saw moreover deep seek
one generates one token at a time it's able to spend more time processing and
it generates these thinking
- architecture diagram let's think about how we're still kind of doing this loop
back we're still doing this reasoning in in space and now let's label the Prelude
the recurrent block and the Koda we want to think about the recurrent block as
an entire block or stack this is the useful way to sort of take this to the next
level and what we want to do is we want to imagine that now we're going to set
this up so that we're going to put a bunch of recurrent blocks kind of in parallel
recur to occur again right and we're going to set it up so it looks something
like this we have a single Prelude we have one recurrent block we have two recurrent
block we have n recurrent block and then we get a single Koda or output you can
configure this as whiz will show you in the code many different ways and this
is the big idea and it's a natural extension sort of depthwise to what we saw
with coconut so the big idea here is that you don't need to use tokens directly
same big idea as coconut recurrent
- across for you and we will go into more detail you know uh throughout the presentation
today yeah I mean like the big the big idea here right the the the the the the
big fun thing is that we have uh thousands and thousands and thousands and millions
right of of of points on the line that we can exist in Laden space whereas we're
like kind of owned by 500,000 tokens token space Oh uh you know like having having
every every every floating Point number in every single element of the dimension
embedding right uh can be expressed in Laden space so even if we only had you
know like uh 20,000 numbers we could represent per element but we have 4,000 elements
you do the math big number right so the more than 500,000 more than 500,000 certainly
right okay okay we scaled it up at that point we've scaled it up we have more
Nuance right we have like very slightly different as opposed to massively different
right and this uh this allows us to be more expressive yes we have to get back
to token
- source_sentence: "1. What has changed in the time required for inference that allows\
\ for more progress to be made? \n2. What challenges did participants face during\
\ the engineering boot camp in early 2024?"
sentences:
- and in early 2024 a lot of people were having you know issues with with streaming
the token out and a lot of people were you know it's like it's like it just becomes
so much easier to get you want a quick result boom gbt 40 mini or whatever it
is whatever equivalent of model are so good at those quick results those sort
of system one results that now we're like okay what if we want to tackle bigger
Beyond a single task kind of problems like we're seeing with deep research like
we're seeing with these other things that require it to go chew on some things
but I want to also just dig in there real quick because you mentioned agents and
when we think about deep research or some of these types of tools they're actually
agentic and they're using tools what we're talking about here is we're talking
about reasoning inside the llm and we're talking about doing engineering within
the llm and and sort of giving giving the sort of the brain itself instead of
the application we're not giving the
- is that we have some idea we have some thoughts that that say well we need to
keep progress going so what's the next lowest hanging fruit that is accommodated
by our Hardware uh and that's why it's like well we can just spend more time doing
inference then right we we can we can do inference so fast now that spending extra
time in inference isn't uh is feasible you know what would have used to take months
or or or or at least weeks now can take you know a day or hours and so it makes
sense you know the the the circumstances have changed uh we're running up against
a a wall with our tried and true bread and butter methods uh and so now is the
time for these you know for these kinds of uh leaps of progress yeah yeah and
I remember you know when we were teaching like the a engineering boot camp and
in early 2024 a lot of people were having you know issues with with streaming
the token out and a lot of people were you know it's like it's like it just becomes
so much easier to get you want
- we're at this time the models are capable enough that we can think about problems
that we can't just do ourselves faster it the whole the whole you know ecosystem
is set up for this to be the right time to push reason oh man okay all right there's
so many rabbit holes let's avoid them and let's keep it moving thanks whiz for
your insights on that as as well let's get into coconut guys let's talk about
how this actually manifests itself again big idea um you know we can start at
the very high level we can say this is about latent space it's not about language
space okay this is about and this is exactly what you'll read in the paper you'll
read language space may not always be optimal for reasoning let's go okay yeah
we got it and we want to utilize the last hidden state of the llm as a representation
of the reasoning State when we say hidden state or latent space or embedding space
or this sort of space of math and computation we're talking about the same space
of course the the exact
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8333333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8333333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8333333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9330328858630988
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9097222222222222
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9097222222222222
name: Cosine Map@100
---
# SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct). It maps sentences & paragraphs to a 1536-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) <!-- at revision 0d2ad8e1ac654a2b626e62154778a70868141208 -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1536 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("kenrogers/gte-ft-yt")
# Run inference
sentences = [
'1. What has changed in the time required for inference that allows for more progress to be made? \n2. What challenges did participants face during the engineering boot camp in early 2024?',
"is that we have some idea we have some thoughts that that say well we need to keep progress going so what's the next lowest hanging fruit that is accommodated by our Hardware uh and that's why it's like well we can just spend more time doing inference then right we we can we can do inference so fast now that spending extra time in inference isn't uh is feasible you know what would have used to take months or or or or at least weeks now can take you know a day or hours and so it makes sense you know the the the circumstances have changed uh we're running up against a a wall with our tried and true bread and butter methods uh and so now is the time for these you know for these kinds of uh leaps of progress yeah yeah and I remember you know when we were teaching like the a engineering boot camp and in early 2024 a lot of people were having you know issues with with streaming the token out and a lot of people were you know it's like it's like it just becomes so much easier to get you want",
"and in early 2024 a lot of people were having you know issues with with streaming the token out and a lot of people were you know it's like it's like it just becomes so much easier to get you want a quick result boom gbt 40 mini or whatever it is whatever equivalent of model are so good at those quick results those sort of system one results that now we're like okay what if we want to tackle bigger Beyond a single task kind of problems like we're seeing with deep research like we're seeing with these other things that require it to go chew on some things but I want to also just dig in there real quick because you mentioned agents and when we think about deep research or some of these types of tools they're actually agentic and they're using tools what we're talking about here is we're talking about reasoning inside the llm and we're talking about doing engineering within the llm and and sort of giving giving the sort of the brain itself instead of the application we're not giving the",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1536]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.8333 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8333 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8333 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.933** |
| cosine_mrr@10 | 0.9097 |
| cosine_map@100 | 0.9097 |
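
For reference, these metrics come from the `InformationRetrievalEvaluator`; a minimal sketch of how such an evaluation can be set up (the query/corpus dictionaries below are toy placeholders, not the actual evaluation data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kenrogers/gte-ft-yt")

# Toy placeholder data: query ids -> text, corpus ids -> text, query ids -> relevant corpus ids
queries = {"q1": "What is reasoning in latent space?"}
corpus = {"c1": "latent space is about using a representation of our data ..."}
relevant_docs = {"q1": {"c1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
print(evaluator(model))  # dict of accuracy/precision/recall/NDCG/MRR/MAP at various cutoffs
```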
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 84 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 84 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 41.21 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 180 tokens</li><li>mean: 208.05 tokens</li><li>max: 231 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> |
| <code>1. What are the two big ideas aimed at scaling the power of LLMs during inference mentioned in the context? <br>2. How does the concept of reasoning in latent space relate to the efficiency of computation during inference?</code> | <code>okay whiz we're talking about reasoning in latent space today is that the same as test time compute yeah that's right nice nice okay and we've got two big ideas to cover that are aimed at scaling the power of llms during inference is that right that yeah that's right so we have we have two you know latent space methods uh we have our continuous Chain of Thought or coconut right and then we have our more more directly more uh you know uh budget forcing recurrent depth uh model yes man that's a lot so when we look across both of those there appears to be a pretty simple explanation it's almost like uh you know when we when we're in that sort of thinking space of computation we don't have to do the thinky thinky in words and that's better maybe even it will allow us to find a new scaling axis is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not</code> |
| <code>1. What is the significance of staying in the "mind Palace" of the Transformer according to the context?<br>2. What are the main topics that will be covered in the demos mentioned in the context?</code> | <code>is that right yeah that's exactly right I mean the idea is that we have this uh you know we we have this way of taking advantage of of uh the most powerful thinking space in the Transformer and not just like for a second right not automatically resolving back to token space but kind of staying in this very like uh you know in in the mind Palace of the of the Transformer without having to write down the words yes okay okay okay so basically scaling is dead Long Live scaling something like that yeah scaling has died uh we should scale yeah all right all right all right well I'm pumped for the demos today we're going to see some thinking in latent space let's cover all the Concepts we need to get there we'll get you back in for some discussions along the way because this one's pretty meta thanks whiz all right guys we are gonna rock out on large reasoning models today while we were originally going to just cover chain of continuous thought or coconut we saw a paper come out a couple</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
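
The same loss stack can be reconstructed directly with sentence-transformers; a minimal sketch (dimension list taken from the parameters above, base model id from this card; `trust_remote_code=True` may be needed since the repo is tagged with custom code):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)

# Wrap the ranking loss so the same objective is applied at several truncated embedding sizes
inner_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```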
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:------:|:----:|:--------------:|
| 1.0 | 9 | 0.8744 |
| 2.0 | 18 | 0.9251 |
| 3.0 | 27 | 0.9301 |
| 4.0 | 36 | 0.9253 |
| 5.0 | 45 | 0.9177 |
| 5.5556 | 50 | 0.9330 |
| 6.0 | 54 | 0.9330 |
| 7.0 | 63 | 0.9330 |
| 8.0 | 72 | 0.9330 |
| 9.0 | 81 | 0.9330 |
| 10.0 | 90 | 0.9330 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
bchadney/sarah1 | bchadney | "2025-04-09T22:30:49Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-09T22:16:44Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sarah1
---
# Sarah1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sarah1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sarah1",
"lora_weights": "https://huggingface.co/bchadney/sarah1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bchadney/sarah1', weight_name='lora.safetensors')
image = pipeline('sarah1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/bchadney/sarah1/discussions) to add images that show off what you’ve made with this LoRA.
|
Jiayuan32/a2c-PandaReachDense-v3 | Jiayuan32 | "2024-01-04T04:24:26Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-03T08:23:54Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.14 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename below follows the usual SB3 Hub naming convention and is an assumption):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo (filename assumed from the standard SB3 Hub convention)
checkpoint = load_from_hub("Jiayuan32/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
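
A minimal evaluation rollout, assuming `gymnasium` and `panda-gym` are installed (it reuses the `model` loaded above):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 - importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```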
|
ruffy369/iris-alien | ruffy369 | "2024-07-25T04:58:02Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"iris",
"Atari 100k",
"Atari",
"Alien",
"reinforcement-learning",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2024-05-28T07:03:15Z" | ---
license: gpl-3.0
pipeline_tag: reinforcement-learning
tags:
- iris
- Atari 100k
- Atari
- Alien
--- |
skirwan27/bert-finetuned-ner | skirwan27 | "2025-02-26T21:00:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-02-14T14:55:20Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9346675487925902
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9427761094427761
- name: Accuracy
type: accuracy
value: 0.9867987284393949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0574
- Precision: 0.9347
- Recall: 0.9510
- F1: 0.9428
- Accuracy: 0.9868
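
For a quick sanity check, inference can be run with the token-classification pipeline (repo id taken from this card; the example sentence and aggregation strategy are illustrative choices):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="skirwan27/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```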
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0739 | 1.0 | 1756 | 0.0691 | 0.8998 | 0.9323 | 0.9158 | 0.9807 |
| 0.0323 | 2.0 | 3512 | 0.0633 | 0.9308 | 0.9445 | 0.9376 | 0.9856 |
| 0.022 | 3.0 | 5268 | 0.0574 | 0.9347 | 0.9510 | 0.9428 | 0.9868 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
tutuhu/style4 | tutuhu | "2024-04-25T16:59:20Z" | 34 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-25T14:39:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nohilwan/foodGPT | nohilwan | "2024-09-25T06:59:24Z" | 8 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | "2024-09-25T06:59:06Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: foodGPT
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5972222089767456
---
# foodGPT
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
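
A minimal inference sketch with the image-classification pipeline (repo id taken from this card; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nohilwan/foodGPT")
print(classifier("path/to/your_food_photo.jpg"))  # top predicted cuisine labels with scores
```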
## Example Images
#### American food

#### Chinese food

#### European food

#### Japanese food

#### korean food
 |
ILT37/en_to_vi_translation | ILT37 | "2024-06-04T15:29:29Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"vi",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-04T14:49:47Z" | ---
language:
- vi
- en
metrics:
- bleu
---
State-of-the-art English-Vietnamese and Vietnamese-English Translation models trained on [MTet](https://research.vietai.org/mtet/), [PhoMT](https://github.com/VinAIResearch/PhoMT).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "ILT37/en_to_vo_translation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
inputs = [
"vi: Theo báo cáo mới nhất của Linkedin về danh sách việc làm triển vọng với mức lương hấp dẫn năm 2020, các chức danh công việc liên quan đến AI như Chuyên gia AI (Artificial Intelligence Specialist), Kỹ sư ML (Machine Learning Engineer) đều xếp thứ hạng cao.",
"en: I go to school",
"en: ... is girlfriend of me"
]
outputs = model.generate(tokenizer(inputs, return_tensors="pt", padding=True).input_ids.to('cuda'), max_length=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
``` |
Kuaaangwen/bert-base-cased-finetuned-wikitext2 | Kuaaangwen | "2022-12-08T15:34:38Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-12-08T14:54:55Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6212
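
Since this checkpoint was trained with a masked-language-modeling objective, a quick fill-mask sketch (repo id taken from this card; the example sentence is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Kuaaangwen/bert-base-cased-finetuned-wikitext2")
print(fill_mask("The capital of France is [MASK]."))  # top token predictions for the masked slot
```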
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8335 | 1.0 | 2393 | 1.7164 |
| 1.738 | 2.0 | 4786 | 1.6589 |
| 1.7029 | 3.0 | 7179 | 1.6216 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
TeeZee/Phi-3-mini-4k-instruct-LASER | TeeZee | "2024-05-09T11:22:17Z" | 135 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-09T10:37:53Z" | ---
license: mit
---
### Phi-3-mini-4k-instruct-LASER
- model used: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- LASER scripts used: rmt_laser.py, rmt_laser_snr.py, rmt_laser_snr_math.py from [laserRMT](https://github.com/cognitivecomputations/laserRMT)
### Results
- perplexity is reduced compared to the base model; waiting for HF eval results - due to trust_remote_code=True, it won't happen soon. |
RichardErkhov/Qwen_-_Qwen1.5-32B-4bits | RichardErkhov | "2024-04-30T22:54:57Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2309.16609",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-30T22:19:33Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1.5-32B - bnb 4bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen1.5-32B/
Original model description:
---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen1.5-32B
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention.
## Requirements
The code of Qwen1.5 has been in the latest Hugging Face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
blademax45/DeepSeek-zenchat-lora | blademax45 | "2025-02-24T14:37:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-02-24T14:36:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leom21/layoutlm-funsd | leom21 | "2024-05-01T12:27:46Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-05-01T11:56:57Z" | ---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6771
- Answer: {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809}
- Header: {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119}
- Question: {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065}
- Overall Precision: 0.7216
- Overall Recall: 0.7893
- Overall F1: 0.7539
- Overall Accuracy: 0.8139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8027 | 1.0 | 10 | 1.5884 | {'precision': 0.01997780244173141, 'recall': 0.022249690976514216, 'f1': 0.02105263157894737, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18858307849133538, 'recall': 0.17370892018779344, 'f1': 0.18084066471163246, 'number': 1065} | 0.1079 | 0.1019 | 0.1048 | 0.3753 |
| 1.4071 | 2.0 | 20 | 1.2076 | {'precision': 0.23890339425587467, 'recall': 0.22620519159456118, 'f1': 0.23238095238095238, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.41302791696492486, 'recall': 0.5417840375586854, 'f1': 0.4687246141348498, 'number': 1065} | 0.3512 | 0.3813 | 0.3656 | 0.5772 |
| 1.0593 | 3.0 | 30 | 0.9154 | {'precision': 0.4750542299349241, 'recall': 0.5414091470951793, 'f1': 0.5060658578856152, 'number': 809} | {'precision': 0.11363636363636363, 'recall': 0.04201680672268908, 'f1': 0.06134969325153375, 'number': 119} | {'precision': 0.5922493681550126, 'recall': 0.6600938967136151, 'f1': 0.6243339253996447, 'number': 1065} | 0.5323 | 0.5750 | 0.5528 | 0.7136 |
| 0.802 | 4.0 | 40 | 0.7552 | {'precision': 0.5981404958677686, 'recall': 0.715698393077874, 'f1': 0.6516601012943164, 'number': 809} | {'precision': 0.20253164556962025, 'recall': 0.13445378151260504, 'f1': 0.1616161616161616, 'number': 119} | {'precision': 0.6680707666385847, 'recall': 0.7446009389671362, 'f1': 0.7042628774422734, 'number': 1065} | 0.6213 | 0.6964 | 0.6567 | 0.7659 |
| 0.6561 | 5.0 | 50 | 0.7030 | {'precision': 0.6381856540084389, 'recall': 0.7478368355995055, 'f1': 0.6886738759248718, 'number': 809} | {'precision': 0.3, 'recall': 0.226890756302521, 'f1': 0.25837320574162675, 'number': 119} | {'precision': 0.6780766096169519, 'recall': 0.7812206572769953, 'f1': 0.7260034904013962, 'number': 1065} | 0.6464 | 0.7346 | 0.6876 | 0.7889 |
| 0.5591 | 6.0 | 60 | 0.6842 | {'precision': 0.6502100840336135, 'recall': 0.765142150803461, 'f1': 0.7030096536059057, 'number': 809} | {'precision': 0.3132530120481928, 'recall': 0.2184873949579832, 'f1': 0.25742574257425743, 'number': 119} | {'precision': 0.7165820642978004, 'recall': 0.7953051643192488, 'f1': 0.7538940809968847, 'number': 1065} | 0.6730 | 0.7486 | 0.7088 | 0.7942 |
| 0.4858 | 7.0 | 70 | 0.6508 | {'precision': 0.6569948186528497, 'recall': 0.7836835599505563, 'f1': 0.7147688838782412, 'number': 809} | {'precision': 0.34210526315789475, 'recall': 0.3277310924369748, 'f1': 0.33476394849785407, 'number': 119} | {'precision': 0.7205503009458297, 'recall': 0.7868544600938967, 'f1': 0.7522441651705565, 'number': 1065} | 0.6740 | 0.7582 | 0.7136 | 0.8063 |
| 0.431 | 8.0 | 80 | 0.6674 | {'precision': 0.6578140960163432, 'recall': 0.796044499381953, 'f1': 0.7203579418344519, 'number': 809} | {'precision': 0.35964912280701755, 'recall': 0.3445378151260504, 'f1': 0.351931330472103, 'number': 119} | {'precision': 0.7482517482517482, 'recall': 0.8037558685446009, 'f1': 0.775011317338162, 'number': 1065} | 0.6889 | 0.7732 | 0.7286 | 0.7969 |
| 0.3878 | 9.0 | 90 | 0.6526 | {'precision': 0.6787564766839378, 'recall': 0.8096415327564895, 'f1': 0.7384441939120632, 'number': 809} | {'precision': 0.336283185840708, 'recall': 0.31932773109243695, 'f1': 0.32758620689655166, 'number': 119} | {'precision': 0.7586206896551724, 'recall': 0.7849765258215963, 'f1': 0.7715736040609138, 'number': 1065} | 0.7014 | 0.7672 | 0.7328 | 0.8073 |
| 0.3744 | 10.0 | 100 | 0.6519 | {'precision': 0.6854410201912858, 'recall': 0.7972805933250927, 'f1': 0.7371428571428571, 'number': 809} | {'precision': 0.3130434782608696, 'recall': 0.3025210084033613, 'f1': 0.3076923076923077, 'number': 119} | {'precision': 0.7611940298507462, 'recall': 0.8140845070422535, 'f1': 0.7867513611615246, 'number': 1065} | 0.7052 | 0.7767 | 0.7393 | 0.8120 |
| 0.3161 | 11.0 | 110 | 0.6696 | {'precision': 0.6948257655755016, 'recall': 0.8133498145859085, 'f1': 0.7494305239179954, 'number': 809} | {'precision': 0.3283582089552239, 'recall': 0.3697478991596639, 'f1': 0.34782608695652173, 'number': 119} | {'precision': 0.7604166666666666, 'recall': 0.8225352112676056, 'f1': 0.7902571041948578, 'number': 1065} | 0.7067 | 0.7918 | 0.7468 | 0.8060 |
| 0.3039 | 12.0 | 120 | 0.6656 | {'precision': 0.7007534983853606, 'recall': 0.8046971569839307, 'f1': 0.7491369390103566, 'number': 809} | {'precision': 0.3524590163934426, 'recall': 0.36134453781512604, 'f1': 0.35684647302904565, 'number': 119} | {'precision': 0.7695769576957696, 'recall': 0.8028169014084507, 'f1': 0.7858455882352942, 'number': 1065} | 0.7165 | 0.7772 | 0.7456 | 0.8131 |
| 0.2877 | 13.0 | 130 | 0.6742 | {'precision': 0.6927138331573389, 'recall': 0.8108776266996292, 'f1': 0.7471526195899771, 'number': 809} | {'precision': 0.32592592592592595, 'recall': 0.3697478991596639, 'f1': 0.3464566929133859, 'number': 119} | {'precision': 0.7651715039577837, 'recall': 0.8169014084507042, 'f1': 0.7901907356948229, 'number': 1065} | 0.7075 | 0.7878 | 0.7455 | 0.8109 |
| 0.2681 | 14.0 | 140 | 0.6743 | {'precision': 0.7128927410617552, 'recall': 0.8133498145859085, 'f1': 0.7598152424942264, 'number': 809} | {'precision': 0.36220472440944884, 'recall': 0.3865546218487395, 'f1': 0.37398373983739847, 'number': 119} | {'precision': 0.7734513274336283, 'recall': 0.8206572769953052, 'f1': 0.7963553530751709, 'number': 1065} | 0.7239 | 0.7918 | 0.7563 | 0.8148 |
| 0.2609 | 15.0 | 150 | 0.6771 | {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809} | {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119} | {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065} | 0.7216 | 0.7893 | 0.7539 | 0.8139 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF | mradermacher | "2025-04-10T21:54:25Z" | 2,107 | 4 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama4",
"ar",
"de",
"en",
"es",
"fr",
"hi",
"id",
"it",
"pt",
"th",
"tl",
"vi",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:quantized:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-08T16:18:39Z" | ---
base_model: meta-llama/Llama-4-Scout-17B-16E-Instruct
extra_gated_button_content: Submit
extra_gated_fields:
Affiliation: text
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
Country: country
Date of birth: date_picker
First Name: text
Job title:
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
type: select
Last Name: text
geo: ip_location
extra_gated_heading: Please be sure to provide your full legal name, date of birth,
and full organization name with all corporate identifiers. Avoid the use of acronyms
and special characters. Failure to follow these instructions may prevent you from
accessing this model and others on Hugging Face. You will not have the ability to
edit this form after submission, so please ensure all information is accurate.
extra_gated_prompt: |-
**LLAMA 4 COMMUNITY LICENSE AGREEMENT**
Llama 4 Version Effective Date: April 5, 2025
"**Agreement**" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
"**Documentation**" means the specifications, manuals and documentation accompanying Llama 4 distributed by Meta at [https://www.llama.com/docs/overview](https://llama.com/docs/overview).
"**Licensee**" or "**you**" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
"**Llama 4**" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads).
"**Llama Materials**" means, collectively, Meta’s proprietary Llama 4 and Documentation (and any portion thereof) made available under this Agreement.
"**Meta**" or "**we**" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1\. **License Rights and Redistribution**.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display "Built with Llama" on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include "Llama" at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 4 is licensed under the Llama 4 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at [https://www.llama.com/llama4/use-policy](https://www.llama.com/llama4/use-policy)), which is hereby incorporated by reference into this Agreement. 2\. **Additional Commercial Terms**. If, on the Llama 4 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3**. Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4\. **Limitation of Liability**. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5\. **Intellectual Property**.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use "Llama" (the "Mark") solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)[)](https://en.facebookbrand.com/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 4 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6\. **Term and Termination**. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7\. **Governing Law and Jurisdiction**. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
language:
- ar
- de
- en
- es
- fr
- hi
- id
- it
- pt
- th
- tl
- vi
library_name: transformers
license: other
license_name: llama4
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama4
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
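As a concrete sketch of the multi-part case (part names are taken from the quant table below; the local output filename is an assumption), the pieces are plain byte-wise splits and can be rejoined like this:

```python
# Hedged sketch: fetch both pieces of a split quant and join them into a single .gguf file.
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF"
parts = [
    "Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_M.gguf.part1of2",
    "Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_M.gguf.part2of2",
]
with open("Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_M.gguf", "wb") as out:
    for name in parts:
        with open(hf_hub_download(repo_id=repo, filename=name), "rb") as src:
            shutil.copyfileobj(src, out)  # simple byte-wise concatenation of the parts
```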
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 22.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 24.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 32.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 37.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 39.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 41.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 46.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 46.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 47.6 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 51.9 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 56.1 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 57.7 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 61.3 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 61.6 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 65.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 67.7 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 74.4 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 76.6 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-4-Scout-17B-16E-Instruct-i1-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 88.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
EleutherAI/pythia-2.8b-sentiment-first-ft | EleutherAI | "2024-03-22T18:23:30Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-16T01:46:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kk-aivio/200f4114-0fca-4b74-b365-f81ac9f59a76 | kk-aivio | "2025-02-04T05:56:36Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"region:us"
] | null | "2025-02-04T05:31:29Z" | ---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 200f4114-0fca-4b74-b365-f81ac9f59a76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ad9a336907b8ae34_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ad9a336907b8ae34_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/200f4114-0fca-4b74-b365-f81ac9f59a76
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ad9a336907b8ae34_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ae55a37-53c0-49da-ae27-90302c180793
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ae55a37-53c0-49da-ae27-90302c180793
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 200f4114-0fca-4b74-b365-f81ac9f59a76
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 9.0062 | 0.0085 | 50 | nan |
| 46.0364 | 0.0169 | 100 | nan |
| 148.0058 | 0.0254 | 150 | nan |
| 72.2216 | 0.0338 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
butlermasango01/fine_tuned_model | butlermasango01 | "2025-02-20T15:26:47Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | "2025-02-20T15:26:44Z" | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: fine_tuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_model
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0 |
mradermacher/phi-2-pruned50-GGUF | mradermacher | "2025-04-10T05:02:20Z" | 121 | 0 | transformers | [
"transformers",
"gguf",
"nm-vllm",
"sparse",
"en",
"base_model:RedHatAI/phi-2-pruned50",
"base_model:quantized:RedHatAI/phi-2-pruned50",
"endpoints_compatible",
"region:us"
] | null | "2024-12-18T12:56:11Z" | ---
base_model: RedHatAI/phi-2-pruned50
language:
- en
library_name: transformers
model_type: phi
quantized_by: mradermacher
tags:
- nm-vllm
- sparse
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RedHatAI/phi-2-pruned50
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
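For a quick start, a hedged sketch using llama-cpp-python (one possible GGUF runtime; any llama.cpp-based tool works the same way; the quant filename comes from the table below and the prompt is illustrative):

```python
# Hedged sketch: download one quant from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/phi-2-pruned50-GGUF",
    filename="phi-2-pruned50.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Instruct: Explain weight sparsity in one sentence.\nOutput:", max_tokens=64)
print(out["choices"][0]["text"])
```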
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/phi-2-pruned50-GGUF/resolve/main/phi-2-pruned50.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sn56t0/5d2b6af7-80f7-4717-a468-cd83889305f9 | sn56t0 | "2025-02-07T12:55:12Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"region:us"
] | null | "2025-02-07T12:31:03Z" | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5d2b6af7-80f7-4717-a468-cd83889305f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- a6dae33cde59515e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a6dae33cde59515e_train_data.json
type:
field_input: selftext
field_instruction: title
field_output: answers.text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: sn56t0/5d2b6af7-80f7-4717-a468-cd83889305f9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 9.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a6dae33cde59515e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 930273260
sequence_len: 1024
shuffle: true
strict: false
tf32: true
tokenizer_type: AutoTokenizer
torch_compile: true
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: sn56-miner
wandb_mode: disabled
wandb_name: null
wandb_project: god
wandb_run: 1vm2
wandb_runid: null
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5d2b6af7-80f7-4717-a468-cd83889305f9
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7353
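Since this repo holds LoRA adapter weights, a minimal usage sketch (assuming `peft`, `transformers`, and `accelerate` are installed; the adapter is attached to the listed base model) would be:

```python
# Hedged sketch: attach this LoRA adapter to its base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "furiosa-ai/mlperf-gpt-j-6b"
adapter_id = "sn56t0/5d2b6af7-80f7-4717-a468-cd83889305f9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```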
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 930273260
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 15.0156 | 0.0010 | 1 | 3.9155 |
| 11.1719 | 0.0513 | 50 | 2.8241 |
| 11.1992 | 0.1026 | 100 | 2.7627 |
| 11.0742 | 0.1540 | 150 | 2.7415 |
| 11.0703 | 0.2053 | 200 | 2.7353 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Elfrino/CitrineCircuit-20B | Elfrino | "2025-02-08T09:33:13Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2408.07990",
"base_model:Undi95/MXLewd-L2-20B",
"base_model:merge:Undi95/MXLewd-L2-20B",
"base_model:Undi95/PsyMedRP-v1-20B",
"base_model:merge:Undi95/PsyMedRP-v1-20B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-08T08:13:50Z" | ---
base_model:
- Undi95/MXLewd-L2-20B
- Undi95/PsyMedRP-v1-20B
library_name: transformers
tags:
- mergekit
- merge
---
***In Testing...***

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B) as the base model.
### Models Merged
The following models were included in the merge:
* [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
- model: Undi95/MXLewd-L2-20B
- model: Undi95/PsyMedRP-v1-20B
base_model: Undi95/PsyMedRP-v1-20B
tokenizer:
source: base
parameters:
select_topk: 0.1
dtype: bfloat16
```
|
Xu-Ouyang/pythia-410m-deduped-int2-step14000-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-13T08:54:38Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-13T08:54:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hcy11/distilbert-base-uncased-finetuned-squad | hcy11 | "2022-03-02T20:32:33Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2131
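As the sections below are otherwise empty, here is a minimal usage sketch (not part of the original card): it assumes the standard 🤗 `pipeline` API for extractive question answering, and the question/context strings are placeholders.
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for extractive question answering.
qa = pipeline("question-answering", model="hcy11/distilbert-base-uncased-finetuned-squad")

# Placeholder inputs; any SQuAD-style question/context pair uses the same call.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```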
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2672 | 1.0 | 5533 | 1.2131 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
isspek/roberta-base_monkeypox_gpt4o_2_2e-5_16_undersampling_0.5 | isspek | "2025-03-23T12:42:29Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-26T14:13:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
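Since this section is left empty, the following is only a hypothetical sketch based on the repository tags (RoBERTa, text-classification); the example input is invented and the label set the pipeline returns is not documented here.
```python
from transformers import pipeline

# Hypothetical sketch: the repo is tagged as a RoBERTa text-classification checkpoint.
clf = pipeline(
    "text-classification",
    model="isspek/roberta-base_monkeypox_gpt4o_2_2e-5_16_undersampling_0.5",
)

# Placeholder input; the actual label meanings are not documented in this card.
print(clf("Example claim about the monkeypox outbreak."))
```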
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/deepseek_medical-i1-GGUF | mradermacher | "2025-03-17T02:10:39Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:PAUL1122/deepseek_medical",
"base_model:quantized:PAUL1122/deepseek_medical",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-17T01:26:08Z" | ---
base_model: PAUL1122/deepseek_medical
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/PAUL1122/deepseek_medical
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/deepseek_medical-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
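For example (a minimal sketch following the same llama.cpp invocation pattern used for other GGUF repos; the prompt is a placeholder), a single quant from the table below can be run straight from the Hub:
```bash
# Minimal sketch: run the recommended Q4_K_M imatrix quant with llama.cpp.
llama-cli --hf-repo mradermacher/deepseek_medical-i1-GGUF \
  --hf-file deepseek_medical.i1-Q4_K_M.gguf \
  -p "Example prompt"
```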
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek_medical-i1-GGUF/resolve/main/deepseek_medical.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ChenMnZ/Llama-3-70b-instruct-BlockAP-w4g128 | ChenMnZ | "2024-07-21T09:41:46Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2407.11062",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-21T07:18:52Z" | # Block-AP (EfficientQAT w/o E2E-QP)
[EfficientQAT](https://arxiv.org/abs/2407.11062) involves two consecutive training phases: Block-wise training of all parameters (Block-AP) and end-to-end training of quantization parameters (E2E-QP).
In this repo, we provide the quantized checkpoints produced by Block-AP. Anyone can use them to reproduce our results or to build on them in follow-up research.
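For example (a minimal, hypothetical sketch; it only fetches the files, while training and inference follow the GitHub instructions linked in the Usage section below):
```python
from huggingface_hub import snapshot_download

# Minimal sketch: download this Block-AP checkpoint locally, then follow the
# EfficientQAT repo's E2E-QP / inference instructions (see Usage below).
local_dir = snapshot_download("ChenMnZ/Llama-3-70b-instruct-BlockAP-w4g128")
print(local_dir)
```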
## Performance
| Model | Quantization | WikiText2 PPL | Avg. Accuracy | Model Size (GB) | Hub link|
|-------|--------------|---------------|---------------|-----------------|----------|
Llama-2-7B|fp16|5.47|64.86|13.2|-|
Llama-2-7B|w4g128|5.56|64.07|3.7|[Link](https://huggingface.co/ChenMnZ/Llama-2-7b-BlockAd-w4g128)|
Llama-2-7B|w3g128|5.89|63.96|3.1|[Link](https://huggingface.co/ChenMnZ/Llama-2-7b-BlockAP-w3g128)|
Llama-2-7B|w2g64|7.65|59.54|2.3|[Link](https://huggingface.co/ChenMnZ/Llama-2-7b-BlockAP-w2g64)|
Llama-2-7B|w2g128|7.94|58.72|2.2|[Link](https://huggingface.co/ChenMnZ/Llama-2-7b-BlockAP-w2g128)|
Llama-2-13B|fp16|4.88|67.81|25.4|-|
Llama-2-13B|w4g128|4.96|67.27|6.8|[Link](https://huggingface.co/ChenMnZ/Llama-2-13b-BlockAP-w4g128)|
Llama-2-13B|w3g128|5.20|67.30|5.6|[Link](https://huggingface.co/ChenMnZ/Llama-2-13b-BlockAP-w3g128)|
Llama-2-13B|w2g64|6.55|63.10|4.0|[Link](https://huggingface.co/ChenMnZ/Llama-2-13b-BlockAP-w2g64)|
Llama-2-13B|w2g128|6.68|63.49|3.8|[Link](https://huggingface.co/ChenMnZ/Llama-2-13b-BlockAP-w2g128)|
Llama-2-70B|fp16|3.32|72.41|131.6|-|
Llama-2-70B|w4g128|3.41|72.54|35.8|[Link](https://huggingface.co/ChenMnZ/Llama-2-70b-BlockAP-w4g128)|
Llama-2-70B|w3g128|3.65|71.88|29.1|[Link](https://huggingface.co/ChenMnZ/Llama-2-70b-BlockAP-w3g128)|
Llama-2-70B|w2g64|4.96|69.44|20.1|[Link](https://huggingface.co/ChenMnZ/Llama-2-70b-BlockAP-w2g64)|
Llama-2-70B|w2g128|5.26|68.73|18.9|[Link](https://huggingface.co/ChenMnZ/Llama-2-70b-BlockAP-w2g128)|
Llama-3-8B|fp16|6.14|68.58|13.0|-|
Llama-3-8B|w4g128|6.50|68.43|5.4|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-BlockAP-w4g128)|
Llama-3-8B|w3g128|7.34|66.72|4.7|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-BlockAP-w3g128)|
Llama-3-8B|w2g64|12.47|58.65|3.9|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-BlockAP-w2g64)|
Llama-3-8B|w2g128|13.25|58.23|3.8|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-BlockAP-w2g128)|
Llama-3-70B|fp16|2.85|75.33|137.8|-|
Llama-3-70B|w4g128|3.18|74.50|38.9|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-BlockAP-w4g128)|
Llama-3-70B|w3g128|4.88|71.90|32.2|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-BlockAP-w3g128)|
Llama-3-70B|w2g64|13.75|66.70|23.2|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-BlockAP-w2g64)|
Llama-3-70B|w2g128|16.79|65.06|22.0|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-BlockAP-w2g128)|
Llama-3-8B-Instruct|fp16|8.29|68.43|13.0|-|
Llama-3-8B-Instruct|w4g128|8.76|67.80|5.4|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-BlockAP-w4g128)|
Llama-3-8B-Instruct|w3g128|9.83|66.54|4.7|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-BlockAP-w3g128)|
Llama-3-8B-Instruct|w2g64|16.77|58.62|3.9|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-BlockAP-w2g64)|
Llama-3-8B-Instruct|w2g128|18.02|57.19|3.8|[Link](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-BlockAP-w2g128)|
Llama-3-70B-Instruct|fp16|5.33|73.78|137.8|-|
Llama-3-70B-Instruct|w4g128|5.77|73.52|38.9|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-BlockAP-w4g128)|
Llama-3-70B-Instruct|w3g128|7.25|69.80|32.2|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-BlockAP-w3g128)|
Llama-3-70B-Instruct|w2g64|12.48|65.60|23.2|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-BlockAP-w2g64)|
Llama-3-70B-Instruct|w2g128|13.48|61.75|22.0|[Link](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-BlockAP-w2g128)|
## Usage
Please refer to [https://github.com/OpenGVLab/EfficientQAT](https://github.com/OpenGVLab/EfficientQAT) for details. These checkpoints can be used for the [subsequent E2E-QP stage](https://github.com/OpenGVLab/EfficientQAT?tab=readme-ov-file#training), or directly for [inference](https://github.com/OpenGVLab/EfficientQAT?tab=readme-ov-file#inference). |
tkzang/Mistral-7B-v0.3-finetune | tkzang | "2024-12-19T12:33:47Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-19T12:26:29Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
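Since this section is left empty, the following is only a hypothetical sketch based on the repository tags (Mistral, text-generation): the prompt and generation settings are placeholders, and the training prompt format is not documented here.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: load the checkpoint as a causal LM (repo is tagged text-generation).
model_id = "tkzang/Mistral-7B-v0.3-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt and arbitrary generation settings.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```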
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shubhchaturvedi/shubh_1 | shubhchaturvedi | "2024-05-05T02:14:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-05T02:14:57Z" | ---
license: apache-2.0
---
|
damgomz/ft_32_3e6_base_x8 | damgomz | "2024-06-24T04:08:43Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-23T10:50:37Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 64577.61004495621 |
| Emissions (Co2eq in kg) | 0.0390769031568386 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7623729121769484 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0672677144942184 |
| Consumed energy (kWh) | 0.8296406266711681 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12431189933654069 |
| Emissions (Co2eq in kg) | 0.025292897267607844 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_3e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 3e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.732783 | 0.331379 |
| 1 | 0.400023 | 0.305358 | 0.884755 |
| 2 | 0.267000 | 0.251737 | 0.897428 |
| 3 | 0.220699 | 0.240164 | 0.918675 |
| 4 | 0.186956 | 0.222951 | 0.916962 |
| 5 | 0.163473 | 0.257377 | 0.918532 |
| 6 | 0.144596 | 0.238047 | 0.902378 |
|
dbalasub/finalcheck-ensem-qa | dbalasub | "2024-05-15T18:54:43Z" | 115 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-12T17:38:36Z" | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
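Since this section is left empty, the following is only a hypothetical sketch based on the repository tags (T5, text2text-generation); the input format the model actually expects is not documented here.
```python
from transformers import pipeline

# Hypothetical sketch: the repo is tagged as a T5 text2text-generation checkpoint.
generator = pipeline("text2text-generation", model="dbalasub/finalcheck-ensem-qa")

# Placeholder QA-style input; the expected prompt format is not documented in this card.
print(generator("question: What is the capital of France? context: Paris is the capital of France."))
```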
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidschulte/ESM_imppres_presupposition_change_of_state | davidschulte | "2025-03-28T13:20:07Z" | 25 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:facebook/imppres",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T15:28:01Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- facebook/imppres
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM facebook/imppres
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** facebook/imppres
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** facebook/imppres
- **Subset [optional]:** presupposition_change_of_state
- **Text Column:** ['premise', 'hypothesis']
- **Label Column:** gold_label
- **Dataset Split:** change_of_state
- **Sample size [optional]:** 1900
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional trainiung details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how much a language model fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
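As a rough illustration (a hypothetical sketch, not the package's actual implementation), the ESM described in this card is a linear map over 768-dimensional embeddings, so its core operation can be pictured as:
```python
import torch
import torch.nn as nn

# Hypothetical sketch of a *linear* ESM over 768-dim embeddings (architecture and
# dimension as listed in the Model Description above): it maps base-model embeddings
# to approximations of the fine-tuned model's embeddings.
esm = nn.Linear(768, 768)

base_embeddings = torch.randn(4, 768)   # embeddings from bert-base-multilingual-uncased
approx_tuned = esm(base_embeddings)     # approximation of fine-tuned embeddings
print(approx_tuned.shape)               # torch.Size([4, 768])
```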
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
nat-hunt/704e0391-0700-462f-bc09-a647bc364757 | nat-hunt | "2025-01-11T12:53:27Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-11T12:47:56Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Genstruct-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 704e0391-0700-462f-bc09-a647bc364757
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Genstruct-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cc7d20aa77ac7b7f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cc7d20aa77ac7b7f_train_data.json
type:
field_input: airline
field_instruction: review
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/704e0391-0700-462f-bc09-a647bc364757
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/cc7d20aa77ac7b7f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 563ec364-22a9-4c70-95b8-421abd33c058
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 563ec364-22a9-4c70-95b8-421abd33c058
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 704e0391-0700-462f-bc09-a647bc364757
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8100
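A minimal usage sketch (not part of the generated card): it assumes the standard PEFT loading path, and the prompt is a placeholder — see the axolotl config above for the training prompt format.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: attach the LoRA adapter to its base model (both named in this card).
base = AutoModelForCausalLM.from_pretrained("NousResearch/Genstruct-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "nat-hunt/704e0391-0700-462f-bc09-a647bc364757")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Genstruct-7B")

# Placeholder prompt.
inputs = tokenizer("Write a short review title:", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```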
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.7588 | 0.0005 | 1 | 4.2187 |
| 18.8408 | 0.0016 | 3 | 4.1506 |
| 13.5545 | 0.0033 | 6 | 3.4923 |
| 13.5721 | 0.0049 | 9 | 2.8100 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phongtintruong/meomeo-mhubert-vietbud-21221447-120 | phongtintruong | "2025-02-12T14:48:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"meomeo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-12T14:47:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tahirmuhammadcs/multi-ner | tahirmuhammadcs | "2024-05-24T13:15:10Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-24T13:15:10Z" | ---
license: apache-2.0
---
|
JennnDexter/Segments | JennnDexter | "2023-09-03T05:37:24Z" | 36 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2023-08-24T07:24:12Z" | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: Segments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Segments
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
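A minimal inference sketch (not part of the generated card): it loads the image processor from the base checkpoint, since this repository may not ship its own preprocessing config, and the image path is a placeholder.
```python
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Minimal sketch: semantic segmentation with the fine-tuned SegFormer checkpoint.
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")  # base-model processor (assumption)
model = SegformerForSemanticSegmentation.from_pretrained("JennnDexter/Segments")

image = Image.open("scene.jpg")                       # placeholder path to any RGB image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits                       # (1, num_labels, height/4, width/4)
print(logits.argmax(dim=1).shape)                     # per-pixel class indices
```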
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Triangle104/AwA-0.5B-Q4_K_M-GGUF | Triangle104 | "2025-01-03T21:29:46Z" | 9 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Spestly/AwA-0.5B",
"base_model:quantized:Spestly/AwA-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-29T12:56:12Z" | ---
base_model: Spestly/AwA-0.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
library_name: transformers
---
# Triangle104/AwA-0.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Spestly/AwA-0.5B`](https://huggingface.co/Spestly/AwA-0.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/AwA-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AwA-0.5B-Q4_K_M-GGUF --hf-file awa-0.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AwA-0.5B-Q4_K_M-GGUF --hf-file awa-0.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AwA-0.5B-Q4_K_M-GGUF --hf-file awa-0.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AwA-0.5B-Q4_K_M-GGUF --hf-file awa-0.5b-q4_k_m.gguf -c 2048
```
|
tensorblock/COKALL-13B-v3-GGUF | tensorblock | "2024-12-30T16:03:46Z" | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:DopeorNope/COKALL-13B-v3",
"base_model:quantized:DopeorNope/COKALL-13B-v3",
"endpoints_compatible",
"region:us"
] | null | "2024-12-30T14:54:45Z" | ---
base_model: DopeorNope/COKALL-13B-v3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## DopeorNope/COKALL-13B-v3 - GGUF
This repo contains GGUF format model files for [DopeorNope/COKALL-13B-v3](https://huggingface.co/DopeorNope/COKALL-13B-v3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [COKALL-13B-v3-Q2_K.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q2_K.gguf) | Q2_K | 4.939 GB | smallest, significant quality loss - not recommended for most purposes |
| [COKALL-13B-v3-Q3_K_S.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q3_K_S.gguf) | Q3_K_S | 5.751 GB | very small, high quality loss |
| [COKALL-13B-v3-Q3_K_M.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q3_K_M.gguf) | Q3_K_M | 6.430 GB | very small, high quality loss |
| [COKALL-13B-v3-Q3_K_L.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q3_K_L.gguf) | Q3_K_L | 7.022 GB | small, substantial quality loss |
| [COKALL-13B-v3-Q4_0.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q4_0.gguf) | Q4_0 | 7.468 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [COKALL-13B-v3-Q4_K_S.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q4_K_S.gguf) | Q4_K_S | 7.525 GB | small, greater quality loss |
| [COKALL-13B-v3-Q4_K_M.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q4_K_M.gguf) | Q4_K_M | 7.968 GB | medium, balanced quality - recommended |
| [COKALL-13B-v3-Q5_0.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q5_0.gguf) | Q5_0 | 9.083 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [COKALL-13B-v3-Q5_K_S.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q5_K_S.gguf) | Q5_K_S | 9.083 GB | large, low quality loss - recommended |
| [COKALL-13B-v3-Q5_K_M.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q5_K_M.gguf) | Q5_K_M | 9.341 GB | large, very low quality loss - recommended |
| [COKALL-13B-v3-Q6_K.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q6_K.gguf) | Q6_K | 10.800 GB | very large, extremely low quality loss |
| [COKALL-13B-v3-Q8_0.gguf](https://huggingface.co/tensorblock/COKALL-13B-v3-GGUF/blob/main/COKALL-13B-v3-Q8_0.gguf) | Q8_0 | 13.988 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/COKALL-13B-v3-GGUF --include "COKALL-13B-v3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/COKALL-13B-v3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
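Once a file has been downloaded, it can be run with any llama.cpp-compatible runtime. The snippet below is a minimal sketch using the `llama-cpp-python` bindings — an assumption, since this card only guarantees compatibility with llama.cpp itself; the file name, context size, and prompt are placeholders.
```python
# Minimal sketch (assumes `pip install llama-cpp-python`); the GGUF path below
# is a placeholder for whichever quantized file you downloaded above.
from llama_cpp import Llama

llm = Llama(model_path="MY_LOCAL_DIR/COKALL-13B-v3-Q4_K_M.gguf", n_ctx=2048)

output = llm("Explain the difference between DoS and phishing.", max_tokens=256)
print(output["choices"][0]["text"])
```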
|
leopoldh/fine_tuned_lora_model | leopoldh | "2024-11-30T13:02:58Z" | 132 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-30T12:43:13Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
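Since no official usage example is provided, the following is a generic sketch based only on this repository's tags (`transformers`, `llama`, `text-generation`, `conversational`); the prompt and generation settings are placeholders, and applying the tokenizer's chat template may be more appropriate if one is defined.
```python
# Hedged sketch: the model id comes from this repo; everything else is a generic
# text-generation recipe and may not match the authors' intended usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leopoldh/fine_tuned_lora_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```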
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danihdms/cat_dog_root_me | danihdms | "2025-01-27T15:08:22Z" | 8,469 | 1 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | "2025-01-27T15:08:11Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cat_dog_root_me
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# cat_dog_root_me
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
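This card ships no inference code; a minimal sketch using the 🤗 Transformers image-classification pipeline (the image path is a placeholder) could look like:
```python
# Minimal sketch: classify a local image with this checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="danihdms/cat_dog_root_me")
print(classifier("path/to/your_image.jpg"))  # returns labels with confidence scores
```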
## Example Images
#### cat

#### dog
 |
bofenghuang/vigogne-7b-instruct | bofenghuang | "2023-07-11T10:18:13Z" | 1,493 | 23 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"fr",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-22T21:36:45Z" | ---
license: openrail
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- LLM
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-7b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-7B-Instruct: A French Instruction-following LLaMA Model
Vigogne-7B-Instruct is a LLaMA-7B model fine-tuned to follow French instructions.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Changelog
All versions are available in branches.
- **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset.
- **V1.1**: Improved translation quality of the Stanford Alpaca dataset.
- **V2.0**: Expanded training dataset to 224k for better performance.
- **V3.0**: Further expanded training dataset to 262k for improved results.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also infer this model by using the following Google Colab Notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
Mehdikarim/ppo-Huggy | Mehdikarim | "2025-04-02T19:12:23Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2025-04-02T19:12:09Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mehdikarim/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
timm/resnet50.ra_in1k | timm | "2025-01-21T21:40:36Z" | 177 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"arxiv:2110.00476",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-05T18:13:36Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
- transformers
---
# Model card for resnet50.ra_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.6
- GMACs: 4.1
- Activations (M): 11.1
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet50.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet50.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet50.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
|
reichenbach/ppo-Huggy | reichenbach | "2024-12-29T14:03:31Z" | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-12-29T14:03:25Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: reichenbach/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
suyoungpark11/exp-31 | suyoungpark11 | "2025-03-26T07:24:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-26T07:20:10Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
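No official usage example is provided; the sketch below is inferred only from the repository tags (`mistral`, `text-generation`, 4-bit `bitsandbytes`), so loading the checkpoint as-is likely requires `bitsandbytes`, `accelerate`, and a CUDA GPU — all of this is an assumption.
```python
# Hedged sketch: generic causal-LM loading; the repo tags suggest the checkpoint
# carries a bitsandbytes 4-bit quantization config, hence device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suyoungpark11/exp-31"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```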
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kimang18/whisper-small-khmer-mlx-fp32 | Kimang18 | "2025-04-01T14:35:39Z" | 5 | 0 | mlx | [
"mlx",
"safetensors",
"whisper",
"Khmer",
"automatic-speech-recognition",
"km",
"dataset:seanghay/km-speech-corpus",
"dataset:seanghay/khmer_mwpt_speech",
"license:apache-2.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2025-03-15T00:25:09Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/Kimang18/whisper-small-khmer-mlx-fp32/be7c0ad3c0870929a00579b526818019c597ff71/README.md?%2FKimang18%2Fwhisper-small-khmer-mlx-fp32%2Fresolve%2Fmain%2FREADME.md=&etag=%223a589fab2a391cbb53c864fb0e1878cf5d98c391%22 |
yahyaabd/allstats-v1-2 | yahyaabd | "2025-01-16T17:37:54Z" | 144 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:79621",
"loss:CosineSimilarityLoss",
"dataset:yahyaabd/allstats-search-pairs-dataset",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-01-16T17:37:12Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:79621
- loss:CosineSimilarityLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
- source_sentence: Data demografi Indonesia 2021 perempuan dan lakilaki
sentences:
- Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut Komoditi HS, Februari
2015
- Statistik Potensi Desa Provinsi Jawa Barat 2014
- Pengeluaran untuk Konsumsi Penduduk Indonesia, September 2017
- source_sentence: Data analisis tematik kependudukan Indonesia migrasi dan ketenagakerjaan
sentences:
- Direktori Perusahaan Industri Penggilingan Padi Tahun 2012 Provinsi Bengkulu
- Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut HS, Juni 2023
- Luas Panen dan Produksi Padi 2022
- source_sentence: Daftar perusahaan penggilingan padi Kalimantan
sentences:
- Ringkasan Neraca Arus Dana, Triwulan II, 2011*), (Miliar Rupiah)
- Klasifikasi Baku Komoditas Indonesia 2012 Buku 1
- Statistik Penduduk Lanjut Usia Provinsi Nusa Tenggara Barat 2010-Hasil Sensus
Penduduk 2010
- source_sentence: Perdagangan luar negeri impor Januari 2010
sentences:
- Buletin Statistik Perdagangan Luar Negeri Impor Januari 2010
- Statistik Tanaman Sayuran dan Buah-buahan Semusim Indonesia 2012
- Klasifikasi Baku Komoditas Indonesia (KBKI) 2012 Buku 4
- source_sentence: Biaya hidup kelompok perumahan Indonesia 2017
sentences:
- Indeks Harga Perdagangan Besar 2007
- Statistik Upah 2013
- Survei Biaya Hidup (SBH) 2018 Bulukumba, Watampone, Makassar, Pare-Pare, dan Palopo
datasets:
- yahyaabd/allstats-search-pairs-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstats semantic mpnet eval
type: allstats-semantic-mpnet-eval
metrics:
- type: pearson_cosine
value: 0.9832363617810226
name: Pearson Cosine
- type: spearman_cosine
value: 0.858203940325394
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstats semantic mpnet test
type: allstats-semantic-mpnet-test
metrics:
- type: pearson_cosine
value: 0.9839788057910102
name: Pearson Cosine
- type: spearman_cosine
value: 0.8555371006656465
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [allstats-search-pairs-dataset](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [allstats-search-pairs-dataset](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-v1-2")
# Run inference
sentences = [
'Biaya hidup kelompok perumahan Indonesia 2017',
'Statistik Upah 2013',
'Survei Biaya Hidup (SBH) 2018 Bulukumba, Watampone, Makassar, Pare-Pare, dan Palopo',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `allstats-semantic-mpnet-eval` and `allstats-semantic-mpnet-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | allstats-semantic-mpnet-eval | allstats-semantic-mpnet-test |
|:--------------------|:-----------------------------|:-----------------------------|
| pearson_cosine | 0.9832 | 0.984 |
| **spearman_cosine** | **0.8582** | **0.8555** |
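If you want to reproduce these numbers yourself, a minimal evaluation sketch is shown below; it assumes the evaluation split of the pairs dataset and its `query`/`doc`/`label` columns (the split name is an assumption):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("yahyaabd/allstats-v1-2")
pairs = load_dataset("yahyaabd/allstats-search-pairs-dataset", split="test")  # split name assumed

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=pairs["query"],
    sentences2=pairs["doc"],
    scores=pairs["label"],
    name="allstats-semantic-mpnet-test",
)
print(evaluator(model))  # dict containing the pearson_cosine / spearman_cosine scores
```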
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### allstats-search-pairs-dataset
* Dataset: [allstats-search-pairs-dataset](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset) at [6712cb1](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset/tree/6712cb14bbd89da6f87890ac082b09e0adb7a02e)
* Size: 79,621 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.78 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.73 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.44</li><li>max: 0.99</li></ul> |
* Samples:
| query | doc | label |
|:--------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------|:------------------|
| <code>Produksi jagung di Indonesia tahun 2009</code> | <code>Indeks Unit Value Ekspor Menurut Kode SITC Bulan Februari 2024</code> | <code>0.1</code> |
| <code>Data produksi industri manufaktur 2021</code> | <code>Perkembangan Indeks Produksi Industri Manufaktur 2021</code> | <code>0.96</code> |
| <code>direktori perusahaan industri penggilingan padi tahun 2012 provinsi sulawesi utara dan gorontalo</code> | <code>Neraca Pemerintahan Umum Indonesia 2007-2012</code> | <code>0.03</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
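As a rough sketch of how a model can be fine-tuned with this loss (not the exact training script used here; the split name and column layout are assumptions based on the dataset description above):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
train_ds = load_dataset("yahyaabd/allstats-search-pairs-dataset", split="train")  # split name assumed

# CosineSimilarityLoss expects two text columns plus a float "label" column, as in this dataset
loss = CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_ds, loss=loss)
trainer.train()
```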
### Evaluation Dataset
#### allstats-search-pairs-dataset
* Dataset: [allstats-search-pairs-dataset](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset) at [6712cb1](https://huggingface.co/datasets/yahyaabd/allstats-search-pairs-dataset/tree/6712cb14bbd89da6f87890ac082b09e0adb7a02e)
* Size: 9,952 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.75 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.09 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 0.01</li><li>mean: 0.48</li><li>max: 0.99</li></ul> |
* Samples:
| query | doc | label |
|:--------------------------------------------------------------------|:-----------------------------------------------------------------|:------------------|
| <code>Daftar perusahaan industri pengolahan skala kecil 2006</code> | <code>Statistik Migrasi Nusa Tenggara Barat Hasil SP 2010</code> | <code>0.05</code> |
| <code>Populasi Indonesia per provinsi 2000-2010</code> | <code>Indikator Ekonomi Desember 2023</code> | <code>0.08</code> |
| <code>Data harga barang desa non-pangan tahun 2022</code> | <code>Statistik Kunjungan Tamu Asing 2004</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 16
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `label_smoothing_factor`: 0.01
- `eval_on_start`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 16
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.01
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-mpnet-eval_spearman_cosine | allstats-semantic-mpnet-test_spearman_cosine |
|:-----------:|:---------:|:-------------:|:---------------:|:--------------------------------------------:|:--------------------------------------------:|
| 0 | 0 | - | 0.0958 | 0.6404 | - |
| 0.2008 | 250 | 0.0493 | 0.0262 | 0.7681 | - |
| 0.4016 | 500 | 0.0232 | 0.0187 | 0.7719 | - |
| 0.6024 | 750 | 0.0182 | 0.0157 | 0.7784 | - |
| 0.8032 | 1000 | 0.0163 | 0.0142 | 0.7805 | - |
| 1.0040 | 1250 | 0.0141 | 0.0137 | 0.7782 | - |
| 1.2048 | 1500 | 0.0114 | 0.0121 | 0.7815 | - |
| 1.4056 | 1750 | 0.0108 | 0.0115 | 0.7848 | - |
| 1.6064 | 2000 | 0.0109 | 0.0111 | 0.7873 | - |
| 1.8072 | 2250 | 0.0101 | 0.0103 | 0.7923 | - |
| 2.0080 | 2500 | 0.0096 | 0.0103 | 0.7931 | - |
| 2.2088 | 2750 | 0.0074 | 0.0092 | 0.7941 | - |
| 2.4096 | 3000 | 0.0067 | 0.0089 | 0.7985 | - |
| 2.6104 | 3250 | 0.0065 | 0.0086 | 0.8016 | - |
| 2.8112 | 3500 | 0.0064 | 0.0083 | 0.8017 | - |
| 3.0120 | 3750 | 0.0064 | 0.0084 | 0.8049 | - |
| 3.2129 | 4000 | 0.0045 | 0.0078 | 0.8090 | - |
| 3.4137 | 4250 | 0.0046 | 0.0078 | 0.8078 | - |
| 3.6145 | 4500 | 0.0043 | 0.0075 | 0.8111 | - |
| 3.8153 | 4750 | 0.0047 | 0.0074 | 0.8103 | - |
| 4.0161 | 5000 | 0.0047 | 0.0077 | 0.8084 | - |
| 4.2169 | 5250 | 0.0036 | 0.0072 | 0.8161 | - |
| 4.4177 | 5500 | 0.0034 | 0.0070 | 0.8187 | - |
| 4.6185 | 5750 | 0.0033 | 0.0068 | 0.8186 | - |
| 4.8193 | 6000 | 0.0035 | 0.0068 | 0.8220 | - |
| 5.0201 | 6250 | 0.0033 | 0.0071 | 0.8193 | - |
| 5.2209 | 6500 | 0.0027 | 0.0067 | 0.8239 | - |
| 5.4217 | 6750 | 0.0026 | 0.0069 | 0.8269 | - |
| 5.6225 | 7000 | 0.0027 | 0.0064 | 0.8275 | - |
| 5.8233 | 7250 | 0.0026 | 0.0066 | 0.8278 | - |
| 6.0241 | 7500 | 0.0026 | 0.0064 | 0.8293 | - |
| 6.2249 | 7750 | 0.0019 | 0.0064 | 0.8298 | - |
| 6.4257 | 8000 | 0.0019 | 0.0062 | 0.8319 | - |
| 6.6265 | 8250 | 0.0021 | 0.0064 | 0.8300 | - |
| 6.8273 | 8500 | 0.0022 | 0.0061 | 0.8338 | - |
| 7.0281 | 8750 | 0.0021 | 0.0063 | 0.8330 | - |
| 7.2289 | 9000 | 0.0016 | 0.0061 | 0.8374 | - |
| 7.4297 | 9250 | 0.0016 | 0.0059 | 0.8394 | - |
| 7.6305 | 9500 | 0.0016 | 0.0060 | 0.8379 | - |
| 7.8313 | 9750 | 0.0017 | 0.0062 | 0.8371 | - |
| 8.0321 | 10000 | 0.0017 | 0.0061 | 0.8392 | - |
| 8.2329 | 10250 | 0.0013 | 0.0061 | 0.8407 | - |
| 8.4337 | 10500 | 0.0013 | 0.0059 | 0.8418 | - |
| 8.6345 | 10750 | 0.0014 | 0.0060 | 0.8400 | - |
| 8.8353 | 11000 | 0.0014 | 0.0064 | 0.8415 | - |
| 9.0361 | 11250 | 0.0013 | 0.0060 | 0.8428 | - |
| 9.2369 | 11500 | 0.001 | 0.0058 | 0.8435 | - |
| 9.4378 | 11750 | 0.0011 | 0.0058 | 0.8458 | - |
| 9.6386 | 12000 | 0.0011 | 0.0060 | 0.8460 | - |
| 9.8394 | 12250 | 0.0011 | 0.0060 | 0.8468 | - |
| 10.0402 | 12500 | 0.0011 | 0.0060 | 0.8448 | - |
| 10.2410 | 12750 | 0.0009 | 0.0060 | 0.8476 | - |
| 10.4418 | 13000 | 0.0008 | 0.0059 | 0.8489 | - |
| 10.6426 | 13250 | 0.0009 | 0.0058 | 0.8485 | - |
| 10.8434 | 13500 | 0.0009 | 0.0059 | 0.8496 | - |
| 11.0442 | 13750 | 0.0008 | 0.0059 | 0.8492 | - |
| 11.2450 | 14000 | 0.0007 | 0.0058 | 0.8489 | - |
| 11.4458 | 14250 | 0.0007 | 0.0058 | 0.8507 | - |
| 11.6466 | 14500 | 0.0007 | 0.0057 | 0.8515 | - |
| 11.8474 | 14750 | 0.0007 | 0.0058 | 0.8505 | - |
| 12.0482 | 15000 | 0.0007 | 0.0058 | 0.8515 | - |
| 12.2490 | 15250 | 0.0006 | 0.0058 | 0.8523 | - |
| 12.4498 | 15500 | 0.0006 | 0.0057 | 0.8525 | - |
| 12.6506 | 15750 | 0.0006 | 0.0056 | 0.8542 | - |
| 12.8514 | 16000 | 0.0006 | 0.0056 | 0.8545 | - |
| 13.0522 | 16250 | 0.0006 | 0.0056 | 0.8550 | - |
| 13.2530 | 16500 | 0.0005 | 0.0056 | 0.8550 | - |
| 13.4538 | 16750 | 0.0005 | 0.0056 | 0.8553 | - |
| 13.6546 | 17000 | 0.0005 | 0.0055 | 0.8556 | - |
| 13.8554 | 17250 | 0.0005 | 0.0056 | 0.8559 | - |
| 14.0562 | 17500 | 0.0005 | 0.0056 | 0.8561 | - |
| 14.2570 | 17750 | 0.0004 | 0.0055 | 0.8570 | - |
| 14.4578 | 18000 | 0.0004 | 0.0055 | 0.8567 | - |
| 14.6586 | 18250 | 0.0004 | 0.0055 | 0.8575 | - |
| 14.8594 | 18500 | 0.0004 | 0.0055 | 0.8575 | - |
| **15.0602** | **18750** | **0.0004** | **0.0055** | **0.858** | **-** |
| 15.2610 | 19000 | 0.0004 | 0.0055 | 0.8583 | - |
| 15.4618 | 19250 | 0.0003 | 0.0055 | 0.8583 | - |
| 15.6627 | 19500 | 0.0004 | 0.0055 | 0.8581 | - |
| 15.8635 | 19750 | 0.0004 | 0.0055 | 0.8582 | - |
| 16.0 | 19920 | - | - | - | 0.8555 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ANGKJ1995/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-mbib-2048 | ANGKJ1995 | "2024-06-21T04:20:54Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-21T04:20:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
salmaniq/Text-to-speech-voice_cloning | salmaniq | "2023-10-24T11:41:38Z" | 0 | 2 | null | [
"region:us"
] | null | "2023-10-24T11:38:41Z" |
# TTS-RVC-API
Yes, we can use Coqui with RVC!
## Why combine the two frameworks?
Coqui is a text-to-speech framework (vocoder and encoder), but cloning your own voice with it takes an impractically long time and offers no guarantee of better results. That's why we use RVC (Retrieval-Based Voice Conversion), which works only for speech-to-speech. You can train the RVC model with just 2-3 minutes of audio, because it uses HuBERT (a pre-trained model that fine-tunes quickly and gives better results).
## Installation
How to use the Coqui + RVC API? First, clone the repository:
```bash
git clone https://github.com/skshadan/TTS-RVC-API.git
cd TTS-RVC-API
```
Then set up the environment and install the dependencies:
```bash
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
pip install TTS
python -m uvicorn app.main:app
```
Now update `config.toml` with the relative paths: set the `model_dir` path, or set a `speaker_name` in the request body instead.
The RVC v2 model is expected to be mounted in the container at:
```text
/
└── models
└── speaker1
├── speaker1.pth
└── speaker1.index
```
Now run the server:
```bash
python -m uvicorn app.main:app
```
## POST REQUEST
Send a `POST` request to:
```text
http://localhost:8000/generate
```
```text
emotions: happy, sad, angry, dull
speed: 1.0 - 2.0
```
```json
{
"speaker_name": "speaker3",
"input_text": "Hey there! Welcome to the world",
"emotion": "Surprise",
"speed": 1.0
}
```
## CODE SNIPPET
```python
import requests
import json
import time
url = "http://127.0.0.1:8000/generate"
payload = json.dumps({
"speaker_name": "speaker3",
"input_text": "Are you mad? The way you've betrayed me is beyond comprehension, a slap in the face that's left me boiling with an anger so intense it's as if you've thrown gasoline on a fire, utterly destroying any trust that was left.",
"emotion": "Dull",
"speed": 1.0
})
headers = {
'Content-Type': 'application/json'
}
start_time = time.time() # Start the timer
response = requests.request("POST", url, headers=headers, data=payload)
end_time = time.time() # Stop the timer
if response.status_code == 200:
audio_content = response.content
# Save the audio to a file
with open("generated_audio.wav", "wb") as audio_file:
audio_file.write(audio_content)
print("Audio saved successfully.")
print("Time taken:", end_time - start_time, "seconds")
else:
print("Error:", response.text)
```
## Feedback
If you have any feedback or issues, please reach out to [email protected]
|
Closen/dqn-SpaceInvadersNoFrameskip-v4 | Closen | "2023-01-11T14:28:07Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-11T14:27:24Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 832.50 +/- 329.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Closen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Closen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Closen
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
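Outside the RL Zoo scripts, a minimal loading sketch with `huggingface_sb3` and Stable Baselines3 might look like this (the checkpoint filename and the `custom_objects` workaround are assumptions):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed to follow the RL Zoo naming convention)
checkpoint = load_from_hub(
    repo_id="Closen/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# custom_objects can help when the checkpoint was saved with a different SB3/Python version
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
```

Note that to actually roll out the policy you still need the same Atari wrappers and 4-frame stacking listed in the hyperparameters above.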
|
unsloth/Llama-3.2-1B-bnb-4bit | unsloth | "2025-01-22T09:33:04Z" | 35,526 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-09-25T18:38:40Z" | ---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-1B-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B)
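For plain inference (without fine-tuning), a minimal sketch with 🤗 Transformers is shown below; it assumes `bitsandbytes` is installed and a CUDA GPU is available, since the checkpoint is stored in 4-bit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Llama-3.2-1B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are already bitsandbytes 4-bit quantized, so no extra quantization config is needed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```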
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
mlx-community/UI-TARS-7B-SFT-4bit | mlx-community | "2025-03-03T12:12:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"gui",
"mlx",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-03T12:11:03Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- gui
- mlx
library_name: transformers
---
# mlx-community/UI-TARS-7B-SFT-4bit
This model was converted to MLX format from [`bytedance-research/UI-TARS-7B-SFT`]() using mlx-vlm version **0.1.14**.
Refer to the [original model card](https://huggingface.co/bytedance-research/UI-TARS-7B-SFT) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/UI-TARS-7B-SFT-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
piazzola/address-detection-model | piazzola | "2023-10-13T18:28:38Z" | 34 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-13T17:14:12Z" | ---
license: cc-by-nc-2.0
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [addressWithContext](https://huggingface.co/datasets/piazzola/addressWithContext) dataset.
## Model description
**Make sure to set max_new_tokens = 20; otherwise, the model will generate one token at a time.**
```python
from transformers import pipeline

nlp = pipeline("text-generation",
               model="piazzola/tmp_trainer",
               max_new_tokens=20)
nlp("I live at 15 Firstfield Road.")
```
**Note that if you would like to try longer sentences using the Hosted Inference API
on the right-hand side of this page, you might need to click "Compute" more than once to get the address.**
## Intended uses & limitations
The model is intended to detect addresses that occur in a sentence.
## Training and evaluation data
This model is trained on `piazzola/addressWithContext`.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1 |
sgolkar/gpt2-medium-finetuned-brookstraining | sgolkar | "2023-04-03T22:25:30Z" | 205 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-03T21:38:17Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-medium-finetuned-brookstraining
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-finetuned-brookstraining
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | 3.4632 |
| No log | 2.0 | 200 | 3.4360 |
| No log | 3.0 | 300 | 3.4539 |
| No log | 4.0 | 400 | 3.4867 |
| 3.2934 | 5.0 | 500 | 3.5341 |
| 3.2934 | 6.0 | 600 | 3.6145 |
| 3.2934 | 7.0 | 700 | 3.6938 |
| 3.2934 | 8.0 | 800 | 3.8198 |
| 3.2934 | 9.0 | 900 | 3.9274 |
| 2.2258 | 10.0 | 1000 | 4.0388 |
| 2.2258 | 11.0 | 1100 | 4.1807 |
| 2.2258 | 12.0 | 1200 | 4.2635 |
| 2.2258 | 13.0 | 1300 | 4.3549 |
| 2.2258 | 14.0 | 1400 | 4.5134 |
| 1.5305 | 15.0 | 1500 | 4.5719 |
| 1.5305 | 16.0 | 1600 | 4.6932 |
| 1.5305 | 17.0 | 1700 | 4.7392 |
| 1.5305 | 18.0 | 1800 | 4.7729 |
| 1.5305 | 19.0 | 1900 | 4.8324 |
| 1.1988 | 20.0 | 2000 | 4.8470 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.11.0
|
QuantFactory/DeepSeek-R1-Distill-Qwen-1.5B-GGUF | QuantFactory | "2025-01-23T17:58:05Z" | 1,819 | 1 | transformers | [
"transformers",
"gguf",
"arxiv:2501.12948",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-23T17:48:53Z" |
---
license: mit
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
This is a quantized version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B), created using llama.cpp.
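For local inference from Python, a minimal sketch with `llama-cpp-python` could look like the following; the GGUF filename pattern below is an assumption, so adjust it to whichever quantization level you download from this repo:

```python
from llama_cpp import Llama

# Pull a GGUF file from this repo via the Hugging Face Hub (filename pattern assumed)
llm = Llama.from_pretrained(
    repo_id="QuantFactory/DeepSeek-R1-Distill-Qwen-1.5B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Please reason step by step: what is 17 * 24?"}],
    temperature=0.6,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```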
# Original Model Card
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our setting to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink"
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
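Putting these recommendations together, a minimal request sketch against a local OpenAI-compatible endpoint (for example the vLLM server started above; the URL and model name below are assumptions) could look like:

```python
from openai import OpenAI

# vLLM-style local endpoint (address and API key are placeholders)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    # No system prompt: all instructions go into the user message
    messages=[{
        "role": "user",
        "content": "Solve x^2 - 5x + 6 = 0. Please reason step by step, and put your final answer within \\boxed{}.",
    }],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```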
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
StepLaw/StepLaw-N_429M-D_39.0B-LR2.76E-03-BS4194304 | StepLaw | "2025-04-11T10:22:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-11T10:20:18Z" | ---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h1280_ffnh9472_numh10_numl10_lr2.76E-03_bs2048_ti9536_mlr1.00E-05
results: []
---
# Wandb Model Name: step2v2_0618_h1280_ffnh9472_numh10_numl10_lr2.76E-03_bs2048_ti9536_mlr1.00E-05
This model is part of the [StepLaw-N_429M-D_39.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_429M-D_39.0B) collection.
## Model Specifications
### Architecture
- **Hidden size (H)**: 1280
- **Feed-forward network size (FFN)**: 9472
- **Attention heads**: 10
- **Layers**: 10
- **Parameter count**: 429M
### Training Parameters
- **Learning rate (lr)**: 2.76E-03
- **Batch size (bs)**: 2048
- **Training iterations**: 9536
- **Training tokens (D)**: 40.0B
## Model Description
StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 2.76E-03 and batch size 2048 for 9536 iterations, using a total of 40.0B training tokens.
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "StepLaw/StepLaw-N_429M-D_39.0B-LR2.76E-03-BS4194304"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
# Generate text
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Part of StepLaw Project
StepLaw is an initiative to provide thousands of models for optimal hyperparameter research.
Visit [StepLaw Project](https://step-law.github.io/) for more information.
|
VERSIL91/f21f92ca-19bf-4801-9cd7-807726684632 | VERSIL91 | "2025-01-16T19:59:43Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2025-01-16T19:59:37Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8568dd3a-ee91-4b6d-92bb-4544ba3d5c37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- db2fe1c0d13dadac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/db2fe1c0d13dadac_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: null
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/db2fe1c0d13dadac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 09c4d92f-d31c-4e8f-a2d5-27310b3103c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 09c4d92f-d31c-4e8f-a2d5-27310b3103c7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8568dd3a-ee91-4b6d-92bb-4544ba3d5c37
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9567
## Model description
More information needed
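A minimal loading sketch, assuming this repository hosts the LoRA adapter produced by the run above and that `unsloth/gemma-2-2b` (the base model from the config) is available locally or on the Hub:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2-2b"                                # base model from the config above
adapter_id = "VERSIL91/f21f92ca-19bf-4801-9cd7-807726684632"  # assumed: this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)          # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```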
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9033 | 0.0021 | 1 | 2.0062 |
| 1.5411 | 0.0063 | 3 | 2.0053 |
| 1.6763 | 0.0125 | 6 | 1.9911 |
| 1.8414 | 0.0188 | 9 | 1.9567 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rgbayrak/deep-physio-recon | rgbayrak | "2025-02-17T15:57:17Z" | 0 | 0 | null | [
"rv",
"hr",
"fmri",
"en",
"license:mit",
"region:us"
] | null | "2023-09-12T22:01:17Z" | ---
license: mit
language:
- en
tags:
- rv
- hr
- fmri
---
**Physiological Signal Reconstruction directly from Functional Magnetic Resonance Imaging Data**
Functional magnetic resonance imaging (fMRI) is a powerful technique for studying human brain activity and large-scale neural circuits. However, fMRI signals can be strongly modulated by slow changes in respiration volume (RV) and heart rate (HR). Monitoring cardiac and respiratory signals during fMRI enables modeling and/or reducing such effects; yet, physiological measurements are often unavailable in practice, and are missing from a large number of fMRI datasets. Here, we propose learning approaches for inferring RV and HR signals directly from fMRI time-series dynamics.
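The card does not document a loading API, but the core idea can be sketched with a toy PyTorch model: a small network that ingests ROI-averaged fMRI time series and regresses a physiological value per window. Everything below (ROI count, window length, architecture) is purely illustrative and is not the released model.

```python
import torch
import torch.nn as nn

class PhysioRegressor(nn.Module):
    """Toy 1-D CNN mapping (batch, n_rois, timepoints) -> one RV/HR value per window."""
    def __init__(self, n_rois: int = 90, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_rois, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time
            nn.Flatten(),
            nn.Linear(hidden, 1),     # predicted RV (or HR) value for the window
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PhysioRegressor()
windows = torch.randn(8, 90, 64)  # 8 windows, 90 ROIs, 64 timepoints each
print(model(windows).shape)       # torch.Size([8, 1])
```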
---
If you use these models, please cite:
```
@article{bayraktracing,
  title={Tracing peripheral physiology in low frequency fMRI dynamics},
  author={Bayrak, Roza Gunes and Hansen, Colin and Salas, Jorge and Ahmed, Nafis and Lyu, Ilwoo and Mather, Mara and Huo, Yuankai and Chang, Catie},
  publisher={OSF}
}
``` |
casellimarco/Taxi | casellimarco | "2023-05-12T09:32:12Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-12T09:32:11Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="casellimarco/Taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
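As a follow-up, a greedy rollout with the loaded Q-table might look like the sketch below. It assumes the pickled dictionary exposes the table under a `qtable` key (as in the Hugging Face Deep RL course notebooks) and a gym/gymnasium version with the five-tuple `step` API.

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```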
|
VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft | VPTQ-community | "2025-02-25T17:30:21Z" | 3 | 0 | null | [
"safetensors",
"llama",
"VPTQ",
"Quantized",
"Quantization",
"arxiv:2409.17066",
"base_model:mistralai/Mistral-Large-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Large-Instruct-2407",
"license:other",
"vptq",
"region:us"
] | null | "2024-10-18T05:42:30Z" |
---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
base_model:
- mistralai/Mistral-Large-Instruct-2407
base_model_relation: quantized
tags:
- VPTQ
- Quantized
- Quantization
---
**Disclaimer**:
The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* [github](https://github.com/microsoft/vptq) and [arXiv](https://arxiv.org/abs/2409.17066)
The model itself is sourced from a community release.
It is intended only for experimental purposes.
Users are responsible for any consequences arising from the use of this model.
**Note**:
The PPL test results are for reference only and were collected using the GPTQ testing script.
```json
{
"ctx_2048": {
"wikitext2": 6.874824523925781,
"c4": 10.510944366455078,
"c4-new": 12.159836769104004
},
"ctx_4096": {
"wikitext2": 6.375296592712402,
"c4": 9.913671493530273,
"c4-new": 11.587206840515137
},
"ctx_8192": {
"wikitext2": 6.131728172302246,
"c4": 7.288276672363281,
"c4-new": 11.48729133605957
}
}
```
|
hongrui/mammogram_v_2_2 | hongrui | "2023-06-27T09:48:52Z" | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-06-26T22:46:35Z" |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
visdata/st250 | visdata | "2024-12-31T14:27:50Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-31T14:19:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
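The card leaves this section empty. Since the repository is tagged as a `llama` text-generation model in the `transformers` format, a generic starting point (a sketch, not an officially documented recipe) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "visdata/st250"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```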
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.1-steps-800 | trl-lib | "2023-12-27T12:06:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2023-12-27T12:06:31Z" | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.1-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
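The card leaves this section empty. Given the metadata (a PEFT adapter on top of `teknium/OpenHermes-2.5-Mistral-7B`), a hedged sketch for loading and querying it (the ChatML prompt format is assumed from the base model's card) would be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
adapter_id = "trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.1-steps-800"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model = model.merge_and_unload()  # optionally fold the adapter into the base weights

prompt = "<|im_start|>user\nWhat does KTO training optimize?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```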
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers | AlekseyCalvin | "2025-04-05T00:50:10Z" | 12 | 1 | diffusers | [
"diffusers",
"safetensors",
"Flux",
"FluxPipeline",
"text-to-image",
"flux schnell",
"image-generation",
"flux-diffusers",
"photo",
"realism",
"en",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:finetune:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2024-09-24T07:16:12Z" | ---
license: apache-2.0
tags:
- Flux
- FluxPipeline
- text-to-image
- flux schnell
- image-generation
- flux-diffusers
- diffusers
- photo
- realism
pipeline_tag: text-to-image
library_name: diffusers
emoji: 🔜
language:
- en
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: HST autochrome photo
widget:
- text: HST style autochrome photo of a young woman playing poker against a blue-feathered dinosaur sitting across from her, moderately wrinkled blemished lined skin texture with pores
output:
url: Hstv2r.png
  - text: (w/ our Mayakovsky LoRA) HST photo of Mayakovsky sleeping, seeing a dream wherein rice shoots bud on lush green fields, text 'MAYAKOVSKY SAW A DREAM'
output:
url: Hstv2Mayak.webp
- text: HST style photo of a young woman playing a Telecaster electric guitar and singing the blues
output:
url: Hstv2guitar.webp
- text: hst style photo of an aging dark-haired woman playing guitar in an old Soviet apartment
output:
url: Hstv2r3.png
- text: hst style photo of a young dark-haired woman embracing a red-feathered dinosaur
output:
url: HistoricI.png
- text: hst style autochrome vintage color photo of gigantic Rosa Luxemburg walking over iced-over planet Earth
output:
url: Hst2legs.png
---
# **Historic Color Soon® V.2**
The second **FLUX**-based & open-licensed full-model checkpoint in our **HSToric Color** series.<br>
Trained on HD scans of early color photos (circa *1900s-1910s*) by **Sergey Prokudin-Gorsky**, who traveled and photographed widely in those years whilst perfecting implementations of a pioneering 3-color-composite photography technique.<br>
**This model is aimed at being useful for**:<br>
- Quality generation at a low step-count (2 to 8, for most scenarios), with 4-step inference at around 768x768 routinely producing photorealistic outputs at a quality plausibly preferable to that of **Flux v.1 Dev**. <br>
- Producing realistic images reminiscent of color film analog photography, exhibiting parallels to a broad spectrum of iconic instrumentalities and visual paradigms, from Autochrome-to-Kodachrome-to-Fujifilm-and-beyond. <br>
- Producing visuals with a vaguely "historical" or "lived-in" aesthetic character, striking chromaticity and luminosity dynamics, as well as textural/anatomical/skin details more reliably lifelike than other models at a comparable step-count/resource expenditure. <br>
- Extending realism options under an irrevocable commercial license. <br>
<Gallery />
## Testing Space:
You may try out the **V2** checkpoint at [one of our LoRA gallery spaces](https://huggingface.co/spaces/AlekseyCalvin/soonfactory4), along with many of our trained LoRAs!<br>
## Bit of Model History + TOOL SHARES:
[Historic Color Soon® V.1](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) was fine-tuned by us from [HumbleMikey](https://civitai.com/user/humblemikey)'s [Pixelwave Schnell V.1](https://huggingface.co/mikeyandfriends/PixelWave_FLUX.1-schnell_01/) model which, in its turn, is a generalized base checkpoint trained from [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) by **Black Forest Labs**, consolidating (in comparison w/vanilla-base-**Schnell**) further inference speed improvements (more reliable results at 2-3 steps), whilst raising the overall quality and consistency standards across most aesthetic categories and at every step.<br>
This version, **Historic Color Soon® V.2**, was created through merging into **V.1** a handful of LoRAs trained by us on the (fairly narrow) available range of realistic Flux checkpoint models that are exclusively **Schnell**-derived, so as to stay within the fairly open **Apache 2.0** licensing domain (which was among our reasons to do all this in the first place).<br>
**Historic Color Soon® V.1** is available [here](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) in both **Safetensors** (fp8) & **Diffusers** formats.<br>
To fine-tune **Flux**, try the dedicated [Flux Training Notebook by Ostris](https://github.com/ostris/ai-toolkit/blob/main/notebooks/FLUX_1_schnell_LoRA_Training.ipynb).<br>
**Ostris**' training adapter for **Schnell** is found here: [ostris/FLUX.1-schnell-training-adapter](https://huggingface.co/ostris/FLUX.1-schnell-training-adapter).<br>
To merge **Flux** models and LoRAs, use the *'flux_merge_lora.py'* script from the sd3-branch & /networks (subfolder) of [Kohya-ss's sd-scripts git](https://github.com/kohya-ss/sd-scripts/tree/sd3).<br>
## Bit of Actual History:
**Prokudin-Gorsky**'s color photography technique would involve three photo-exposures, either simultaneous or sequential, using specialized color-spectrum filters (basically R.B.G.: red, blue, and green), rendering a subject/shot onto glass plates covered with light-emulsive mixture.<br>
The photographer's focus on refining the developer and filter quality, in tandem with his incessant and wide-ranging experimentation, and his artful optimizations of glass plates (generally unwieldy, esp. for color, and by the 1910's already becoming outmoded for B&W on-location shoots, though elsewise extra reliable) ultimately led him to produce a color photography oeuvre of much greater fidelity and vividness than achieved by most of his contemporaries.<br>
At the same time, the peculiarities of the photographer's method, coupled with his exceptionally hands-on execution thereof, would manifest in a range of idiosyncratic color, light, and motion artifacts common across the resulting prints.<br>
Seldom marring the image as a whole, and less grave than the weaknesses of some co-emerging autochrome techniques, the warm color hazes & flares framing many of **Prokudin-Gorsky**'s prints constitute a kind of ephemeral signature.<br>
Alongside some of the more subtle chromatic, textural, and (in some measure) figural characteristics of his work, these auras have reliably imprinted themselves into this and other LoRAs and Models within our gallery of fine-tunes for Flux and StableDiffusion3.5, fine-tuned exclusively on non-synthetic (human-made and pre-curated) open-access data from iconic, influential, and/or otherwise compelling historical sources.<br>
We urge you to explore the works of **Prokudin-Gorsky** for yourself, at the wonderfully organized online [archive at this link](https://prokudin-gorsky.org/), featuring many hundreds of high quality downloadable scans of composite color photo prints from the photographer's original glass plate negatives, available at this site alongside relatively recent restorations of a substantial portion of the images. The original glass-plate negatives are currently held at and administrated by the Library of Congress in Washington, DC, USA. <br>
## Diffusers:<br>
To use `Historic Color SOON® V.2` with the 🧨 diffusers python library, first install or upgrade diffusers:<br>
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model:
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "HST style autochrome film photograph portrait of 1910 woman playing poker against a purple feathered dinosaur, the green-eyed woman has moderately blemished skin with visible lines and pores, she smiles, film grain, Kodachrome"
image = pipe(
prompt,
guidance_scale=1.2,
num_inference_steps=4,
max_sequence_length=256,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("hstcolor1.png")
```
To learn more check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
<br>
Lastly, if you're into literature broadly and old modernist poetry specifically, check out our verse translations at [SILVER AGE POETS](https://www.SilverAgePoets.com/the-poets-and-their-stories)! |
jethrowang/whisper-tiny_tat-esc_vanilla | jethrowang | "2025-04-14T10:46:44Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | "2025-04-14T05:40:28Z" | |
indiehackers/gemma7b-telugu-instruct | indiehackers | "2024-04-20T17:29:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-20T17:28:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** indiehackers
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
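The card does not show an inference example; a sketch using unsloth's loader (assuming the repository contains weights loadable this way, and that the instruction format matches gemma-7b-it) could look like:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="indiehackers/gemma7b-telugu-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer(["Translate to Telugu: Good morning!"], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, max_new_tokens=64)))
```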
|
borgcollectivegmbh/role-prestige-predictor-v1 | borgcollectivegmbh | "2025-03-30T23:58:37Z" | 28 | 0 | transformers | [
"transformers",
"safetensors",
"role_prestige",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-21T03:17:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
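The card leaves this section empty. Because the repository registers a custom `role_prestige` architecture, loading it requires `trust_remote_code=True`; whether it ships its own tokenizer or expects a particular input format is not stated here, so the snippet below is only an assumed starting point:

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "borgcollectivegmbh/role-prestige-predictor-v1"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Senior Software Engineer", return_tensors="pt")
features = model(**inputs)  # feature-extraction output; exact fields depend on the custom code
```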
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bilbo991/clip-mixer-300k | bilbo991 | "2023-08-17T00:07:23Z" | 90 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-08-16T18:45:04Z" | ---
base_model: clip-mixer-300k
tags:
- generated_from_trainer
model-index:
- name: clip-mixer-300k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-mixer-300k
This model is a fine-tuned version of [clip-mixer-300k](https://huggingface.co/clip-mixer-300k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2811 | 1.0 | 9375 | 2.5993 |
| 0.8159 | 2.0 | 18750 | 2.4179 |
| 0.374 | 3.0 | 28125 | 2.4597 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.1
- Tokenizers 0.13.3
|
yamatazen/ElvenMaid-12B | yamatazen | "2025-04-05T04:32:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"chatml",
"conversational",
"en",
"ja",
"arxiv:2306.01708",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:yamatazen/Himeyuri-Magnum-12B",
"base_model:merge:yamatazen/Himeyuri-Magnum-12B",
"base_model:yamatazen/LoyalMaid-12B",
"base_model:merge:yamatazen/LoyalMaid-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T03:29:38Z" | ---
base_model:
- yamatazen/LoyalMaid-12B
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- inflatebot/MN-12B-Mag-Mell-R1
- yamatazen/Himeyuri-Magnum-12B
library_name: transformers
tags:
- mergekit
- merge
- chatml
language:
- en
- ja
---

This is a ChatML model.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/LoyalMaid-12B](https://huggingface.co/yamatazen/LoyalMaid-12B)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [yamatazen/Himeyuri-Magnum-12B](https://huggingface.co/yamatazen/Himeyuri-Magnum-12B)
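To reproduce a merge like this one, the YAML configuration shown in the next section can be passed to mergekit's command-line entry point; a minimal sketch (assuming mergekit is installed and the config is saved as `elvenmaid.yaml`):

```bash
pip install mergekit
mergekit-yaml elvenmaid.yaml ./ElvenMaid-12B --cuda
```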
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
models:
- model: yamatazen/Himeyuri-Magnum-12B
parameters:
density: 0.75
weight: 1.0
- model: yamatazen/LoyalMaid-12B
parameters:
density: 0.5
weight: 1.0
- model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
density: 0.3
weight: 1.0
merge_method: ties
dtype: bfloat16
parameters:
normalize: true
``` |
Warcylewis/real-xl | Warcylewis | "2024-05-08T09:49:07Z" | 189 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-08T09:41:27Z" | ---
license: creativeml-openrail-m
---
|
Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF | Triangle104 | "2025-01-13T04:17:08Z" | 28 | 0 | transformers | [
"transformers",
"gguf",
"falcon3",
"falcon3_mamba",
"falcon_mamba",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:tiiuae/Falcon3-Mamba-7B-Instruct",
"base_model:quantized:tiiuae/Falcon3-Mamba-7B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-13T04:15:26Z" | ---
language:
- en
tags:
- falcon3
- falcon3_mamba
- falcon_mamba
- llama-cpp
- gguf-my-repo
base_model: tiiuae/Falcon3-Mamba-7B-Instruct
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---
# Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`tiiuae/Falcon3-Mamba-7B-Instruct`](https://huggingface.co/tiiuae/Falcon3-Mamba-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/Falcon3-Mamba-7B-Instruct) for more details on the model.
---
Model details:
-
Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B.
This repository contains Falcon3-Mamba-7B-Instruct. Compared to similar SSM-based models of the same size, it achieves state-of-the-art results (at release time) on reasoning, language understanding, instruction following, code and mathematics tasks. Falcon3-Mamba-7B-Instruct supports a context length of up to 32K and was mainly trained on an English corpus.
Model Details

Architecture (same as Falcon-Mamba-7b):
- Mamba1-based causal decoder-only architecture trained on a causal language modeling task (i.e., predict the next token)
- 64 decoder blocks
- Width: 4096
- State size: 16
- 32k context length
- 65k vocab size

Training and provenance:
- Continue-pretrained from Falcon-Mamba-7b with another 1500 gigatokens of data consisting of web, code, STEM and high-quality data
- Post-trained on 1.2 million samples of STEM, conversations, code, and safety data
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF --hf-file falcon3-mamba-7b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF --hf-file falcon3-mamba-7b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF --hf-file falcon3-mamba-7b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Falcon3-Mamba-7B-Instruct-Q5_K_M-GGUF --hf-file falcon3-mamba-7b-instruct-q5_k_m.gguf -c 2048
```
|
StepLaw/StepLaw-N_1.0B-D_7.0B-LR5.524e-03-BS131072 | StepLaw | "2025-04-04T11:04:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-04T11:00:55Z" | ---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr5.524e-03_bs64_ti61035_mlr1e-5
results: []
---
# Wandb Model Name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr5.524e-03_bs64_ti61035_mlr1e-5
This model is part of the [StepLaw-N_1.0B-D_7.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_1.0B-D_7.0B) collection.
## Model Specifications
### Architecture
- **Hidden size (H)**: 2048
- **Feed-forward network size (FFN)**: 8192
- **Attention heads**: 16
- **Layers**: 16
- **Parameter count**: 1.1B
### Training Parameters
- **Learning rate (lr)**: 5.524e-03
- **Batch size (bs)**: 64
- **Training iterations**: 61035
- **Training tokens (D)**: 8.0B
## Model Description
StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 5.524e-03 and batch size 64 for 61035 iterations, using a total of 8.0B training tokens.
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "StepLaw/StepLaw-N_1.0B-D_7.0B-LR5.524e-03-BS131072"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
# Generate text
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Part of StepLaw Project
StepLaw is an initiative to provide thousands of models for optimal hyperparameter research.
Visit [StepLaw Project](https://step-law.github.io/) for more information.
|
gutsartificial/bge-small-en-v1.5-quality-weight-0.2 | gutsartificial | "2024-12-07T12:05:05Z" | 110 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-07T11:40:01Z" | ---
library_name: transformers
license: mit
base_model: BAAI/bge-small-en-v1.5
tags:
- generated_from_trainer
model-index:
- name: bge-small-en-v1.5-2024-12-07_04-44-32-quality-weight-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bge-small-en-v1.5-2024-12-07_04-44-32-quality-weight-0.2
This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
- Spearman: 0.9278
- Pearson: 0.9305
- Mse: 0.0209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman | Pearson | Mse |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------:|:------:|
| 0.0316 | 0.3998 | 1055 | 0.0277 | 0.8996 | 0.9041 | 0.0277 |
| 0.0268 | 0.7997 | 2110 | 0.0250 | 0.9089 | 0.9148 | 0.0250 |
| 0.0231 | 1.1995 | 3165 | 0.0241 | 0.9142 | 0.9197 | 0.0241 |
| 0.0227 | 1.5994 | 4220 | 0.0219 | 0.9210 | 0.9252 | 0.0219 |
| 0.0208 | 1.9992 | 5275 | 0.0222 | 0.9218 | 0.9274 | 0.0222 |
| 0.0178 | 2.3991 | 6330 | 0.0214 | 0.9226 | 0.9290 | 0.0214 |
| 0.0167 | 2.7989 | 7385 | 0.0206 | 0.9247 | 0.9307 | 0.0206 |
| 0.013 | 3.1988 | 8440 | 0.0209 | 0.9260 | 0.9299 | 0.0209 |
| 0.0141 | 3.5986 | 9495 | 0.0207 | 0.9270 | 0.9316 | 0.0207 |
| 0.0146 | 3.9985 | 10550 | 0.0204 | 0.9269 | 0.9316 | 0.0204 |
| 0.0107 | 4.3983 | 11605 | 0.0207 | 0.9272 | 0.9317 | 0.0207 |
| 0.0123 | 4.7982 | 12660 | 0.0206 | 0.9274 | 0.9318 | 0.0206 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.2
- Tokenizers 0.20.3
|