modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
pinzhenchen/sft-lora-multilingual-pythia-12b | pinzhenchen | "2024-04-04T22:49:41Z" | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"bg",
"cs",
"zh",
"de",
"fi",
"fr",
"ru",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-04T22:49:37Z" |
---
language:
- bg
- cs
- zh
- de
- fi
- fr
- ru
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
* Instruction tuning language: multilingual (Bulgarian, Czech, Chinese, German, Finnish, French, Russian, and Spanish)
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value} (see the configuration sketch after this list).
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
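A rough `peft` configuration sketch matching the details above (illustrative only, not the authors' training code; note that Pythia's GPT-NeoX architecture fuses key/query/value into a single `query_key_value` projection, which is the usual `peft` target):
```python
from peft import LoraConfig, TaskType

# Illustrative LoRA setup: rank 8, alpha 16, attention projections as targets.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # fused K/Q/V in GPT-NeoX/Pythia
)
```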
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
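For example, a minimal loading sketch (assuming the standard `peft` adapter layout; this is not the authors' official script):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repository's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-12b-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-12b-deduped")
model = PeftModel.from_pretrained(base, "pinzhenchen/sft-lora-multilingual-pythia-12b")
```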
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
FuseAI/FuseChat-7B-VaRM | FuseAI | "2024-03-16T07:51:55Z" | 252 | 84 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"mixtral",
"solar",
"model-fusion",
"fusechat",
"conversational",
"en",
"dataset:FuseAI/FuseChat-Mixture",
"arxiv:2402.16107",
"base_model:openchat/openchat_3.5",
"base_model:finetune:openchat/openchat_3.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-26T03:12:21Z" | ---
license: apache-2.0
language:
- en
base_model: openchat/openchat_3.5
datasets:
- FuseAI/FuseChat-Mixture
pipeline_tag: text-generation
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
library_name: transformers
model-index:
- name: FuseChat-7B-VaRM
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
value: 8.22
name: score
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
---
<p align="center" width="100%">
</p>
<div id="top" align="center">
<p style="font-size: 30px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>
<p style="font-size: 24px; font-weight: bold;">[SOTA 7B LLM on MT-Bench]</p>
<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 HuggingFace Repo </a> |
<a href="https://github.com/fanqiwan/FuseLLM"> 🐱 GitHub Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_
<!-- **Affiliations:** -->
_Sun Yat-sen University_
<p align="center">
<img src="./assets/fig_0.png" width="70%"> <br>
</p>
| Proprietary Models | #Params | MT-Bench | Open Source Models | #Params | MT-Bench |
|-----------------------------------------------------------------------|---------|----------|-----------------------------------------------------------------------|---------|----------|
| GPT-4-1106-preview | - | 9.32 | Qwen1.5-72B-Chat | 72B | 8.61 |
| GPT-4-0613 | - | 9.18 | Nous-Hermes-2-Mixtral-8x7B-DPO | 8x7B | 8.33 |
| GPT-4-0314 | - | 8.96 | Mixtral-8x7B-Instruct-v0.1 | 8x7B | 8.30 |
| Mistral Medium | - | 8.61 | 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 |
| GPT-3.5-Turbo-0613 | - | 8.39 | Starling-LM-7B-alpha | 7B | 8.09 |
| GPT-3.5-Turbo-1106 | - | 8.32 | Tulu-2-DPO-70B | 70B | 7.89 |
| 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 | OpenChat-3.5 | 7B | 7.81 |
| Claude-2.1 | - | 8.18 | OpenChat-3.5-0106 | 7B | 7.80 |
| Claude-2.0 | - | 8.06 | WizardLM-70B-v1.0 | 70B | 7.71 |
| GPT-3.5-Turbo-0314 | - | 7.94 | Yi-34B-Chat | 34B | 7.67 |
| Claude-1 | - | 7.90 | Nous-Hermes-2-SOLAR-10.7B | 10.7B | 7.66 |
</div>
## News
- **Feb 26, 2024:** 🔥🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
- **Feb 25, 2024:** 🔥 We release [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), a comprehensive training dataset that covers different styles and capabilities, featuring both human-written and model-generated data, and spanning general instruction-following and specific skills.
## Contents
- [Overview](#overview)
- [Model Release](#model-release)
- [Quick Start](#quick-start)
- [Data Construction](#data-construction)
- [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion)
- [Model Merging](#model-merging)
- [Evaluation](#evaluation)
- [Citation](#citation)
## Overview
In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple chat LLMs of varied structures and scales into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. First, it performs pairwise knowledge fusion of the source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged in the parameter space, where we propose a novel method, VaRM, for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning.
Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM shares a fundamentally similar purpose with related topics such as the recently popular mixture of experts (MoE), since they all aim to leverage the strengths of multiple models (experts). However, while MoEs must load multiple experts during inference, which raises memory requirements, knowledge fusion integrates multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.
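As a rough illustration of the VaRM idea (a simplified sketch under our own assumptions, not the released implementation, which computes weights at the level of individual parameter matrices), the merging weight of each target model can be made proportional to how much its parameters changed during fine-tuning:
```python
import torch

def varm_weights(base_state, target_states):
    """Sketch: weight each fine-tuned target model by the total squared change
    of its parameters relative to the shared base model (cf. the "square"
    merge_type in the scripts below)."""
    changes = []
    for state in target_states:
        total = sum(((state[k] - base_state[k]).float() ** 2).sum().item()
                    for k in base_state)
        changes.append(total)
    norm = sum(changes)
    return [change / norm for change in changes]
```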
<p align="center">
<img src="./assets/fig_1.png" width="95%"> <br>
</p>
## Model Release
We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
To support plug-and-play fusion of new source LLMs, we release our target LLMs: [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pairwise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs.
We also release FuseChat variants built with other merging methods: [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieve average performances of **8.19** and **8.20** on MT-Bench, respectively.
Here are the evaluation results.
<p align="center">
<img src="./assets/tab_1.png" width="95%"> <br>
</p>
## Quick Start
### Setup
This project uses `python 3.11`.
Install all the libraries listed in `requirements.txt`:
```bash
pip install -r requirements.txt
```
### Usage
Here's how you can run the model using the 🤗 Transformers library:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
## Data Construction
We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills.
Here we show the scripts to obtain representations from multiple source LLMs for model fusion.
1. Get representations for each source LLM
```bash
# We split the dataset into 4 splits, then process each split on one or multiple GPUs.
# OpenChat-3.5-7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
--model_name_or_path "openchat/openchat_3.5" \
--data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
--dataset_save_dir "<${i}_4_path_to_openchat_representation>" \
--tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \
--cache_dir "/.cache/huggingface/datasets" \
--model_max_length 2048 \
--load_in_half bf16 \
--batch_size 32 \
--top_k_logits 10 \
--save_per_token_metric \
--no_assert \
--conv_temp "openchat" \
--flash_attn_transformers \
--mask_instruction \
--dataset_split_num 4 \
--dataset_index ${i}
done
# NH2-Mixtral-8x7B
export CUDA_VISIBLE_DEVICES=0,1,2
for i in {0..3}; do
python /train/get_data_representation.py \
--model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \
--data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
--dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \
--tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \
--cache_dir "/.cache/huggingface/datasets" \
--model_max_length 2048 \
--load_in_half bf16 \
--batch_size 4 \
--top_k_logits 10 \
--save_per_token_metric \
--no_assert \
--conv_temp "openchat" \
--flash_attn_transformers \
--mask_instruction \
--device_map "auto" \
--dataset_split_num 4 \
--dataset_index ${i}
done
# NH2-Solar-10.7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
--model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \
--data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
--dataset_save_dir "<${i}_4_path_to_solar_representation>" \
--tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \
--cache_dir "/.cache/huggingface/datasets" \
--model_max_length 2048 \
--load_in_half bf16 \
--batch_size 8 \
--top_k_logits 10 \
--save_per_token_metric \
--no_assert \
--conv_temp "openchat" \
--flash_attn_transformers \
--mask_instruction \
--dataset_split_num 4 \
--dataset_index ${i}
done
```
2. Align representations from different source LLMs
```bash
# Since the tokenizers and vocabularies of these source LLMs are identical, no token alignment is needed.
# OpenChat-3.5-7B <-> NH2-Mixtral-8x7B
for i in {0..3}; do
python /train/replace_model.py \
--dataset_dir "<${i}_4_path_to_openchat_representation>" \
--replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \
--dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
--preprocessing_num_workers 64 \
--batch_size 1000 \
--replace_model model_0
done
# OpenChat-3.5-7B <-> NH2-Solar-10.7B
for i in {0..3}; do
python /train/replace_model.py \
--dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
--replace_dataset_dir "<${i}_4_path_to_solar_representation>" \
--dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
--preprocessing_num_workers 64 \
--batch_size 1000 \
--replace_model model_1
done
```
3. Filter instances with NaN loss in the dataset
```bash
for i in {0..3}; do
python /train/filter_nan.py \
--input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
--output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>"
done
```
The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`.
## Pairwise Knowledge Fusion
We show the scripts for pairwise knowledge fusion.
```bash
# OpenChat-3.5-7B <-> NH2-Mixtral-8x7B
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \
--model_name_or_path "openchat/openchat_3.5" \
--data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \
--bf16 True \
--output_dir "<path_to_save_openchat_mixtral_ckpt>" \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "epoch" \
--save_steps 10000 \
--save_total_limit 5 \
--learning_rate 5e-6 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--conv_temp "openchat" \
--lazy_preprocess True \
--flash_attn_transformers True \
--do_train \
--do_distill \
--distill_with_ref_model True \
--distill_with_aligned_model_0 True \
--distill_with_aligned_model_1 False \
--distill_loss_type "ce" \
--distill_teacher_temperature 1.0 \
--lm_loss_weight 0.9 \
--distill_greater_as_gt True \
--distill_greater_as_gt_type hard \
--dataloader_num_workers 8 \
--remove_unused_columns False
# OpenChat-3.5-7B <-> NH2-Solar-10.7B
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \
--model_name_or_path "openchat/openchat_3.5" \
--data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \
--bf16 True \
--output_dir "<path_to_save_openchat_solar_ckpt>" \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "epoch" \
--save_steps 10000 \
--save_total_limit 5 \
--learning_rate 5e-6 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--conv_temp "openchat" \
--lazy_preprocess True \
--flash_attn_transformers True \
--do_train \
--do_distill \
--distill_with_ref_model True \
--distill_with_aligned_model_0 False \
--distill_with_aligned_model_1 True \
--distill_loss_type "ce" \
--distill_teacher_temperature 1.0 \
--lm_loss_weight 0.9 \
--distill_greater_as_gt True \
--distill_greater_as_gt_type hard \
--dataloader_num_workers 8 \
--remove_unused_columns False
```
## Model Merging
We show the scripts to obtain the final FuseChat using different merging methods.
```bash
# For "slerp", "ta", "ties", and "dare" methods (Please install "mergekit")
export CUDA_VISIBLE_DEVICES=0
mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"
# For "linear" method
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
--merge_method "linear" \
--linear_weights "1,2"
# For our "varm" method
python merge/VaRM/analysis.py \
--model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
--model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
--save_path "<path_to_save_analysis_result>/analysis.json" \
--merge_type "square"
python merge/VaRM/merge.py \
--merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
--analysis_result "<path_to_save_analysis_result>/analysis.json" \
--merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
--merge_method "avg_param" \
--merge_type "square"
```
## Evaluation
We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow the guidelines for evaluation. We provide the scripts for our evaluation.
```bash
# Step 1. Generate model answers to MT-bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
--model-path "FuseAI/FuseChat-7B-VaRM" \
--model-id "openchat_3.5_fusechat_7b_varm" \
--num-gpus-per-model 1 \
--num-gpus-total 2
# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX # set the OpenAI API key
python gen_judgment.py \
--parallel 2
# Step 3. Show MT-bench scores
python show_result.py
```
## Citation
If you find this work relevant to your research or applications, please cite it!
```
@article{wan2024fusechat,
title={FuseChat: Knowledge Fusion of Chat Models},
author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
journal={arXiv preprint arXiv:2402.16107},
year={2024}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FuseAI__FuseChat-7B-VaRM)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.52|
|AI2 Reasoning Challenge (25-Shot)|62.88|
|HellaSwag (10-Shot) |84.25|
|MMLU (5-Shot) |63.71|
|TruthfulQA (0-shot) |45.67|
|Winogrande (5-shot) |79.16|
|GSM8k (5-shot) |63.46|
|
hobab185/noora | hobab185 | "2022-08-24T08:05:00Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-08-24T03:36:24Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-persian4-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-persian4-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) (the fine-tuning dataset is not specified in this card).
It achieves the following results on the evaluation set:
- Loss: 0.4691
- Wer: 0.5166
## Model description
More information needed
## Intended uses & limitations
More information needed
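In the absence of official usage code, here is a minimal, hypothetical inference sketch (assuming this repository ships a processor and that the input audio is 16 kHz mono):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("hobab185/noora")
model = Wav2Vec2ForCTC.from_pretrained("hobab185/noora")

speech, _ = librosa.load("sample.wav", sr=16_000)  # load and resample to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```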
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2175 | 1.0 | 400 | 3.0445 | 1.0 |
| 1.6533 | 2.0 | 800 | 1.3703 | 0.9333 |
| 0.7507 | 3.0 | 1200 | 0.6387 | 0.6474 |
| 0.5435 | 4.0 | 1600 | 0.5102 | 0.5506 |
| 0.5017 | 5.0 | 2000 | 0.4691 | 0.5166 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.4.1.dev0
- Tokenizers 0.12.1
|
thliang01/fireworks-sdxl-dora | thliang01 | "2024-07-29T04:18:20Z" | 6 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-07-29T03:14:32Z" | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a SKS icon of an astronaut riding a horse, in the style of SKS'
output:
url:
"image_0.png"
- text: 'a SKS icon of an astronaut riding a horse, in the style of SKS'
output:
url:
"image_1.png"
- text: 'a SKS icon of an astronaut riding a horse, in the style of SKS'
output:
url:
"image_2.png"
- text: 'a SKS icon of an astronaut riding a horse, in the style of SKS'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Fireworks in the style of SKS
license: openrail++
---
# SDXL LoRA DreamBooth - thliang01/fireworks-sdxl-dora
<Gallery />
## Model description
### These are thliang01/fireworks-sdxl-dora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`fireworks-sdxl-dora.safetensors` here 💾](/thliang01/fireworks-sdxl-dora/blob/main/fireworks-sdxl-dora.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:fireworks-sdxl-dora:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`fireworks-sdxl-dora_emb.safetensors` here 💾](/thliang01/fireworks-sdxl-dora/blob/main/fireworks-sdxl-dora_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `fireworks-sdxl-dora_emb` to your prompt. For example, `Fireworks in the style of SKS`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thliang01/fireworks-sdxl-dora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='thliang01/fireworks-sdxl-dora', filename='fireworks-sdxl-dora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a SKS icon of an astronaut riding a horse, in the style of SKS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/thliang01/fireworks-sdxl-dora/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
SEBIS/legal_t5_small_trans_de_sv_small_finetuned | SEBIS | "2021-06-23T09:33:24Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation Deustch Swedish model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:04Z" |
---
language: Deustch Swedish
tags:
- translation Deustch Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
---
# legal_t5_small_trans_de_sv_small_finetuned model
Model for translating legal text from German (Deutsch) to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_de_sv_small_finetuned is initially pretrained on an unsupervised task using all of the data from the training set. The unsupervised task was masked language modelling. legal_t5_small_trans_de_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of T5 down by using `d_model = 512`, `d_ff = 2048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for translation of legal texts from German to Swedish.
### How to use
Here is how to use this model to translate legal text from German to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Die Finanzkrise hat schonungslos offenbart, wo die Mängel in den Überwachungsverfahren der EU liegen, die eine wirksame Vorbeugung von Verstößen gegen die Haushaltsdisziplin, ausufernden Haushaltsdefiziten der Mitgliedstaaten, Ungleichgewichten im Handel und Unterschieden in der Wettbewerbsfähigkeit gewährleisten sollen."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_sv_small_finetuned model (its supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte-pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_sv_small_finetuned | 41.365|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
IlyaGusev/mbart_ru_sum_gazeta | IlyaGusev | "2023-03-16T22:41:26Z" | 11,243 | 60 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"arxiv:2006.11063",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:04Z" | ---
language:
- ru
tags:
- summarization
- mbart
datasets:
- IlyaGusev/gazeta
license: apache-2.0
inference:
parameters:
no_repeat_ngram_size: 4
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
example_title: "Википедия"
- text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций. У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ. Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно. Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней. При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю. Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать. Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство. В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки. Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей. Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены. По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной. В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года. Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин. Он прогнозирует, что во втором полугодии мы увидим рост показателя, когда суды рассмотрят все дела, что не смогли ранее в режиме ограничений. По его данным, уже в июне число личных банкротств выросло до 11,5 тыс., что в два раза превышает показатель аналогичного периода 2019 года."
example_title: "Новости"
- text: "Актуальность проблемы. Электронная информация играет все большую роль во всех сферах жизни современного общества. В последние годы объем научно-технической текстовой информации в электронном виде возрос настолько, что возникает угроза обесценивания этой информации в связи с трудностями поиска необходимых сведений среди множества доступных текстов. Развитие информационных ресурсов Интернет многократно усугубило проблему информационной перегрузки. В этой ситуации особенно актуальными становятся методы автоматизации реферирования текстовой информации, то есть методы получения сжатого представления текстовых документов–рефератов (аннотаций). Постановка проблемы автоматического реферирования текста и соответственно попытки ее решения с использованием различных подходов предпринимались многими исследователями. История применения вычислительной техники для реферирования насчитывает уже более 50 лет и связана с именами таких исследователей, как Г.П. Лун, В.Е. Берзон, И.П. Cевбо, Э.Ф. Скороходько, Д.Г. Лахути, Р.Г. Пиотровский и др. За эти годы выработаны многочисленные подходы к решению данной проблемы, которые достаточно четко подразделяются на два направления: автоматическое реферирование, основанное на экстрагировании из первичных документов с помощью определенных формальных признаков «наиболее информативных» фраз (фрагментов), совокупность которых образует некоторый экстракт; автоматическое реферирование, основанное на выделении из текстов с помощью специальных информационных языков наиболее существенной информации и порождении новых текстов (рефератов), содержательно обобщающих первичные документы."
example_title: "Научная статья"
---
# MBARTRuSumGazeta
## Model description
This is a ported version of the [fairseq model](https://www.dropbox.com/s/fijtntnifbt9h0k/gazeta_mbart_v2_fairseq.tar.gz).
For more details, please see [Dataset for Automatic Summarization of Russian News](https://arxiv.org/abs/2006.11063).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1wdo_nPZPk6dWAn1J8nGx4Z5Ef82jCCob)
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
model_name = "IlyaGusev/mbart_ru_sum_gazeta"
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
#### Limitations and bias
- The model should work well with Gazeta.ru articles, but for any other agencies it can suffer from domain shift
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Fairseq training script: [train.sh](https://github.com/IlyaGusev/summarus/blob/master/external/bart_scripts/train.sh)
- Porting: [Colab link](https://colab.research.google.com/drive/13jXOlCpArV-lm4jZQ0VgOpj6nFBYrLAr)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Predicting all summaries:
```python
import json
import torch
from transformers import MBartTokenizer, MBartForConditionalGeneration
from datasets import load_dataset
def gen_batch(inputs, batch_size):
    batch_start = 0
    while batch_start < len(inputs):
        yield inputs[batch_start: batch_start + batch_size]
        batch_start += batch_size

def predict(
    model_name,
    input_records,
    output_file,
    max_source_tokens_count=600,
    batch_size=4
):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = MBartTokenizer.from_pretrained(model_name)
    model = MBartForConditionalGeneration.from_pretrained(model_name).to(device)
    predictions = []
    for batch in gen_batch(input_records, batch_size):
        # Tokenize the article texts of this batch.
        texts = [r["text"] for r in batch]
        input_ids = tokenizer(
            texts,
            return_tensors="pt",
            padding="max_length",
            truncation=True,
            max_length=max_source_tokens_count
        )["input_ids"].to(device)
        output_ids = model.generate(
            input_ids=input_ids,
            no_repeat_ngram_size=4
        )
        summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
        for s in summaries:
            print(s)
        predictions.extend(summaries)
    with open(output_file, "w") as w:
        for p in predictions:
            w.write(p.strip().replace("\n", " ") + "\n")
gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"]
predict("IlyaGusev/mbart_ru_sum_gazeta", list(gazeta_test), "mbart_predictions.txt")
```
Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py
Flags: --language ru --tokenize-after --lower
### BibTeX entry and citation info
```bibtex
@InProceedings{10.1007/978-3-030-59082-6_9,
author="Gusev, Ilya",
editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia",
title="Dataset for Automatic Summarization of Russian News",
booktitle="Artificial Intelligence and Natural Language",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="122--134",
isbn="978-3-030-59082-6"
}
```
|
gtalasso/bert_dataset_classifier | gtalasso | "2025-03-19T20:10:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-19T20:10:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
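Since the card does not yet provide code, here is a minimal, hypothetical sketch (the task and repository id are taken from this card's metadata):
```python
from transformers import pipeline

# Hypothetical usage: load the classifier through the standard pipeline API.
classifier = pipeline("text-classification", model="gtalasso/bert_dataset_classifier")
print(classifier("An example input sentence."))
```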
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
skyprolk/iPhone-Wallpaper-Style | skyprolk | "2023-11-15T18:56:51Z" | 31 | 1 | diffusers | [
"diffusers",
"art",
"text-to-image",
"stable-diffusion",
"lora",
"style",
"iphone-wallpaper",
"en",
"dataset:skyprolk/iPhone-Wallpapers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:gpl-3.0",
"region:us"
] | text-to-image | "2023-09-29T17:37:30Z" | ---
license: gpl-3.0
datasets:
- skyprolk/iPhone-Wallpapers
tags:
- art
- text-to-image
- stable-diffusion
- lora
- diffusers
- style
- iphone-wallpaper
language:
- en
pipeline_tag: text-to-image
base_model: runwayml/stable-diffusion-v1-5
widget:
- text: "The sky is water, ultra hd, realistic, vivid colors, highly detailed, UHD drawing, pen and ink, perfect composition, beautiful detailed intricate insanely detailed octane render trending on artstation, 8k artistic photography, photorealistic concept art, soft natural volumetric cinematic perfect light"
output:
url: sample/sample-1.jfif
- text: "It reminds me of when I'm happy or memorable, November, unreal engine, greg rutkowski, loish, rhads, beeple, makoto shinkai and lois van baarle, ilya kuvshinov, rossdraws, tom bagshaw, alphonse mucha, global illumination, detailed and intricate environment"
output:
url: sample/sample-2.jfif
- text: "Land of a thousand autumns, fractal trees, penjing, Miki Asai Macro photography, close-up, hyper detailed, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski"
output:
url: sample/sample-3.jfif
- text: "stunning landscape , magical lighting, distant worlds, fantasy, other worlds, highly detailed, sharp focus, dark colourful, centered, symmetry, painted, intricate, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed, in the style of dan mumford and marc simonetti, astrophotography, perfect composition, beautiful detailed intricate insanely detailed octane render trending on artstation, 8 k artistic photography, photorealistic concept art, soft natural volumetric cinematic perfect light, chiaroscuro, award - winning photograph, masterpiece, oil on canvas, raphael, caravaggio, greg rutkowski, beeple, beksinski, giger"
output:
url: sample/sample-4.jfif
- text: "fantasy mosque, trees, flowers, centered, symmetry, painted, intricate, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed, in the style of dan mumford and marc simonetti, astrophotography"
output:
url: sample/sample-5.jfif
- text: "beautiful flower textures colour full, acrylic painting, trending on pixiv fanbox, palette knife and brush strokes, style of makoto shinkai jamie wyeth james gilleard edward hopper greg rutkowski studio ghibli genshin impact"
output:
url: sample/sample-6.jfif
- text: "Looking from the side of a gentle babbling brook running through a forest in autumn"
output:
url: sample/sample-7.jfif
- text: "8K, watercolor, Watercolor, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski"
output:
url: sample/sample-8.jfif
- text: "cosmic night by Berajah76 , ultra hd, realistic, vivid colors, highly detailed, UHD drawing, pen and ink, perfect composition, beautiful detailed intricate insanely detailed octane render trending on artstation, 8k artistic photography, photorealistic concept art, soft natural volumetric cinematic perfect light"
output:
url: sample/sample-9.jfif
- text: "scenery,Beautiful, golden ratio, fake detail, trending pixiv fanbox, acrylic palette knife, style of makoto shinkai studio ghibli genshin impact james gilleard greg rutkowski chiho aoshima"
output:
url: sample/sample-10.jfif
---
## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is designed for applying stylish filters and aesthetic enhancements to your images. It can transform your photos to have a style reminiscent of iPhone wallpapers, giving your images a unique and eye-catching appearance.
- **Developed by:** SKY PRODUCTION
- **Shared by:** KNOIT
- **Model type:** LOCON (LORA)
- **Finetuned from model:** STABLE DIFFUSION 1.5
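A minimal usage sketch with 🧨 diffusers (not from the original card; it assumes the repository's weights are in a diffusers-loadable LoRA format):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Attach the style LoRA released in this repository.
pipe.load_lora_weights("skyprolk/iPhone-Wallpaper-Style")
image = pipe("stunning landscape, magical lighting, vivid colors").images[0]
image.save("wallpaper.png")
```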
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Dataset used for training: https://huggingface.co/datasets/skyprolk/iPhone-Wallpapers
## Disclaimer
- This model is for artistic and aesthetic purposes and may not be suitable for all images or use cases.
- The performance of the style transfer may vary depending on the input image and the desired style.
- Use this model responsibly and respect copyright and licensing agreements when applying styles to images.
## Feedback and Contributions
The model's author, SKY PRODUCTION, welcomes feedback and contributions to improve the model.
# Have Fun Styling Your Images!
We hope you enjoy using the iPhone-Wallpaper-Style model to add a unique touch to your images. If you have any questions or need further assistance, please don't hesitate to reach out to the model's author or the community.
|  |  |  | 
|:----------------------:|:----------------:|:----------------------:|:----------------:|
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | "2023-10-17T23:23:37Z" | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-13T18:11:22Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
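A minimal, hypothetical usage sketch with Flair (not part of the original card; the repository id is taken from this page's header):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub and tag a sentence.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
)
sentence = Sentence("Nous recevons le premier numéro d'un nouveau journal.")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```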
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
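As a hedged usage sketch (assuming the uploaded checkpoint loads directly with Flair's `SequenceTagger`):
```python
# Hedged sketch: tag a historic French sentence with this fine-tuned model.
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the sequence tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Nous recevons le premier numéro d'un nouveau journal , le Radical-Libéral , qui paraîtra à Genève .")
tagger.predict(sentence)

# Print the detected entities (loc, org, pers, prod, time, comp)
for entity in sentence.get_spans("ner"):
    print(entity)
```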
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
zyxciss/FUSION-ALE | zyxciss | "2025-03-02T08:59:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"fusion-ale",
"text-generation",
"thinking",
"reasoning",
"en",
"hi",
"ur",
"ar",
"bn",
"zh",
"fr",
"es",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T11:24:58Z" | ---
license: mit
library_name: transformers
language:
- en
- hi
- ur
- ar
- bn
- zh
- fr
- es
metrics:
- accuracy
parameters:
- 975B
tags:
- thinking
- reasoning
pipeline_tag: text-generation
---
# Fusion-T1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://i.ibb.co/ncVZWRL/FUSION-zip-2-removebg-preview.png" width="30%" alt="Fusion-T1" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.fusion.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/ Fusion-ai/ Fusion-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat. Fusion.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat- Fusion%20T1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/ Fusion-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face- Fusion%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord- Fusion%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ Fusion-ai/ Fusion-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat- Fusion%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/ Fusion_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter- Fusion_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/ Fusion-ai/ Fusion-T1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ Fusion-ai/ Fusion-T1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/ Fusion-ai/ Fusion-T1/blob/main/ Fusion_T1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, Fusion-T1-Zero and Fusion-T1.
Fusion-T1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning tasks.
Through RL, Fusion-T1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
However, Fusion-T1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce Fusion-T1, which incorporates cold-start data before RL.
Fusion-T1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced Fusion-T1-Zero, Fusion-T1, and six dense models distilled from Fusion-T1 based on Llama and Qwen. Fusion-T1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="https://i.ibb.co/Bw8N1VY/image.png">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of Fusion-T1-Zero. Fusion-T1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop Fusion-T1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source Fusion-T1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by Fusion-T1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### Fusion-T1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| Fusion-T1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Zero) |
| Fusion-T1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1) |
</div>
Fusion-T1-Zero & Fusion-T1 are trained based on Fusion-V3-Base.
For more details regarding the model architecture, please refer to the [Fusion-V3](https://github.com/Fusion-ai/Fusion-V3) repository.
### Fusion-T1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| Fusion-T1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Qwen-1.5B) |
| Fusion-T1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Qwen-7B) |
| Fusion-T1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Llama-8B) |
| Fusion-T1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Qwen-14B) |
| Fusion-T1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Qwen-32B) |
| Fusion-T1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/Fusion-ai/Fusion-T1-Distill-Llama-70B) |
</div>
Fusion-T1-Distill models are fine-tuned based on open-source models, using samples generated by Fusion-T1.
We slightly changed their configs and tokenizers; please use our settings when running these models.
## 4. Evaluation Results
### Fusion-T1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
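As an illustrative sketch (not the official evaluation code), pass@1 under this sampling protocol is simply the average per-sample correctness over the 64 generations per query:
```python
# Illustrative sketch of the pass@1 estimate described above: sample k
# responses per query and average the per-sample correctness.
def estimate_pass_at_1(per_query_correct: list[list[bool]]) -> float:
    """per_query_correct[i] holds the correctness of the k samples for query i."""
    per_query = [sum(samples) / len(samples) for samples in per_query_correct]
    return sum(per_query) / len(per_query)

# Toy example with 2 queries and 4 samples each (k = 64 in the actual protocol)
print(estimate_pass_at_1([[True, True, False, True], [False, False, True, False]]))  # 0.5
```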
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | Fusion V3 | OpenAI o1-mini | OpenAI o1-1217 | Fusion T1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| Fusion-T1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| Fusion-T1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| Fusion-T1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| Fusion-T1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| Fusion-T1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| Fusion-T1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with Fusion-T1 on Fusion's official website: [chat.Fusion.com](https://chat.Fusion.com), and switch on the "DeepThink" button.
We also provide an OpenAI-compatible API at Fusion Platform: [platform.Fusion.com](https://platform.Fusion.com/)
## 6. How to Run Locally
### Fusion-T1 Models
Please visit the [Fusion-V3](https://github.com/Fusion-ai/Fusion-V3) repo for more information about running Fusion-T1 locally.
### Fusion-T1-Distill Models
Fusion-T1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve Fusion-ai/Fusion-T1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
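As a hedged sketch, the resulting endpoint can then be queried with the OpenAI-compatible client, staying inside the recommended temperature range (the `base_url` assumes vLLM's default port 8000):
```python
# Hedged sketch: query the vLLM server started above via its
# OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Fusion-ai/Fusion-T1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Reason step by step: what is 17 * 24?"}],
    temperature=0.6,  # recommended range: 0.5-0.7
    max_tokens=2048,
)
print(response.choices[0].message.content)
```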
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/Fusion-ai/Fusion-T1/blob/main/LICENSE).
Fusion-T1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- Fusion-T1-Distill-Qwen-1.5B, Fusion-T1-Distill-Qwen-7B, Fusion-T1-Distill-Qwen-14B and Fusion-T1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with Fusion-T1.
- Fusion-T1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- Fusion-T1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]). |
samitizerxu/large-algae-vit-rgb | samitizerxu | "2023-02-17T09:31:17Z" | 27 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-02-16T23:57:45Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: large-algae-vit-rgb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# large-algae-vit-rgb
This model is a fine-tuned version of [samitizerxu/large-algae-vit-rgb](https://huggingface.co/samitizerxu/large-algae-vit-rgb) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1659
- Accuracy: 0.5798
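As a hedged usage sketch (the image path is a placeholder, and the predicted labels are whatever this checkpoint's config defines):
```python
# Hedged sketch: run the fine-tuned ViT classifier on a single RGB image.
from transformers import pipeline

classifier = pipeline("image-classification", model="samitizerxu/large-algae-vit-rgb")

# "sample.jpg" is a placeholder path -- point it at your own image.
for prediction in classifier("sample.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```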
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
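As a hedged sketch, the same configuration expressed with `transformers.TrainingArguments` (the `output_dir` is a placeholder):
```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="large-algae-vit-rgb",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,     # total train batch size: 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
)
```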
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2115 | 1.0 | 120 | 0.9078 | 0.6315 |
| 1.1249 | 2.0 | 240 | 0.9217 | 0.6320 |
| 1.1385 | 3.0 | 360 | 0.9518 | 0.6180 |
| 1.1347 | 4.0 | 480 | 1.0201 | 0.6068 |
| 1.1358 | 5.0 | 600 | 1.0801 | 0.5892 |
| 1.098 | 6.0 | 720 | 1.0932 | 0.5851 |
| 1.0882 | 7.0 | 840 | 1.0347 | 0.6033 |
| 1.0688 | 8.0 | 960 | 1.0403 | 0.6056 |
| 1.0863 | 9.0 | 1080 | 1.0466 | 0.6009 |
| 1.1253 | 10.0 | 1200 | 1.2308 | 0.5511 |
| 1.0393 | 11.0 | 1320 | 1.1434 | 0.5869 |
| 1.0749 | 12.0 | 1440 | 1.2155 | 0.5622 |
| 1.0433 | 13.0 | 1560 | 1.2466 | 0.5522 |
| 1.0141 | 14.0 | 1680 | 1.1880 | 0.5563 |
| 1.0516 | 15.0 | 1800 | 1.1006 | 0.5992 |
| 1.0696 | 16.0 | 1920 | 1.0971 | 0.5751 |
| 0.9867 | 17.0 | 2040 | 1.1689 | 0.5827 |
| 1.0234 | 18.0 | 2160 | 1.1846 | 0.5751 |
| 1.0364 | 19.0 | 2280 | 1.1480 | 0.5739 |
| 1.0314 | 20.0 | 2400 | 1.0977 | 0.5880 |
| 1.0179 | 21.0 | 2520 | 1.1258 | 0.5851 |
| 1.0584 | 22.0 | 2640 | 1.1569 | 0.5822 |
| 1.0222 | 23.0 | 2760 | 1.1672 | 0.5839 |
| 0.996 | 24.0 | 2880 | 1.1737 | 0.5798 |
| 1.0343 | 25.0 | 3000 | 1.1588 | 0.5792 |
| 0.9854 | 26.0 | 3120 | 1.1758 | 0.5763 |
| 0.9753 | 27.0 | 3240 | 1.1715 | 0.5763 |
| 0.9881 | 28.0 | 3360 | 1.1403 | 0.5839 |
| 1.0057 | 29.0 | 3480 | 1.1765 | 0.5781 |
| 0.9824 | 30.0 | 3600 | 1.1659 | 0.5798 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
tomaarsen/mpnet-base-nq-cgist-triplet-3-gte | tomaarsen | "2024-11-20T13:48:13Z" | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:50000",
"loss:CachedGISTEmbedLoss",
"en",
"dataset:tomaarsen/gooaq-hard-negatives",
"arxiv:1908.10084",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-11-20T13:47:46Z" | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:50000
- loss:CachedGISTEmbedLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: what does the accounts receivable turnover measure?
sentences:
- The accounts receivable turnover ratio is an accounting measure used to quantify
a company's effectiveness in collecting its receivables or money owed by clients.
The ratio shows how well a company uses and manages the credit it extends to customers
and how quickly that short-term debt is collected or is paid.
- Capital budgeting, and investment appraisal, is the planning process used to determine
whether an organization's long term investments such as new machinery, replacement
of machinery, new plants, new products, and research development projects are
worth the funding of cash through the firm's capitalization structure ( ...
- The accounts receivable turnover ratio is an accounting measure used to quantify
a company's effectiveness in collecting its receivables or money owed by clients.
The ratio shows how well a company uses and manages the credit it extends to customers
and how quickly that short-term debt is collected or is paid.
- source_sentence: does gabapentin cause liver problems?
sentences:
- Gabapentin has no appreciable liver metabolism, yet, suspected cases of gabapentin-induced
hepatotoxicity have been reported. Per literature review, two cases of possible
gabapentin-induced liver injury have been reported.
- Strongholds are a type of story mission which only unlocks after enough progression
through the game. There are three Stronghold's during the first section of progression
through The Division 2. You'll need to complete the first two and have reached
level 30 before being able to unlock the final Stronghold.
- The most-common side effects attributed to Gabapentin include mild sedation, ataxia,
and occasional diarrhea. Sedation can be minimized by tapering from a smaller
starting dose to the desired dose. When treating seizures, it is ideal to wean
off the drug to reduce the risk of withdrawal seizures.
- source_sentence: how long should you wait to give blood after eating?
sentences:
- Until the bleeding has stopped it is natural to taste blood or to see traces of
blood in your saliva. You may stop using gauze after the flow stops – usually
around 8 hours after surgery.
- Before donation The first and most important rule—never donate blood on an empty
stomach. “Eat a wholesome meal about 2-3 hours before donating to keep your blood
sugar stable," says Dr Chaturvedi. The timing of the meal is important too. You
need to allow the food to be digested properly before the blood is drawn.
- While grid computing involves virtualizing computing resources to store massive
amounts of data, whereas cloud computing is where an application doesn't access
resources directly, rather it accesses them through a service over the internet.
...
- source_sentence: what is the difference between chicken francese and chicken marsala?
sentences:
- Chicken is the species name, equivalent to our “human.” Rooster is an adult male,
equivalent to “man.” Hen is an adult female, equivalent to “woman.” Cockerel is
a juvenile male, equivalent to “boy/young man.”
- What is 99 kg in pounds? - 99 kg is equal to 218.26 pounds.
- The difference between the two is for Francese, the chicken breast is first dipped
in flour, then into a beaten egg mixture, before being cooked. For piccata, the
chicken is first dipped in egg and then in flour. Both are then simmered in a
lemony butter sauce, but the piccata sauce includes capers.”
- source_sentence: what energy is released when coal is burned?
sentences:
- When coal is burned, it reacts with the oxygen in the air. This chemical reaction
converts the stored solar energy into thermal energy, which is released as heat.
But it also produces carbon dioxide and methane.
- When coal is burned it releases a number of airborne toxins and pollutants. They
include mercury, lead, sulfur dioxide, nitrogen oxides, particulates, and various
other heavy metals.
- Squad Building Challenges allow you to exchange sets of players for coins, packs,
and special items in FUT 20. Each of these challenges come with specific requirements,
such as including players from certain teams. ... Live SBCs are time-limited challenges
which often give out unique, high-rated versions of players.
datasets:
- tomaarsen/gooaq-hard-negatives
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
co2_eq_emissions:
emissions: 40.86214567359107
energy_consumed: 0.1051246087583575
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.3
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: MPNet base trained on Natural Questions pairs
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.22
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09399999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09333333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.195
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2333333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.37233333333333335
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2744024872493329
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3594365079365079
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.20181676147957636
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.62
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.76
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.82
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.38666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.38799999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.344
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03065300183409328
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.07730098142643593
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.14588470319900892
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.22159653924772912
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3920743245484332
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.567
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.28153419189397744
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.58
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.37
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.52
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.57
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.66
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5156585003907987
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4756666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.47620972127897226
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.28
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16399999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09799999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1371904761904762
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3226904761904762
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3682142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.43073809523809525
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3420135901424927
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.38405555555555554
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2826394452885763
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.52
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.62
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09200000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.29
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.36
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.46
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3723049657456267
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4570793650793651
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2995175868330484
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.1
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.28
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10400000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.28
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.52
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.68
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.36083481845261806
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.26157142857142857
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.27215692684924997
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.26
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.38
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.44
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19599999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13799999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.01122167476431692
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.02047531859468654
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.03079316493603994
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.0422192068561938
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.1654539374427929
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3367460317460317
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.04901233559063261
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.36
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.44
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11999999999999998
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08800000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06000000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.13
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.34
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.41
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.55
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.33223439819785083
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2734365079365079
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2764557370904448
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.82
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.96
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.82
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.244
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7206666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8553333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8993333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9566666666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8807317086981499
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8616666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8525831566094724
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.54
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.212
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.14800000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.07066666666666668
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.15366666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.21866666666666668
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.30466666666666664
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.28968259227673265
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4286349206349206
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.22985309744949503
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.18
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.56
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.62
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.84
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.18
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.124
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08399999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.56
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.62
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.84
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.49726259302609505
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.389079365079365
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3967117258845785
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10400000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.345
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.44
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.46
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.605
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.47012843706683605
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4409285714285714
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.43840522432574647
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5306122448979592
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7551020408163265
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9387755102040817
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5306122448979592
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.45578231292517
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4040816326530612
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.336734693877551
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03881638827876476
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10008002766114979
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.13975964122053652
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.22966349775526734
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.39339080810676896
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6553206997084549
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.31344772891929434
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.3408163265306122
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5227001569858712
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6013186813186814
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7152904238618524
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3408163265306122
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23044479330193612
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1855447409733124
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13344113029827318
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18442678521033212
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.31958052337482684
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3827680868002465
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4886833850587655
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4066287047188099
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4531247913084647
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.33618027996100497
name: Cosine Map@100
---
# MPNet base trained on Natural Questions pairs
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")
# Run inference
sentences = [
'what energy is released when coal is burned?',
'When coal is burned, it reacts with the oxygen in the air. This chemical reaction converts the stored solar energy into thermal energy, which is released as heat. But it also produces carbon dioxide and methane.',
'When coal is burned it releases a number of airborne toxins and pollutants. They include mercury, lead, sulfur dioxide, nitrogen oxides, particulates, and various other heavy metals.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.1 | 0.26 | 0.14 | 0.82 | 0.34 | 0.18 | 0.38 | 0.5306 |
| cosine_accuracy@3 | 0.44 | 0.62 | 0.54 | 0.5 | 0.52 | 0.28 | 0.38 | 0.36 | 0.9 | 0.48 | 0.56 | 0.46 | 0.7551 |
| cosine_accuracy@5 | 0.52 | 0.76 | 0.58 | 0.52 | 0.62 | 0.52 | 0.44 | 0.44 | 0.92 | 0.54 | 0.62 | 0.48 | 0.8571 |
| cosine_accuracy@10 | 0.72 | 0.82 | 0.68 | 0.58 | 0.72 | 0.68 | 0.5 | 0.58 | 0.96 | 0.66 | 0.84 | 0.62 | 0.9388 |
| cosine_precision@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.1 | 0.26 | 0.14 | 0.82 | 0.34 | 0.18 | 0.38 | 0.5306 |
| cosine_precision@3 | 0.1667 | 0.3867 | 0.18 | 0.22 | 0.1933 | 0.0933 | 0.2133 | 0.12 | 0.3667 | 0.2467 | 0.1867 | 0.1667 | 0.4558 |
| cosine_precision@5 | 0.12 | 0.388 | 0.12 | 0.164 | 0.144 | 0.104 | 0.196 | 0.088 | 0.244 | 0.212 | 0.124 | 0.104 | 0.4041 |
| cosine_precision@10 | 0.094 | 0.344 | 0.07 | 0.098 | 0.092 | 0.068 | 0.138 | 0.06 | 0.134 | 0.148 | 0.084 | 0.068 | 0.3367 |
| cosine_recall@1 | 0.0933 | 0.0307 | 0.37 | 0.1372 | 0.17 | 0.1 | 0.0112 | 0.13 | 0.7207 | 0.0707 | 0.18 | 0.345 | 0.0388 |
| cosine_recall@3 | 0.195 | 0.0773 | 0.52 | 0.3227 | 0.29 | 0.28 | 0.0205 | 0.34 | 0.8553 | 0.1537 | 0.56 | 0.44 | 0.1001 |
| cosine_recall@5 | 0.2333 | 0.1459 | 0.57 | 0.3682 | 0.36 | 0.52 | 0.0308 | 0.41 | 0.8993 | 0.2187 | 0.62 | 0.46 | 0.1398 |
| cosine_recall@10 | 0.3723 | 0.2216 | 0.66 | 0.4307 | 0.46 | 0.68 | 0.0422 | 0.55 | 0.9567 | 0.3047 | 0.84 | 0.605 | 0.2297 |
| **cosine_ndcg@10** | **0.2744** | **0.3921** | **0.5157** | **0.342** | **0.3723** | **0.3608** | **0.1655** | **0.3322** | **0.8807** | **0.2897** | **0.4973** | **0.4701** | **0.3934** |
| cosine_mrr@10 | 0.3594 | 0.567 | 0.4757 | 0.3841 | 0.4571 | 0.2616 | 0.3367 | 0.2734 | 0.8617 | 0.4286 | 0.3891 | 0.4409 | 0.6553 |
| cosine_map@100 | 0.2018 | 0.2815 | 0.4762 | 0.2826 | 0.2995 | 0.2722 | 0.049 | 0.2765 | 0.8526 | 0.2299 | 0.3967 | 0.4384 | 0.3134 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3408 |
| cosine_accuracy@3 | 0.5227 |
| cosine_accuracy@5 | 0.6013 |
| cosine_accuracy@10 | 0.7153 |
| cosine_precision@1 | 0.3408 |
| cosine_precision@3 | 0.2304 |
| cosine_precision@5 | 0.1855 |
| cosine_precision@10 | 0.1334 |
| cosine_recall@1 | 0.1844 |
| cosine_recall@3 | 0.3196 |
| cosine_recall@5 | 0.3828 |
| cosine_recall@10 | 0.4887 |
| **cosine_ndcg@10** | **0.4066** |
| cosine_mrr@10 | 0.4531 |
| cosine_map@100 | 0.3362 |
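As a hedged sketch, the NanoBEIR numbers above can be re-computed with the evaluator linked earlier (by default it covers all 13 Nano datasets; the exact result keys may vary slightly across `sentence-transformers` versions):
```python
# Hedged sketch: re-run the NanoBEIR evaluation reported in this card.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")
evaluator = NanoBEIREvaluator()  # defaults to all Nano datasets
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])  # key name is an assumption
```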
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### gooaq-hard-negatives
* Dataset: [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2)
* Size: 50,000 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.53 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 59.79 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.76 tokens</li><li>max: 143 tokens</li></ul> |
* Samples:
| question | answer | negative |
|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the difference between calories from fat and total fat?</code> | <code>Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories.</code> | <code>Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories.</code> |
| <code>what is the difference between return transcript and account transcript?</code> | <code>A tax return transcript usually meets the needs of lending institutions offering mortgages and student loans. ... Tax Account Transcript - shows basic data such as return type, marital status, adjusted gross income, taxable income and all payment types. It also shows changes made after you filed your original return.</code> | <code>Trial balance is not a financial statement whereas a balance sheet is a financial statement. Trial balance is solely used for internal purposes whereas a balance sheet is used for purposes other than internal i.e. external. In a trial balance, each and every account is divided into debit (dr.) and credit (cr.)</code> |
| <code>how long does my dog need to fast before sedation?</code> | <code>Now, guidelines are aimed towards 6-8 hours before surgery. This pre-op fasting time is much more beneficial for your pets because you have enough food in there to neutralize the stomach acid, preventing it from coming up the esophagus that causes regurgitation under anesthetic.</code> | <code>Try not to let your pooch rapidly wolf down his/her food! Do not let the dog play or exercise (e.g. go for a walk) for at least two hours after having a meal. Ensure continuous fresh water is available to avoid your pet gulping down a large amount after eating.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
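As a hedged sketch of how such a loss is typically constructed (the guide model name below is an assumption matching the 384-dimensional, 256-token guide architecture printed above):
```python
# Hedged sketch: build the CachedGISTEmbedLoss used for training.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("microsoft/mpnet-base")
# Assumed guide: a small 384-dim encoder with mean pooling and normalization.
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

loss = CachedGISTEmbedLoss(model, guide=guide, temperature=0.01)
```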
### Evaluation Dataset
#### gooaq-hard-negatives
* Dataset: [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2)
* Size: 10,048,700 evaluation samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.61 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 58.16 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 57.98 tokens</li><li>max: 157 tokens</li></ul> |
* Samples:
| question | answer | negative |
|:--------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how is height width and length written?</code> | <code>The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width.</code> | <code>The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width. That's important.</code> |
| <code>what is the difference between pork shoulder and loin?</code> | <code>All the recipes I've found for pulled pork recommends a shoulder/butt. Shoulders take longer to cook than a loin, because they're tougher. Loins are lean, while shoulders have marbled fat inside.</code> | <code>They are extracted from the loin, which runs from the hip to the shoulder, and it has a small strip of meat called the tenderloin. Unlike other pork, this pork chop is cut from four major sections, which are the shoulder, also known as the blade chops, ribs chops, loin chops, and the last, which is the sirloin chops.</code> |
| <code>is the yin yang symbol religious?</code> | <code>The ubiquitous yin-yang symbol holds its roots in Taoism/Daoism, a Chinese religion and philosophy. The yin, the dark swirl, is associated with shadows, femininity, and the trough of a wave; the yang, the light swirl, represents brightness, passion and growth.</code> | <code>Yin energy is in the calm colors around you, in the soft music, in the soothing sound of a water fountain, or the relaxing images of water. Yang (active energy) is the feng shui energy expressed in strong, vibrant sounds and colors, bright lights, upward moving energy, tall plants, etc.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
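For readers reproducing this setup, here is a minimal sketch of wiring up `CachedGISTEmbedLoss` in Sentence Transformers. The model names are illustrative assumptions; the `temperature` matches the 0.01 recorded above:
```python
from sentence_transformers import SentenceTransformer, losses

# Model under training and a small guide model; both names are assumptions.
model = SentenceTransformer("microsoft/mpnet-base")
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The guide model filters likely false negatives in-batch, and caching keeps
# memory usage flat even with the large (2048) batch sizes listed below.
loss = losses.CachedGISTEmbedLoss(model=model, guide=guide, temperature=0.01)
```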
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:-----:|:----:|:-------------:|:---------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.04 | 1 | 11.5141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 5 | 9.4407 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 10 | 5.6005 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 15 | 3.7323 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8 | 20 | 2.7976 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 25 | 2.1899 | 1.3429 | 0.2744 | 0.3921 | 0.5157 | 0.3420 | 0.3723 | 0.3608 | 0.1655 | 0.3322 | 0.8807 | 0.2897 | 0.4973 | 0.4701 | 0.3934 | 0.4066 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.105 kWh
- **Carbon Emitted**: 0.041 kg of CO2
- **Hours Used**: 0.3 hours
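As a hedged illustration of how such figures are typically collected, a CodeCarbon tracker wraps the training loop (this snippet is a sketch, not the exact training script):
```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run the training loop here ...
emissions_kg = tracker.stop()  # total emissions in kg CO2eq
print(f"Carbon emitted: {emissions_kg:.3f} kg")
```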
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
CMU-AIR2/math-phi-1-5-FULL-Curriculum-Subjects1to5-lr-1.1e-6-NotCodePrompt | CMU-AIR2 | "2024-05-21T05:04:09Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-21T04:55:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Severian/Mistral-Base-v0.2-Nexus-IKMv5-7B-LoRa | Severian | "2024-03-25T20:09:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-25T20:08:46Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** Severian
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
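A hedged loading sketch with Unsloth; the sequence length and 4-bit flag are assumptions, not values recorded in this card:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Severian/Mistral-Base-v0.2-Nexus-IKMv5-7B-LoRa",
    max_seq_length=4096,   # assumption
    load_in_4bit=True,     # assumption
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```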
|
huggingtweets/ancapkid | huggingtweets | "2021-05-21T18:53:43Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/ancapkid/1617897872455/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364764641633701889/wk_YVSbd_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">N.P.C. Lovecraft 🤖 AI Bot </div>
<div style="font-size: 15px">@ancapkid bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ancapkid's tweets](https://twitter.com/ancapkid).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2738 |
| Retweets | 166 |
| Short tweets | 589 |
| Tweets kept | 1983 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1to3139m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ancapkid's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27sth5f2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27sth5f2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ancapkid')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NoPattern/Rorschach | NoPattern | "2024-02-01T23:07:30Z" | 0 | 4 | null | [
"text-to-image",
"region:us"
] | text-to-image | "2024-01-18T04:50:35Z" | ---
pipeline_tag: text-to-image
---
**NoPattern Studio** is the creative practice of multidisciplinary artist, designer & creative director **Chuck Anderson**, established in 2004. Chuck is acclaimed for his use of surreal color and light, his innovative juxtapositions of traditional and digital mediums, and his constant experimentation in both his own art practice and client collaborations; he also co-founded the early internet culture blog THE BRILLIANCE! with Benjamin Edgar and Virgil Abloh in 2005.
Chuck released *CRASH REPORT*, a self-published book in 2019, containing a year's worth of experimental, exploratory 3D imagery generated entirely in Photoshop. The concept behind the book deals with our relationship to working creatively with imperfect technology and learning to embrace errors and interruptions.
This **NoPattern Model**, fine-tuned from a Stability AI base model, lets users take Chuck’s *CRASH REPORT* art and type in what they think they see in its shapes and colors: find the pattern you think you see in his work.
This digital Rorschach model then infers its own version of those inputs, modeled after the original *CRASH REPORT* artwork.
#### Model Details
Explore the model’s lineage [here](https://huggingface.co/spaces/EQTYLab/lineage-explorer)
#### License |
beast33/7e9a33e4-dfed-46c0-8f45-6919b81fa56d | beast33 | "2025-02-04T05:19:39Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T04:54:49Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e9a33e4-dfed-46c0-8f45-6919b81fa56d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6e4f6e948bc6471_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6e4f6e948bc6471_train_data.json
type:
field_input: topic
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: beast33/7e9a33e4-dfed-46c0-8f45-6919b81fa56d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e6e4f6e948bc6471_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69002658-908b-4f14-a9fb-64d08340747d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69002658-908b-4f14-a9fb-64d08340747d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7e9a33e4-dfed-46c0-8f45-6919b81fa56d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.6725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 140
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4011 | 1.0 | 140 | 0.6725 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Augusto777/SWv2-DMAE-H-4-rp-clean-fix-U-40-Cross-5 | Augusto777 | "2025-02-12T14:59:08Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"swinv2",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2025-02-12T14:11:51Z" | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: SWv2-DMAE-H-4-rp-clean-fix-U-40-Cross-5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8690476190476191
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWv2-DMAE-H-4-rp-clean-fix-U-40-Cross-5
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4646
- Accuracy: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6088 | 0.98 | 12 | 1.6063 | 0.2024 |
| 1.6024 | 1.96 | 24 | 1.5854 | 0.2024 |
| 1.568 | 2.94 | 36 | 1.5534 | 0.2024 |
| 1.5001 | 4.0 | 49 | 1.4693 | 0.2024 |
| 1.3811 | 4.98 | 61 | 1.3256 | 0.3690 |
| 1.27 | 5.96 | 73 | 1.0978 | 0.5476 |
| 1.0887 | 6.94 | 85 | 0.8237 | 0.7381 |
| 0.937 | 8.0 | 98 | 0.7746 | 0.7143 |
| 0.811 | 8.98 | 110 | 0.5772 | 0.7976 |
| 0.7574 | 9.96 | 122 | 0.6164 | 0.7857 |
| 0.7118 | 10.94 | 134 | 0.6410 | 0.7976 |
| 0.6374 | 12.0 | 147 | 0.5243 | 0.8095 |
| 0.5958 | 12.98 | 159 | 0.4589 | 0.8095 |
| 0.5446 | 13.96 | 171 | 0.5288 | 0.7738 |
| 0.5348 | 14.94 | 183 | 0.4989 | 0.7619 |
| 0.464 | 16.0 | 196 | 0.5408 | 0.7857 |
| 0.4641 | 16.98 | 208 | 0.4609 | 0.7738 |
| 0.4471 | 17.96 | 220 | 0.4229 | 0.8333 |
| 0.4301 | 18.94 | 232 | 0.3962 | 0.8452 |
| 0.3862 | 20.0 | 245 | 0.4005 | 0.8452 |
| 0.3659 | 20.98 | 257 | 0.3873 | 0.8452 |
| 0.3488 | 21.96 | 269 | 0.4196 | 0.8333 |
| 0.3683 | 22.94 | 281 | 0.4299 | 0.8095 |
| 0.3477 | 24.0 | 294 | 0.4470 | 0.8214 |
| 0.3426 | 24.98 | 306 | 0.4478 | 0.8333 |
| 0.3 | 25.96 | 318 | 0.4604 | 0.8452 |
| 0.3138 | 26.94 | 330 | 0.4114 | 0.8571 |
| 0.2569 | 28.0 | 343 | 0.4640 | 0.8452 |
| 0.2894 | 28.98 | 355 | 0.5187 | 0.7976 |
| 0.2996 | 29.96 | 367 | 0.4617 | 0.8452 |
| 0.3046 | 30.94 | 379 | 0.4646 | 0.8690 |
| 0.2896 | 32.0 | 392 | 0.4492 | 0.8571 |
| 0.2548 | 32.98 | 404 | 0.4523 | 0.8571 |
| 0.2137 | 33.96 | 416 | 0.4764 | 0.8333 |
| 0.2246 | 34.94 | 428 | 0.4474 | 0.8571 |
| 0.2684 | 36.0 | 441 | 0.4495 | 0.8452 |
| 0.2413 | 36.98 | 453 | 0.4634 | 0.8452 |
| 0.2633 | 37.96 | 465 | 0.4558 | 0.8452 |
| 0.2518 | 38.94 | 477 | 0.4523 | 0.8452 |
| 0.2428 | 39.18 | 480 | 0.4523 | 0.8452 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/Mixtral_7Bx5_MoE_30B-4.0bpw-h6-exl2 | LoneStriker | "2024-02-13T17:58:07Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-13T17:50:32Z" | ---
license: cc-by-nc-4.0
---
# Mixtral MOE 5x7B
MoE of the following models:
* [Toten5/Marcoroni-neural-chat-7B-v1](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v1)
* [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k)
* [mncai/mistral-7b-dpo-v6](https://huggingface.co/mncai/mistral-7b-dpo-v6)
* [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp)
* [ignos/Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1)
GPU code example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "cloudyu/Mixtral_7Bx5_MoE_30B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# load_in_4bit quantizes the experts so the 30B MoE fits on a single GPU
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
CPU example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "cloudyu/Mixtral_7Bx5_MoE_30B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='cpu', local_files_only=False
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
``` |
fatcar/test-model | fatcar | "2024-03-08T18:17:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-08T18:16:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llmware/slim-summary-tiny-onnx | llmware | "2024-10-31T21:31:28Z" | 2 | 1 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"green",
"p1",
"llmware-fx",
"base_model:llmware/slim-summary-tiny",
"base_model:quantized:llmware/slim-summary-tiny",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-06-15T00:26:07Z" | ---
license: apache-2.0
inference: false
base_model: llmware/slim-summary-tiny
base_model_relation: quantized
tags: [green, p1, llmware-fx, onnx]
---
# slim-summary-tiny-onnx
**slim-summary-tiny-onnx** is a specialized function calling model that summarizes a given text and generates as output a Python list of summary points.
This is an ONNX int4 quantized version of slim-summary-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
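A minimal usage sketch with the llmware library; the exact `function_call` signature should be checked against the current llmware docs, as the call below is an assumption:
```python
from llmware.models import ModelCatalog

# Load the quantized model from the llmware catalog (name assumed to match this repo)
model = ModelCatalog().load_model("slim-summary-tiny-onnx")

text = "Paste a long business document here ..."
response = model.function_call(text)  # expected output: a Python list of summary points
print(response)
```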
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-summary-tiny
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Summary bulletpoints extracted from complex business documents
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
asenella/mmnist_MMVAEconfig2_seed_0_ratio_02_i | asenella | "2023-06-03T11:36:09Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-05-10T19:02:49Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mojoee/poca-SoccerTwos_v2 | mojoee | "2023-03-01T06:10:34Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-03-01T06:10:27Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: mojoee/poca-SoccerTwos_v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sunbv56/vit-gpt2-imagecaptioningfood | sunbv56 | "2024-07-27T19:51:43Z" | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-text | "2024-06-25T18:11:32Z" | ---
license: apache-2.0
language:
- en
metrics:
- bleu
pipeline_tag: image-to-text
---
## About model
The model was fine-tuned on a large dataset collected via the bbcgoodfood.com API.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, VisionEncoderDecoderModel, ViTFeatureExtractor
tokenizer = AutoTokenizer.from_pretrained("gpt2") # for text
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") # for image
model = VisionEncoderDecoderModel.from_pretrained("sunbv56/vit-gpt2-imagecaptioningfood") # load model
```
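A hedged inference sketch that continues from the objects loaded above; the image path is a placeholder and the generation settings are assumptions:
```python
from PIL import Image

image = Image.open("dish.jpg").convert("RGB")  # placeholder path to a food photo
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```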
## Example code here
https://www.kaggle.com/code/thuntrngbnh/test-model-vit-gpt2-icf/notebook |
MaziyarPanahi/mergekit-slerp-ojqhjfr-GGUF | MaziyarPanahi | "2024-06-16T19:53:01Z" | 133 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-ojqhjfr",
"base_model:quantized:mergekit-community/mergekit-slerp-ojqhjfr"
] | text-generation | "2024-06-16T19:31:18Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Meta-Llama-3-8B-Instruct
- base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-ojqhjfr-GGUF
base_model: mergekit-community/mergekit-slerp-ojqhjfr
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-ojqhjfr-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ojqhjfr-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-ojqhjfr](https://huggingface.co/mergekit-community/mergekit-slerp-ojqhjfr)
## Description
[MaziyarPanahi/mergekit-slerp-ojqhjfr-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ojqhjfr-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-ojqhjfr](https://huggingface.co/mergekit-community/mergekit-slerp-ojqhjfr).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
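As a minimal, hedged sketch of loading one of these files with llama-cpp-python (the filename is a placeholder for whichever quantization you download from this repo):
```python
from llama_cpp import Llama

llm = Llama(model_path="mergekit-slerp-ojqhjfr.Q4_K_M.gguf", n_ctx=4096)  # placeholder filename
out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```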
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Victarry/poca-SoccerTwos | Victarry | "2023-02-07T02:58:05Z" | 32 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-02-06T15:29:46Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Victarry/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lsnoo/russian_fairseq | lsnoo | "2023-01-05T05:55:48Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-01-03T02:17:00Z" | Wav2vec2.0-xlsr-53 model is fine-tuned on commonvoice russian dataset
Configs (yaml):
```yaml
checkpoint:
  save_interval: 1000
  save_interval_updates: 1000
  keep_interval_updates: 1
  no_epoch_checkpoints: true
  best_checkpoint_metric: wer

task:
  _name: audio_finetuning
  normalize: true
  labels: phn

dataset:
  num_workers: 6
  max_tokens: 800000
  skip_invalid_size_inputs_valid_test: true
  valid_subset: valid

distributed_training:
  ddp_backend: legacy_ddp
  distributed_world_size: 4

criterion:
  _name: ctc
  zero_infinity: true

optimization:
  max_update: 25000
  lr: [0.00001]
  sentence_avg: true
  update_freq: [4]

optimizer:
  _name: adam
  adam_betas: (0.9, 0.98)
  adam_eps: 1e-8

lr_scheduler:
  _name: tri_stage
  phase_ratio: [0.1, 0.4, 0.5]
  final_lr_scale: 0.05

model:
  _name: wav2vec_ctc
  apply_mask: true
  mask_prob: 0.5
  mask_channel_prob: 0.1
  mask_channel_length: 64
  layerdrop: 0.1
  activation_dropout: 0.1
  feature_grad_mult: 0.0
  freeze_finetune_updates: 0
```
|
furrutiav/modernbert_mixtral_nllfg_rubric_sst2_none_item | furrutiav | "2025-03-21T07:09:29Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-20T07:09:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
denbeo/7b49c4c5-8266-4e39-b268-cd2886cb9e2b | denbeo | "2025-01-16T19:33:09Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-16T19:11:41Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b49c4c5-8266-4e39-b268-cd2886cb9e2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1976ffe748c9b97_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1976ffe748c9b97_train_data.json
type:
field_instruction: rendered_input
field_output: rendered_output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/7b49c4c5-8266-4e39-b268-cd2886cb9e2b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1976ffe748c9b97_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f9860949-9bee-432c-9254-4e2a5aa656da
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f9860949-9bee-432c-9254-4e2a5aa656da
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7b49c4c5-8266-4e39-b268-cd2886cb9e2b
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.0816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1065 | 0.0073 | 200 | 1.0816 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
brittlewis12/Kunoichi-DPO-v2-7B-GGUF | brittlewis12 | "2024-05-02T19:16:54Z" | 2,150 | 66 | null | [
"gguf",
"text-generation",
"en",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:quantized:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | "2024-01-16T16:33:41Z" | ---
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: SanjiWatsuki
model_name: Kunoichi-DPO-v2-7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: "{{system_message}}
### Instruction:
{{prompt}}
### Response:
"
quantized_by: brittlewis12
---
# Kunoichi-DPO-v2-7B GGUF

Original model: [Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
Model creator: [SanjiWatsuki](https://huggingface.co/SanjiWatsuki)
This repo contains GGUF format model files for SanjiWatsuki’s Kunoichi-DPO-v2-7B. Updated as of 2024-05-01.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 2780 (revision [b0d943de](https://github.com/ggerganov/llama.cpp/commit/b0d943de))
### Prompt template: Unknown (Alpaca)
[Alpaca-style](https://huggingface.co/SanjiWatsuki/Kunoichi-7B#prompt-template-custom-format-or-alpaca) was the prompt format for the original [Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B).
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{{prompt}}
### Response:
```
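As a hedged sketch, this template can be applied with llama-cpp-python (the filename and generation settings are assumptions):
```
from llama_cpp import Llama

llm = Llama(model_path="kunoichi-dpo-v2-7b.Q4_K_M.gguf", n_ctx=8192)  # placeholder filename

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three uses for a paperclip.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=200, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```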
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|----------------------|----------|----------|---------|-------------|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| **Kunoichi-DPO-v2-7B** | **8.51** | **42.18** | - | **0.58** |
| Mixtral-8x7B-Instruct| 8.30 | 44.81 | 70.6 | 0.75 |
| **Kunoichi-DPO-7B** | **8.29** | **41.60** | **64.83** | **0.59** |
| **Kunoichi-7B** | **8.14** | **44.32** | **64.9** | **0.58** |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| **Kunoichi-DPO-7B**|**58.4**| 45.08 | 74| 66.99| 47.52|
| **Kunoichi-DPO-v2-7B**|**58.31**| 44.85| 75.05| 65.69| 47.65|
| [Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)|57.54| 44.99| 74.86| 63.72| 46.58|
| [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)| 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) | 56.45| 44.74| 74.26| 61.5| 45.32|
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
| Model | AlpacaEval2 | Length |
| --------------------------- | ----------- | ------ |
| GPT-4 | 23.58% | 1365 |
| GPT-4 0314 | 22.07% | 1371 |
| Mistral Medium | 21.86% | 1500 |
| Mixtral 8x7B v0.1 | 18.26% | 1465 |
| **Kunoichi-DPO-v2** | **17.19%** | 1785 |
| Claude 2 | 17.19% | 1069 |
| Claude | 16.99% | 1082 |
| Gemini Pro | 16.85% | 1315 |
| GPT-4 0613 | 15.76% | 1140 |
| Claude 2.1 | 15.73% | 1096 |
| Mistral 7B v0.2 | 14.72% | 1676 |
| GPT 3.5 Turbo 0613 | 14.13% | 1328 |
| LLaMA2 Chat 70B | 13.87% | 1790 |
| LMCocktail-10.7B-v1 | 13.15% | 1203 |
| WizardLM 13B V1.1 | 11.23% | 1525 |
| Zephyr 7B Beta | 10.99% | 1444 |
| OpenHermes-2.5-Mistral (7B) | 10.34% | 1107 |
| GPT 3.5 Turbo 0301 | 9.62% | 827 |
| **Kunoichi-7B** | **9.38%** | 1492 |
| GPT 3.5 Turbo 1106 | 9.18% | 796 |
| GPT-3.5 | 8.56% | 1018 |
| Phi-2 DPO | 7.76% | 1687 |
| LLaMA2 Chat 13B | 7.70% | 1513 | |
ssktora/e5-mistral-nfcorpus-train-bm25 | ssktora | "2025-03-14T04:25:30Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:intfloat/e5-mistral-7b-instruct",
"base_model:adapter:intfloat/e5-mistral-7b-instruct",
"region:us"
] | null | "2025-03-14T04:23:01Z" | ---
base_model: intfloat/e5-mistral-7b-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
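In the absence of author-provided code, here is a minimal, untested sketch for loading the LoRA adapter onto the base model declared in the repo metadata (`intfloat/e5-mistral-7b-instruct`); everything beyond those two identifiers is an assumption.
```python
# Sketch only: attach the adapter in this repo to its declared base model.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct")
model = PeftModel.from_pretrained(base, "ssktora/e5-mistral-nfcorpus-train-bm25")
tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
```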
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.0 |
parksuna/xlm-roberta-base-finetuned-panx-de | parksuna | "2023-08-31T08:29:59Z" | 122 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-08-31T08:25:49Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8657241810026685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1338
- F1: 0.8657
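The card does not include a usage snippet; a minimal sketch with the `transformers` pipeline follows (the aggregation strategy and example sentence are illustrative choices, not part of the original card).
```python
# Sketch: German NER with the fine-tuned PAN-X checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="parksuna/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel besuchte Berlin im Mai."))
```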
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1557 | 0.8218 |
| 0.126 | 2.0 | 1050 | 0.1460 | 0.8521 |
| 0.0827 | 3.0 | 1575 | 0.1338 | 0.8657 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Shaleen123/code-yi-6b | Shaleen123 | "2024-02-07T21:05:04Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-01-28T17:46:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF | matrixportal | "2025-03-19T20:50:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/c4ai-command-r7b-arabic-02-2025",
"base_model:quantized:CohereForAI/c4ai-command-r7b-arabic-02-2025",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | null | "2025-03-19T17:01:18Z" | ---
base_model: CohereForAI/c4ai-command-r7b-arabic-02-2025
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
inference: false
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
---
# matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF
This model was converted to GGUF format from [`CohereForAI/c4ai-command-r7b-arabic-02-2025`](https://huggingface.co/CohereForAI/c4ai-command-r7b-arabic-02-2025) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CohereForAI/c4ai-command-r7b-arabic-02-2025) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF --hf-file c4ai-command-r7b-arabic-02-2025-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF --hf-file c4ai-command-r7b-arabic-02-2025-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF --hf-file c4ai-command-r7b-arabic-02-2025-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/c4ai-command-r7b-arabic-02-2025-Q4_0-GGUF --hf-file c4ai-command-r7b-arabic-02-2025-q4_0.gguf -c 2048
```
|
Kewlfunky/learn_hf_food_not_food_text_classifier-distilbert-base-uncased | Kewlfunky | "2024-12-29T18:26:50Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-29T15:09:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
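For reference, a minimal inference sketch (the example sentence is illustrative; label names come from the checkpoint's own config):
```python
# Sketch: food / not-food text classification with the fine-tuned model.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Kewlfunky/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
print(clf("A steaming bowl of ramen with a soft-boiled egg."))
```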
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4688 | 1.0 | 7 | 0.1071 | 1.0 |
| 0.0496 | 2.0 | 14 | 0.0085 | 1.0 |
| 0.0056 | 3.0 | 21 | 0.0025 | 1.0 |
| 0.0021 | 4.0 | 28 | 0.0013 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0009 | 1.0 |
| 0.0009 | 6.0 | 42 | 0.0007 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0007 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0007 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf | RichardErkhov | "2024-10-09T21:42:15Z" | 43 | 0 | null | [
"gguf",
"arxiv:2405.14734",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-09T19:17:56Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-Base-SFT-IPO - GGUF
- Model creator: https://huggingface.co/princeton-nlp/
- Original model: https://huggingface.co/princeton-nlp/Mistral-7B-Base-SFT-IPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-Base-SFT-IPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-Base-SFT-IPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.IQ3_XS.gguf) | IQ3_XS | 2.29GB |
| [Mistral-7B-Base-SFT-IPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.IQ3_S.gguf) | IQ3_S | 1.07GB |
| [Mistral-7B-Base-SFT-IPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-Base-SFT-IPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-Base-SFT-IPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-Base-SFT-IPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-Base-SFT-IPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [Mistral-7B-Base-SFT-IPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-Base-SFT-IPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-Base-SFT-IPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-Base-SFT-IPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-Base-SFT-IPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-Base-SFT-IPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-Base-SFT-IPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-Base-SFT-IPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-Base-SFT-IPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-Base-SFT-IPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-Base-SFT-IPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-Base-SFT-IPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-Base-SFT-IPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-Base-SFT-IPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Mistral-7B-Base-SFT-IPO-gguf/blob/main/Mistral-7B-Base-SFT-IPO.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
This is a model released from the preprint: *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)* Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
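These GGUF files are meant for llama.cpp-compatible runtimes rather than `transformers`; as a sketch, one of the quantized files from the table above could be run with llama-cpp-python like so (the file choice and context size are illustrative):
```python
# Sketch: load one quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-7B-Base-SFT-IPO.Q4_K_M.gguf", n_ctx=4096)
print(llm("The capital of France is", max_tokens=32)["choices"][0]["text"])
```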
|
csikasote/xls-r-1b-bem-genbed-f-model | csikasote | "2024-10-07T11:24:01Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"genbed",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-1b",
"base_model:finetune:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-09-26T16:24:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-1b
tags:
- automatic-speech-recognition
- genbed
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-1b-bem-genbed-f-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-1b-bem-genbed-f-model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the GENBED - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3137
- Wer: 0.5529
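Inference code is not included in the card; a minimal sketch with the ASR pipeline follows (the audio path is a placeholder, and input should be 16 kHz to match XLS-R pretraining):
```python
# Sketch: transcribe a 16 kHz Bemba recording with the fine-tuned model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/xls-r-1b-bem-genbed-f-model",
)
print(asr("sample_16khz.wav")["text"])
```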
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| No log | 0.2740 | 100 | 3.0378 | 1.0 |
| No log | 0.5479 | 200 | 0.8302 | 0.9818 |
| No log | 0.8219 | 300 | 0.6783 | 0.9103 |
| No log | 1.0959 | 400 | 0.5512 | 0.8721 |
| 1.8782 | 1.3699 | 500 | 0.5296 | 0.8568 |
| 1.8782 | 1.6438 | 600 | 0.4413 | 0.7333 |
| 1.8782 | 1.9178 | 700 | 0.4747 | 0.7614 |
| 1.8782 | 2.1918 | 800 | 0.3884 | 0.6667 |
| 1.8782 | 2.4658 | 900 | 0.3577 | 0.6355 |
| 0.5114 | 2.7397 | 1000 | 0.3585 | 0.6321 |
| 0.5114 | 3.0137 | 1100 | 0.3641 | 0.6607 |
| 0.5114 | 3.2877 | 1200 | 0.3813 | 0.7282 |
| 0.5114 | 3.5616 | 1300 | 0.3829 | 0.7086 |
| 0.5114 | 3.8356 | 1400 | 0.3682 | 0.6413 |
| 0.3931 | 4.1096 | 1500 | 0.3527 | 0.6221 |
| 0.3931 | 4.3836 | 1600 | 0.3481 | 0.6297 |
| 0.3931 | 4.6575 | 1700 | 0.3541 | 0.6193 |
| 0.3931 | 4.9315 | 1800 | 0.3355 | 0.6242 |
| 0.3931 | 5.2055 | 1900 | 0.3339 | 0.5801 |
| 0.3293 | 5.4795 | 2000 | 0.3137 | 0.5529 |
| 0.3293 | 5.7534 | 2100 | 0.3132 | 0.5822 |
| 0.3293 | 6.0274 | 2200 | 0.3145 | 0.5676 |
| 0.3293 | 6.3014 | 2300 | 0.3283 | 0.5961 |
| 0.3293 | 6.5753 | 2400 | 0.3247 | 0.5988 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
jpacifico/Chocolatine-2-14B-Instruct-v2.0.2 | jpacifico | "2025-02-05T08:53:57Z" | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-05T08:44:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
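No usage code is given; since the repo is tagged for conversational text generation, a standard chat-style call is a reasonable sketch — the prompt, token budget, and device mapping below are assumptions.
```python
# Sketch only: chat-style generation with the pipeline API.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="jpacifico/Chocolatine-2-14B-Instruct-v2.0.2",
    device_map="auto",
)
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```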
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sail-rvc/ninachubav2 | sail-rvc | "2023-07-14T07:42:01Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:41:43Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# ninachubav2
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:42:01
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
huggingtweets/mattiasinspace | huggingtweets | "2022-03-23T18:30:31Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-23T18:30:21Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434246328788398081/M7Httz0A_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mattias in Deep</div>
<div style="text-align: center; font-size: 14px;">@mattiasinspace</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mattias in Deep.
| Data | Mattias in Deep |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 26 |
| Short tweets | 196 |
| Tweets kept | 3027 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2r9u5eoz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattiasinspace's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattiasinspace')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bonvent/test2 | bonvent | "2023-12-02T13:47:31Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"fr",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-02T13:47:31Z" | ---
language: fr
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- fr
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 French by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: 17.65
- name: Test CER
type: cer
value: 4.89
- name: Test WER (+LM)
type: wer
value: 13.59
- name: Test CER (+LM)
type: cer
value: 3.91
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Dev WER
type: wer
value: 34.35
- name: Dev CER
type: cer
value: 14.09
- name: Dev WER (+LM)
type: wer
value: 24.72
- name: Dev CER (+LM)
type: cer
value: 12.33
---
# Fine-tuned XLSR-53 large model for speech recognition in French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE |
| CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ASHEMÉNID ET SEPT DES SASANDNIDES |
| "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGES SUR LES AUTRES |
| LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS |
| IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL AMNARDIGAD LE TIRAN |
| HUIT | HUIT |
| DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS L'ATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION |
| LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES |
| ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES |
| ZÉRO | ZEGO |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset mozilla-foundation/common_voice_6_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-french,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-french}},
year={2021}
}
``` |
brunnosarttori/loras | brunnosarttori | "2024-12-09T21:58:10Z" | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-10-03T18:59:51Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/image - 2024-10-03T014125.893.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# Bruno
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/brunnosarttori/loras/tree/main) them in the Files & versions tab.
|
Satish1967/Oracle_SKC | Satish1967 | "2024-05-02T07:33:47Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-02T07:33:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
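The card provides no code; based only on the repo tags (DistilBERT, text classification), a minimal sketch would be the following — the labels and example input are unknowns.
```python
# Sketch only: run the classifier; label meanings come from the checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="Satish1967/Oracle_SKC")
print(clf("Example input sentence."))
```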
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3 | ModelCloud | "2025-01-19T06:30:49Z" | 17 | 1 | null | [
"safetensors",
"qwen2",
"4-bit",
"region:us"
] | null | "2025-01-18T19:03:22Z" | This model was quantized and exported to mlx using [GPTQModel](https://github.com/ModelCloud/GPTQModel).
## How to run this model
```shell
# install mlx
pip install mlx_lm
```
```python
from mlx_lm import load, generate
mlx_path = "ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3"
mlx_model, tokenizer = load(mlx_path)
prompt = "The capital of France is"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```
### Export gptq to mlx
```shell
# install gptqmodel with mlx
pip install gptqmodel[mlx] --no-build-isolation
```
```python
from gptqmodel import GPTQModel
# load gptq quantized model
gptq_model_path = "ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3"
mlx_path = f"./vortex/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3"
# export to mlx model
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
``` |
CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | CHIH-HUNG | "2023-09-06T04:55:43Z" | 1,488 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-02T08:23:22Z" | ---
license: llama2
datasets:
- huangyt/FINETUNE2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from llama-2-13b on the huangyt/FINETUNE2 dataset, roughly 30k training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE2 (about 30k training examples)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** q_proj, k_proj, v_proj, o_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.65
- **train_runtime:** 3:33:41 (with DeepSpeed)
# Evaluation
- Evaluation results come from the **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------------------|-------|-------|---------|-------|------------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.34 | 58.62 | 82.32 | 54.25 | 38.17 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.65 | 57.42 | 82.42 | 55.57 | 39.19 |
# How to convert the dataset to JSON
- Pass the dataset name to **load_dataset**, and set **take** to the number of leading examples to keep
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, set the output path for the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take(n) can retrieve the first n examples
dataset = load_dataset("huangyt/FINETUNE2", split="train", streaming=True)
# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
extracted_example = {
"instruction": example["instruction"],
"input": example["input"],
"output": example["output"]
}
extracted_data.append(extracted_example)
# Set the JSON file name
json_filename = "huangyt_FINETUNE2.json"
# Write the JSON file
with open(json_filename, "w") as json_file:
json.dump(extracted_data, json_file, indent=4)
print(f"數據已提取並保存為 {json_filename}")
``` |
lesso09/cf01ca9f-0a08-4058-b72d-6da9534de12b | lesso09 | "2025-01-19T20:48:26Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-19T20:47:50Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf01ca9f-0a08-4058-b72d-6da9534de12b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: true
chat_template: llama3
datasets:
- data_files:
- edaa3d5d217efafe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/edaa3d5d217efafe_train_data.json
type:
field_instruction: context
field_output: completion_file
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/cf01ca9f-0a08-4058-b72d-6da9534de12b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/edaa3d5d217efafe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3fb4eb2b-db1f-4607-8c33-7d7c962e083b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3fb4eb2b-db1f-4607-8c33-7d7c962e083b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cf01ca9f-0a08-4058-b72d-6da9534de12b
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0992
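As a usage sketch (not part of the original card — the prompt and generation settings are illustrative), the LoRA adapter can be loaded on top of the base model with `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then attach this repo's LoRA adapter
base = AutoModelForCausalLM.from_pretrained("unsloth/codegemma-7b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "lesso09/cf01ca9f-0a08-4058-b72d-6da9534de12b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b-it")

prompt = "Write a Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```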
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2426 | 0.2857 | 1 | 1.0992 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jvelja/gemma2b-multivllm-NodropSus_8 | jvelja | "2024-09-07T00:21:05Z" | 59 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2024-09-07T00:21:02Z" | ---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja//tmp/tmpjk5kcib1/jvelja/gemma2b-multivllm-NodropSus_8")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmpjk5kcib1/jvelja/gemma2b-multivllm-NodropSus_8")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmpjk5kcib1/jvelja/gemma2b-multivllm-NodropSus_8")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
winain7788/bert-finetuned-sem_eval-english | winain7788 | "2025-03-20T13:47:32Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2018_task_1",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-20T13:47:17Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- sem_eval_2018_task_1
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-english
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval_2018_task_1
type: sem_eval_2018_task_1
config: subtask5.english
split: validation
args: subtask5.english
metrics:
- name: F1
type: f1
value: 0.7081292850146915
- name: Accuracy
type: accuracy
value: 0.26749435665914223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sem_eval_2018_task_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3129
- F1: 0.7081
- Roc Auc: 0.8031
- Accuracy: 0.2675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4084 | 1.0 | 855 | 0.3155 | 0.6932 | 0.7890 | 0.2754 |
| 0.2826 | 2.0 | 1710 | 0.3029 | 0.6965 | 0.7877 | 0.2765 |
| 0.2412 | 3.0 | 2565 | 0.3082 | 0.7081 | 0.8021 | 0.2731 |
| 0.213 | 4.0 | 3420 | 0.3125 | 0.6992 | 0.7960 | 0.2619 |
| 0.1924 | 5.0 | 4275 | 0.3129 | 0.7081 | 0.8031 | 0.2675 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
lenML/aya-expanse-8b-abliterated | lenML | "2024-11-13T18:48:37Z" | 127 | 4 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"gguf",
"CohereForAI",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-expanse-8b",
"base_model:finetune:CohereForAI/aya-expanse-8b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-11-13T09:32:05Z" | ---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
base_model:
- CohereForAI/aya-expanse-8b
tags:
- gguf
- CohereForAI
---
# Model Card for aya-expanse-8b-abliterated
This is an uncensored version of [aya-expanse-8b](https://huggingface.co/CohereForAI/aya-expanse-8b) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
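A minimal usage sketch (not from the original card; it assumes a recent `transformers` release with Cohere and chat-template support):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lenML/aya-expanse-8b-abliterated", device_map="auto")
messages = [{"role": "user", "content": "Write a short poem about the sea."}]
result = generator(messages, max_new_tokens=128)
# chat pipelines return the full conversation; the last message is the reply
print(result[0]["generated_text"][-1]["content"])
```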
# Limitations
At present, according to my `lenml-reject-eval` tests, this version lowers the rejection score from `0.91` to `0.50`, which is still a fairly high score (fully uncensored models can currently reach as low as `0.05` on the reject eval).
This model will continue to be updated.
|
Omriy123/OLD_vit_epochs5_batch64_lr5e-05_size224_tiles1_seed1_classic_image_classification | Omriy123 | "2024-05-23T16:46:17Z" | 220 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-23T15:41:23Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit_epochs5_batch64_lr5e-05_size224_tiles1_seed1_classic_image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Dogs_vs_Cats
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9989655681986109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_epochs5_batch64_lr5e-05_size224_tiles1_seed1_classic_image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0066
- Accuracy: 0.9990
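A minimal inference sketch (not part of the original card; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Omriy123/OLD_vit_epochs5_batch64_lr5e-05_size224_tiles1_seed1_classic_image_classification",
)
# pass a local path or URL to an image; "cat.jpg" is a placeholder
print(classifier("cat.jpg"))
```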
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0449 | 1.0 | 282 | 0.0183 | 0.9957 |
| 0.04 | 2.0 | 564 | 0.0101 | 0.9981 |
| 0.0303 | 3.0 | 846 | 0.0081 | 0.9985 |
| 0.0489 | 4.0 | 1128 | 0.0068 | 0.9988 |
| 0.0284 | 5.0 | 1410 | 0.0066 | 0.9990 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Shanos76/f5 | Shanos76 | "2024-12-29T14:21:42Z" | 6 | 0 | f5-tts | [
"f5-tts",
"text-to-speech",
"hi",
"dataset:SPRINGLab/IndicTTS-Hindi",
"dataset:SPRINGLab/IndicVoices-R_Hindi",
"arxiv:2410.06885",
"arxiv:2409.05356",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2024-12-29T14:11:36Z" | ---
license: cc-by-4.0
library_name: f5-tts
datasets:
- SPRINGLab/IndicTTS-Hindi
- SPRINGLab/IndicVoices-R_Hindi
language:
- hi
pipeline_tag: text-to-speech
widget:
- text: "उसके दोस्त, प्रेमिकाएँ, और रिश्तेदार, उसे इसी नाम से बुलाते थे, और वो भी, अक्सर समझ जाता था, कि क्वैं उसी को संबोधित है"
output:
url: samples/output1.wav
- text: "इस बागीचे में, आप शुरू से अन्त तक घूम आइये, तो दुनिया भर की सुन्दर चीज़ों के साथ, एक अनन्यता महसूस करेंगें"
output:
url: samples/output2.wav
- text: "शिवगढ़ी गाँव, एक बड़ा गाँव था, और उसमेँ सबसे बड़ा मकान, पण्डित दुर्गाशङ्कर श्रीमुख का था"
output:
url: samples/output3.wav
---
# F5-TTS Hindi 24KHz Model
This is a Hindi Text-to-Speech model trained from scratch using the [F5 architecture](https://arxiv.org/abs/2410.06885).
# Details
- **Developed by:** SPRING Lab, Indian Institute of Technology, Madras
- **Language:** Hindi
- **License:** CC-BY-4.0
## Uses
The model was developed and is primarily intended for research purposes.
## How to Get Started with the Model
Clone the following github repo and refer to the README: https://github.com/rumourscape/F5-TTS
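For convenience, the corresponding commands (see the repo README for installation and inference details):

```bash
git clone https://github.com/rumourscape/F5-TTS
cd F5-TTS
```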
## Training Details
The model was trained on 8x A100 40GB GPUs for close to a week. We would like to thank [CDAC](https://cdac.in/) for providing the compute resources.
We used the "small" configuration (151M parameters) for training, following the F5 paper.
### Training Data
We used the Hindi subsets of [IndicTTS](https://www.tsdconference.org/tsd2016/download/cbblr16-850.pdf) and [IndicVoices-R](https://arxiv.org/pdf/2409.05356) datasets for training this model.
<br>
- **IndicTTS-Hindi:** https://huggingface.co/datasets/SPRINGLab/IndicTTS-Hindi
<br>
- **IndicVoices-R_Hindi:** https://huggingface.co/datasets/SPRINGLab/IndicVoices-R_Hindi
|
macarious/torgo_xlsr_finetune_M02_keep_all | macarious | "2024-02-02T15:35:56Z" | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-02T05:04:24Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M02_keep_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M02_keep_all
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6539
- Wer: 0.2436
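A minimal inference sketch (not part of the original card; the audio path is a placeholder, and input should be 16 kHz mono as expected by wav2vec2 XLSR models):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="macarious/torgo_xlsr_finetune_M02_keep_all")
print(asr("speech.wav"))  # "speech.wav" is a placeholder path
```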
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5043 | 0.56 | 1000 | 3.3139 | 1.0 |
| 2.1248 | 1.12 | 2000 | 1.9926 | 0.8898 |
| 1.0178 | 1.67 | 3000 | 1.5324 | 0.6683 |
| 0.7315 | 2.23 | 4000 | 1.7989 | 0.5959 |
| 0.6289 | 2.79 | 5000 | 1.3984 | 0.4987 |
| 0.5123 | 3.35 | 6000 | 1.2977 | 0.4228 |
| 0.4751 | 3.91 | 7000 | 1.3967 | 0.3988 |
| 0.4354 | 4.47 | 8000 | 1.5080 | 0.4274 |
| 0.3817 | 5.03 | 9000 | 1.7897 | 0.4014 |
| 0.3758 | 5.58 | 10000 | 1.3421 | 0.3385 |
| 0.358 | 6.14 | 11000 | 1.6429 | 0.3427 |
| 0.3083 | 6.7 | 12000 | 1.2683 | 0.3084 |
| 0.2805 | 7.26 | 13000 | 1.7095 | 0.3122 |
| 0.2856 | 7.82 | 14000 | 1.7918 | 0.3317 |
| 0.2574 | 8.38 | 15000 | 1.5411 | 0.2947 |
| 0.2495 | 8.93 | 16000 | 1.4551 | 0.2997 |
| 0.2651 | 9.49 | 17000 | 1.5073 | 0.2825 |
| 0.2517 | 10.05 | 18000 | 1.6405 | 0.2920 |
| 0.2274 | 10.61 | 19000 | 1.4440 | 0.2604 |
| 0.2278 | 11.17 | 20000 | 1.4020 | 0.2875 |
| 0.2472 | 11.73 | 21000 | 1.6264 | 0.2897 |
| 0.1875 | 12.28 | 22000 | 1.5901 | 0.2783 |
| 0.175 | 12.84 | 23000 | 1.4056 | 0.2501 |
| 0.1751 | 13.4 | 24000 | 1.4809 | 0.2631 |
| 0.1607 | 13.96 | 25000 | 1.4363 | 0.2551 |
| 0.1712 | 14.52 | 26000 | 1.6480 | 0.2524 |
| 0.1581 | 15.08 | 27000 | 1.5084 | 0.2615 |
| 0.1623 | 15.63 | 28000 | 1.4066 | 0.2482 |
| 0.1397 | 16.19 | 29000 | 1.7111 | 0.2619 |
| 0.1536 | 16.75 | 30000 | 1.4691 | 0.2402 |
| 0.1343 | 17.31 | 31000 | 1.5406 | 0.2329 |
| 0.1428 | 17.87 | 32000 | 1.5261 | 0.2413 |
| 0.1125 | 18.43 | 33000 | 1.6416 | 0.2337 |
| 0.1214 | 18.98 | 34000 | 1.6803 | 0.2425 |
| 0.124 | 19.54 | 35000 | 1.6539 | 0.2436 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Stevens/AuroraGPT-O2-FRG | Stevens | "2025-02-19T01:15:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-19T01:12:20Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF | mradermacher | "2025-03-27T00:51:55Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:IRUCAAI/Opeai_QZ_Preview-QwQ-32B",
"base_model:quantized:IRUCAAI/Opeai_QZ_Preview-QwQ-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-27T00:18:34Z" | ---
base_model: IRUCAAI/Opeai_QZ_Preview-QwQ-32B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/IRUCAAI/Opeai_QZ_Preview-QwQ-32B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
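As a brief illustration (the part file names below are hypothetical — use the names actually present in the repo), multi-part GGUF files are plain byte splits and can simply be concatenated:

```bash
# hypothetical part names; substitute the actual files from the repo
cat Opeai_QZ_Preview-QwQ-32B.Q8_0.gguf.part1of2 \
    Opeai_QZ_Preview-QwQ-32B.Q8_0.gguf.part2of2 \
    > Opeai_QZ_Preview-QwQ-32B.Q8_0.gguf
```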
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_QZ_Preview-QwQ-32B-GGUF/resolve/main/Opeai_QZ_Preview-QwQ-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LHRuig/mediterraneanmnsx | LHRuig | "2025-01-16T08:24:07Z" | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-16T08:23:58Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# mediterraneanmnsx
<Gallery />
## Model description
mediterraneanmnsx lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/mediterraneanmnsx/tree/main) them in the Files & versions tab.
|
DavidAU/MN-WORDSTORM-pt1-RCM-Kiss-of-Madness-18.5B-Instruct | DavidAU | "2024-11-22T02:51:37Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-27T23:55:55Z" | ---
library_name: transformers
tags:
- mergekit
- merge
base_model: []
---
<h2>MN-WORDSTORM-pt1-RCM-Kiss-of-Madness-18.5B-Instruct</h2>
This is part 1 in a 10 part series.
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model, (source, GGUF or a different quant), please review this document for critical parameter, sampler and advance sampler settings (for multiple AI/LLM aps).
This is a "Class 2" (settings will enhance operation / optional adjustments) model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issue(s) and covers methods to improve performance for all use cases — chat, roleplay, and others, especially use cases beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class", this document details methods to enhance operation.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, Class 3/4 models (which operate somewhat to very differently than standard models) require additional sampler and advanced-sampler settings to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO — all quants, and of course source-code operation too — to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/MN-WORDSTORM-pt1-RCM-Kiss-of-Madness-18.5B-Instruct-gguf ] |
seyeon-shijuan/llama-2-koen-13b-adapter-cosmetic | seyeon-shijuan | "2024-01-17T07:04:29Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/llama-2-koen-13b",
"base_model:adapter:beomi/llama-2-koen-13b",
"region:us"
] | null | "2024-01-17T01:11:33Z" | ---
library_name: peft
base_model: beomi/llama-2-koen-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
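A minimal loading sketch, inferred from the `peft` metadata and base model above (an assumption, not author-provided code):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then attach this repo's adapter weights
base = AutoModelForCausalLM.from_pretrained("beomi/llama-2-koen-13b", device_map="auto")
model = PeftModel.from_pretrained(base, "seyeon-shijuan/llama-2-koen-13b-adapter-cosmetic")
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-koen-13b")
```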
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Vezora/QwQ-32B-Preview-fp8-W8A16 | Vezora | "2024-11-30T23:31:56Z" | 125 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2411.02355",
"license:apache-2.0",
"region:us"
] | null | "2024-11-30T07:29:06Z" | ---
license: apache-2.0
---
## Overview
This model is optimized for use with [VLLM](https://github.com/vllm-project/vllm) on NVIDIA GPUs with compute capability > 8.0 (Ampere, A100, A10, 3090, etc.). It utilizes a weight-only FP8 Marlin kernel, providing an efficient W8A16 configuration.
### Key Features of FP8 Marlin
The NeuralMagic FP8 Marlin kernel achieves impressive efficiency by packing 4 8-bit values into an int32 and performing a 4xFP8 to 4xFP16/BF16 dequantization using bit arithmetic and SIMT operations. This approach yields nearly a **2x speedup** over FP16 on most models while maintaining **near lossless quality**.
#### FP8 Advantages on NVIDIA GPUs
On newer NVIDIA GPUs (4090/H100 or later), dedicated FP8 tensor cores and hardware allow fast conversion from FP8 to BF16/FP16, maximizing performance. However, older GPUs lack this specific hardware support, preventing activation quantization if we want to leverage FP8. The Marlin kernel addresses this gap effectively, enabling performance gains on Ampere cards (e.g., 3090, A100) without needing full tensor core support.
Traditional int8 quantization methods often require extensive overhead for data type conversion between int8 and fp16, making them less efficient for inference. Marlin’s FP8 kernel bypasses this limitation by staying predominantly in FP16, removing the need for such conversions during runtime.
### Optimizations in the Marlin Kernel
The Marlin kernel is finely tuned for performance, employing several innovative techniques:
- **Asynchronous Global Weight Loads**: Uses non-blocking `cuda::memcpy_async` (available since Ampere) to load weights directly into shared memory. This minimizes latency by overlapping data transfers with computation.
- **Circular Shared Memory Queue**: A cyclic buffer system enables uninterrupted data loading, processing, and unloading, ensuring continuous computational flow without stalling.
- **Optimized Task Scheduling and Synchronization**: Utilizes Stream-K parallelization with non-uniform partitioning, optimizing GPU utilization by minimizing idle time and efficiently managing work distribution across Streaming Multiprocessors (SMs).
These optimizations enable GPUs like the 3090 and A100 to deliver near FP8 performance with minimal sacrifices, making the Marlin kernel highly effective on non-Ada cards.
### FP8 Marlin Details
- Developed by [Michael Goin and the Neural Magic team](https://github.com/vllm-project/vllm/pull/5975), FP8 Marlin is specifically designed for NVIDIA’s GPU architecture, providing a compact and high-performance format.
- FP8 achieves nearly lossless compression, making it suitable for scenarios where quantization errors in traditional int4 or int8 formats might degrade performance.
### Why FP8?
This FP8-quantized model was uploaded to explore high-precision quantization. Traditional int4 quantization, as seen in models like `Qwen/Qwen2.5-Coder-32B-Instruct-int4`, can sometimes produce poor outputs with repeated tokens due to quantization errors. In contrast, FP8 does not require calibration data and achieves robust, lossless compression.
As shown in Neural Magic's recent paper ([arXiv:2411.02355](https://arxiv.org/pdf/2411.02355)), int4 has limited fidelity recovery from FP16 without careful calibration. FP8, especially in the W8A16 format, maintains high-quality outputs without calibration, making it ideal for high-precision applications such as code generation.
### How to Quantize your own models to FP-8 W8A16?
Included in this repo is a script that will convert the weights of any HF model to W8A16. (TBH it's a little glitched and writes two duplicate copies to disk; if anyone wants to fix it, feel free to submit a PR, but if it ain't broke I'm not going to fix it.)
How to use the script:
Have vLLM installed and run `pip install llmcompressor==0.1.0`.
Then just run the script: it will ask you for the model name, and once you enter it, it will do the rest. **NOTE:** the script loads to CPU RAM to avoid OOM errors; if you somehow, on God's green earth, have more GPU VRAM than CPU RAM, edit the script to load to GPU.
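For readers without the bundled script, a rough data-free quantization sketch with `llmcompressor` might look like the following; the exact scheme string for weight-only FP8 (W8A16) is an assumption and may differ in v0.1.0:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "Qwen/QwQ-32B-Preview"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# scheme="FP8" is an assumption for weight-only FP8; no calibration data is needed
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

save_dir = "QwQ-32B-Preview-fp8-W8A16"
model.save_pretrained(save_dir, save_compressed=True)
tokenizer.save_pretrained(save_dir)
```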
## How to Run
To launch the API server for this model, use the following command:
```bash
python3 -m vllm.entrypoints.openai.api_server \
--model Vezora/QwQ-32B-Preview-fp8-W8A16 \
--dtype auto \
--api-key token-abc123 \
--quantization compressed-tensors \
--max-num-batched-tokens 16384 \
--max-model-len 16384 \
--tensor-parallel-size 2 \
  --gpu-memory-utilization 0.99
```
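Once the server is up, it can be queried with the standard OpenAI client (a usage sketch; host and port follow the defaults of the command above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")
completion = client.chat.completions.create(
    model="Vezora/QwQ-32B-Preview-fp8-W8A16",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
print(completion.choices[0].message.content)
```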
|
Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF | Triangle104 | "2025-04-06T04:04:32Z" | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Gemma-3-4B-Toxic-R1",
"base_model:quantized:bunnycore/Gemma-3-4B-Toxic-R1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T04:04:20Z" | ---
base_model: bunnycore/Gemma-3-4B-Toxic-R1
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Gemma-3-4B-Toxic-R1`](https://huggingface.co/bunnycore/Gemma-3-4B-Toxic-R1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Gemma-3-4B-Toxic-R1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF --hf-file gemma-3-4b-toxic-r1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF --hf-file gemma-3-4b-toxic-r1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF --hf-file gemma-3-4b-toxic-r1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-4B-Toxic-R1-Q4_K_M-GGUF --hf-file gemma-3-4b-toxic-r1-q4_k_m.gguf -c 2048
```
|
Likich/vicuna-finetune-qualcoding_1000_prompt2_dot | Likich | "2024-05-28T15:35:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T15:35:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RayneAmes/buckbeak_v3 | RayneAmes | "2025-02-09T22:44:17Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-09T22:42:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sail-rvc/BartSimpson2333333 | sail-rvc | "2023-07-14T07:19:19Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:18:58Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# BartSimpson2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:19:19
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
epsilonai/Dexter_Grif | epsilonai | "2023-07-20T14:40:43Z" | 0 | 1 | null | [
"redvsblue",
"rvb",
"fictional characters",
"rooster teeth",
"en",
"region:us"
] | null | "2023-07-20T14:34:26Z" | ---
language:
- en
tags:
- redvsblue
- rvb
- fictional characters
- rooster teeth
--- |
PJDoes/peplo | PJDoes | "2025-03-16T23:55:31Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-16T23:32:48Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PEPLO
---
# Peplo
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PEPLO` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PJDoes/peplo', weight_name='lora.safetensors')
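# remember the trigger word, e.g. pipeline('PEPLO, a portrait photo')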
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Ver-viral-video/VIRAL.laura.amaya.telegram.se.filtro.foto.pack.de.erome.y.azul | Ver-viral-video | "2025-03-16T17:00:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-16T17:00:38Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?v=news-es-tvdf" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
shivvamm/Meta-Llama-3.1-8B_16bit_indianlaw | shivvamm | "2025-03-24T12:44:17Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-24T12:40:11Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/shivvamm/Meta-Llama-3.1-8B_16bit_indianlaw/2f32baeb0214dcd42e5427ed08e88e1e1f9f66da/README.md?%2Fshivvamm%2FMeta-Llama-3.1-8B_16bit_indianlaw%2Fresolve%2Fmain%2FREADME.md=&etag=%22743c39cdc508efb5898b04d73d6c34766322d779%22 |
jvbjkbjkbfjis/distillbert-base-drug-effectiveness-classification-model | jvbjkbjkbfjis | "2024-06-03T10:40:50Z" | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-03T09:28:52Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distillbert-base-drug-effectiveness-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-drug-effectiveness-classification-model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3800
- F1: 0.4333
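A minimal inference sketch (not part of the original card; the example review is illustrative, and label names come from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jvbjkbjkbfjis/distillbert-base-drug-effectiveness-classification-model",
)
print(classifier("This medication worked well and the side effects were mild."))
```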
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 1.4627 | 0.4117 |
| 1.6681 | 2.0 | 802 | 1.4132 | 0.4304 |
| 1.3968 | 3.0 | 1203 | 1.3890 | 0.4258 |
| 1.3289 | 4.0 | 1604 | 1.3800 | 0.4333 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Trojanssafsdg/hebrew-tts | Trojanssafsdg | "2025-04-13T19:15:11Z" | 0 | 0 | null | [
"csm",
"region:us"
] | null | "2025-04-13T19:14:43Z" | |
hawalurahman/mt5-base-qaqg-finetuned-TydiQA-id | hawalurahman | "2024-09-01T04:14:13Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-09-01T03:50:11Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: mt5-base-qaqg-finetuned-TydiQA-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-qaqg-finetuned-TydiQA-id
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8334
- Rouge1: 0.5212
- Rouge2: 0.3529
- Rougel: 0.5187
- Rougelsum: 0.5196
- Bleu: 0.3354
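Scores like the ones above can be computed with the 🤗 `evaluate` library — a minimal sketch, with placeholder strings standing in for real model outputs:
```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

predictions = ["jawaban yang dihasilkan model"]  # placeholder output
references = ["jawaban referensi"]               # placeholder reference

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
```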
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------:|
| 1.5538 | 1.0 | 1141 | 0.9977 | 0.4486 | 0.2915 | 0.4465 | 0.4476 | 0.2792 |
| 1.1151 | 2.0 | 2282 | 0.8848 | 0.4774 | 0.3098 | 0.4745 | 0.4753 | 0.3047 |
| 0.9266 | 3.0 | 3423 | 0.8454 | 0.5026 | 0.3348 | 0.5003 | 0.5014 | 0.3183 |
| 0.8067 | 4.0 | 4564 | 0.8357 | 0.5149 | 0.3440 | 0.5127 | 0.5133 | 0.3270 |
| 0.739 | 5.0 | 5705 | 0.8334 | 0.5212 | 0.3529 | 0.5187 | 0.5196 | 0.3354 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0a0+f70bd71a48.nv24.06
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ninagroot/Llama-360M-finaltest | ninagroot | "2024-05-31T09:42:44Z" | 169 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-30T07:34:53Z" | ---
tags:
- generated_from_trainer
model-index:
- name: Llama-360M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-360M
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.6417 | 1.0 | 3 | 8.5751 |
| 8.3908 | 2.0 | 6 | 8.3473 |
| 7.9583 | 3.0 | 9 | 7.9814 |
| 7.3598 | 4.0 | 12 | 7.5011 |
| 6.7468 | 5.0 | 15 | 6.9942 |
| 6.3345 | 6.0 | 18 | 6.6309 |
| 6.0489 | 7.0 | 21 | 6.3987 |
| 5.9651 | 8.0 | 24 | 6.2101 |
| 5.7683 | 9.0 | 27 | 5.9691 |
| 5.3051 | 10.0 | 30 | 5.5791 |
| 4.6791 | 11.0 | 33 | 5.1445 |
| 4.3962 | 12.0 | 36 | 4.8859 |
| 4.0007 | 13.0 | 39 | 4.7013 |
| 3.9473 | 14.0 | 42 | 4.4994 |
| 3.5486 | 15.0 | 45 | 4.3178 |
| 3.3243 | 16.0 | 48 | 4.1587 |
| 3.1305 | 17.0 | 51 | 4.0505 |
| 2.8703 | 18.0 | 54 | 3.9467 |
| 2.7661 | 19.0 | 57 | 3.8780 |
| 2.7976 | 20.0 | 60 | 3.8245 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tensorblock/OLMo-7B-0724-hf-GGUF | tensorblock | "2024-11-16T01:35:51Z" | 45 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:allenai/dolma",
"base_model:allenai/OLMo-7B-0724-hf",
"base_model:quantized:allenai/OLMo-7B-0724-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-14T02:58:09Z" | ---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
tags:
- TensorBlock
- GGUF
base_model: allenai/OLMo-7B-0724-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## allenai/OLMo-7B-0724-hf - GGUF
This repo contains GGUF format model files for [allenai/OLMo-7B-0724-hf](https://huggingface.co/allenai/OLMo-7B-0724-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OLMo-7B-0724-hf-Q2_K.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q2_K.gguf) | Q2_K | 2.439 GB | smallest, significant quality loss - not recommended for most purposes |
| [OLMo-7B-0724-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q3_K_S.gguf) | Q3_K_S | 2.833 GB | very small, high quality loss |
| [OLMo-7B-0724-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q3_K_M.gguf) | Q3_K_M | 3.159 GB | very small, high quality loss |
| [OLMo-7B-0724-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q3_K_L.gguf) | Q3_K_L | 3.437 GB | small, substantial quality loss |
| [OLMo-7B-0724-hf-Q4_0.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q4_0.gguf) | Q4_0 | 3.660 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [OLMo-7B-0724-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q4_K_S.gguf) | Q4_K_S | 3.688 GB | small, greater quality loss |
| [OLMo-7B-0724-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q4_K_M.gguf) | Q4_K_M | 3.897 GB | medium, balanced quality - recommended |
| [OLMo-7B-0724-hf-Q5_0.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q5_0.gguf) | Q5_0 | 4.437 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [OLMo-7B-0724-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q5_K_S.gguf) | Q5_K_S | 4.437 GB | large, low quality loss - recommended |
| [OLMo-7B-0724-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q5_K_M.gguf) | Q5_K_M | 4.560 GB | large, very low quality loss - recommended |
| [OLMo-7B-0724-hf-Q6_K.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q6_K.gguf) | Q6_K | 5.264 GB | very large, extremely low quality loss |
| [OLMo-7B-0724-hf-Q8_0.gguf](https://huggingface.co/tensorblock/OLMo-7B-0724-hf-GGUF/blob/main/OLMo-7B-0724-hf-Q8_0.gguf) | Q8_0 | 6.818 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/OLMo-7B-0724-hf-GGUF --include "OLMo-7B-0724-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/OLMo-7B-0724-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
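Once a quant is downloaded, it can be run directly with llama.cpp — a minimal sketch using the Q4_K_M file from the table above:
```shell
llama-cli -m OLMo-7B-0724-hf-Q4_K_M.gguf -p "Language modeling is" -n 128
```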
|
emilstabil/mt5-base_V25775_V44105 | emilstabil | "2023-11-20T02:04:21Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:emilstabil/mt5-base_V25775",
"base_model:finetune:emilstabil/mt5-base_V25775",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-19T18:03:56Z" | ---
license: apache-2.0
base_model: emilstabil/mt5-base_V25775
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base_V25775_V44105
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_V25775_V44105
This model is a fine-tuned version of [emilstabil/mt5-base_V25775](https://huggingface.co/emilstabil/mt5-base_V25775) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1019
- Rouge1: 30.2631
- Rouge2: 10.8564
- Rougel: 20.9297
- Rougelsum: 24.8312
- Gen Len: 80.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.32 | 0.81 | 500 | 2.1435 | 28.7386 | 10.9481 | 20.3975 | 23.8195 | 74.2704 |
| 2.2393 | 1.61 | 1000 | 2.1053 | 29.1856 | 10.7042 | 20.4864 | 24.1221 | 75.515 |
| 2.2124 | 2.42 | 1500 | 2.1157 | 28.6845 | 10.9397 | 20.4075 | 23.9154 | 74.8627 |
| 2.1635 | 3.23 | 2000 | 2.1232 | 28.8373 | 10.8364 | 20.4743 | 24.0269 | 74.1845 |
| 2.1148 | 4.03 | 2500 | 2.1149 | 29.0484 | 11.0898 | 20.6711 | 24.0963 | 73.9571 |
| 2.0904 | 4.84 | 3000 | 2.1101 | 29.5911 | 11.2027 | 20.883 | 24.3776 | 76.8412 |
| 2.0598 | 5.65 | 3500 | 2.1212 | 29.5276 | 10.8551 | 20.5466 | 24.1469 | 78.4506 |
| 2.0596 | 6.45 | 4000 | 2.1368 | 29.8832 | 10.9578 | 20.7962 | 24.4686 | 80.3777 |
| 2.0135 | 7.26 | 4500 | 2.1173 | 29.5314 | 10.6881 | 20.375 | 24.2483 | 81.5751 |
| 2.0085 | 8.06 | 5000 | 2.1050 | 29.7932 | 11.0481 | 20.8481 | 24.5598 | 78.5708 |
| 2.0006 | 8.87 | 5500 | 2.1233 | 30.4225 | 11.3125 | 21.1509 | 24.9171 | 81.3648 |
| 1.9888 | 9.68 | 6000 | 2.1067 | 29.9013 | 10.7672 | 20.6523 | 24.5878 | 78.7897 |
| 1.9496 | 10.48 | 6500 | 2.1036 | 29.7453 | 10.9583 | 20.7396 | 24.3824 | 78.7425 |
| 1.9513 | 11.29 | 7000 | 2.1125 | 29.5484 | 10.752 | 20.4861 | 24.3097 | 79.0558 |
| 1.9476 | 12.1 | 7500 | 2.1014 | 29.6296 | 10.8252 | 20.6412 | 24.2908 | 76.1202 |
| 1.9294 | 12.9 | 8000 | 2.1102 | 29.9456 | 10.9121 | 20.8077 | 24.5787 | 79.515 |
| 1.9036 | 13.71 | 8500 | 2.0977 | 30.1173 | 10.9352 | 20.9176 | 24.9725 | 80.9056 |
| 1.9415 | 14.52 | 9000 | 2.1011 | 29.9247 | 10.8223 | 20.7609 | 24.6858 | 81.103 |
| 1.8959 | 15.32 | 9500 | 2.0998 | 29.8002 | 10.6206 | 20.5674 | 24.6966 | 80.4549 |
| 1.9356 | 16.13 | 10000 | 2.1038 | 30.355 | 11.0359 | 21.0347 | 25.0475 | 80.8927 |
| 1.8958 | 16.94 | 10500 | 2.1029 | 30.3957 | 11.0562 | 21.1067 | 25.1431 | 82.1588 |
| 1.9093 | 17.74 | 11000 | 2.1002 | 30.4669 | 10.9894 | 20.9725 | 24.9598 | 81.1888 |
| 1.8969 | 18.55 | 11500 | 2.1045 | 30.4956 | 10.9426 | 20.9578 | 24.9973 | 81.824 |
| 1.8971 | 19.35 | 12000 | 2.1019 | 30.2631 | 10.8564 | 20.9297 | 24.8312 | 80.9356 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mradermacher/Anpla_S1-GGUF | mradermacher | "2025-02-17T17:47:18Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TOMFORD79/Anpla_S1",
"base_model:quantized:TOMFORD79/Anpla_S1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-17T17:27:51Z" | ---
base_model: TOMFORD79/Anpla_S1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TOMFORD79/Anpla_S1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
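Where multi-part files do appear, they are plain byte splits, so joining them is a simple concatenation — a sketch with hypothetical part filenames (this repo currently ships single-file quants):
```shell
# Hypothetical part names; adjust to the actual files in the repo
cat Anpla_S1.Q8_0.gguf.part1of2 Anpla_S1.Q8_0.gguf.part2of2 > Anpla_S1.Q8_0.gguf
```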
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Anpla_S1-GGUF/resolve/main/Anpla_S1.f16.gguf) | f16 | 15.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sudsBLEDN/Link.Original.video.de.26is.muerte.de.26is.louis | sudsBLEDN | "2025-04-15T06:47:35Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-15T06:47:35Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Triangle104/Dusk_Rainbow-Q4_K_S-GGUF | Triangle104 | "2025-02-01T02:10:12Z" | 26 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:SicariusSicariiStuff/Dusk_Rainbow",
"base_model:quantized:SicariusSicariiStuff/Dusk_Rainbow",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-01T02:07:55Z" | ---
license: llama3
language:
- en
base_model: SicariusSicariiStuff/Dusk_Rainbow
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dusk_Rainbow-Q4_K_S-GGUF
This model was converted to GGUF format from [`SicariusSicariiStuff/Dusk_Rainbow`](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow) for more details on the model.
---
Censorship level: Very low
9.1 / 10 (10 completely uncensored)
Intended use: Creative Writing, General tasks.
This model is the result of training on a fraction (16M tokens) of the testing data intended for LLAMA-3_8B_Unaligned's upcoming beta.
The base model is a merge of merges made by Invisietch, named EtherealRainbow-v0.3-8B.
The name of this model reflects the base used for this finetune, while hinting at the darker, more uncensored aspects associated with the LLAMA-3_8B_Unaligned project.
As a result of the unique data added, this model shows exceptional adherence to instructions about paragraph length and to the story-writing prompt. I would like to emphasize: no ChatGPT / Claude output was used for any of the additional data in this finetune. The goal is to eventually have a model with a minimal amount of slop; that cannot be reliably done by relying on API models, which pollute datasets with their bias and repetitive words.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dusk_Rainbow-Q4_K_S-GGUF --hf-file dusk_rainbow-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dusk_Rainbow-Q4_K_S-GGUF --hf-file dusk_rainbow-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dusk_Rainbow-Q4_K_S-GGUF --hf-file dusk_rainbow-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dusk_Rainbow-Q4_K_S-GGUF --hf-file dusk_rainbow-q4_k_s.gguf -c 2048
```
|
1daniar/ppo-Pyramids | 1daniar | "2023-07-26T09:06:35Z" | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-07-26T09:06:30Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 1daniar/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
lokas/spam-usernames-classifier | lokas | "2023-04-01T17:21:23Z" | 2,185 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"arxiv:1910.09700",
"doi:10.57967/hf/0475",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-23T13:40:31Z" | ---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk
- mg
- ms
- ml
- mr
- mn
- min
- ne
- new
- nb
- nn
- oc
- fa
- pms
- pl
- pt
- pa
- ro
- ru
- sco
- sr
- scn
- sk
- sl
- aze
- es
- su
- sw
- sv
- tl
- tg
- th
- ta
- tt
- te
- tr
- uk
- ud
- uz
- vi
- vo
- war
- cy
- fry
- pnb
- yo
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
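In the absence of an official snippet, here is a minimal sketch using the standard `transformers` pipeline (the example username is hypothetical; label names come from the checkpoint config):
```python
from transformers import pipeline

# Text-classification pipeline over the username classifier
classifier = pipeline("text-classification", model="lokas/spam-usernames-classifier")
print(classifier("xX_free_crypto_bot_2023_Xx"))  # hypothetical username
```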
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hisaoka/pegasus-pubmed_radiology-ai-cardiothoracic-0.9 | hisaoka | "2023-01-30T04:10:06Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-12-12T14:01:20Z" | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-pubmed_radiology-ai-cardiothoracic-0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-pubmed_radiology-ai-cardiothoracic-0.9
This model is a fine-tuned version of [google/pegasus-pubmed](https://huggingface.co/google/pegasus-pubmed) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yuiseki/tinyllama-fr-wikipedia-aya-1.5T-v0.1 | yuiseki | "2024-03-26T06:59:16Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-26T06:57:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
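Pending official instructions, a minimal text-generation sketch (the French prompt is an arbitrary example, chosen because the model targets French Wikipedia):
```python
from transformers import pipeline

# Standard text-generation pipeline over this checkpoint
generator = pipeline("text-generation", model="yuiseki/tinyllama-fr-wikipedia-aya-1.5T-v0.1")
print(generator("La Tour Eiffel est", max_new_tokens=50)[0]["generated_text"])
```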
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luckyf1998/distilbert-base-uncased-finetuned-emotion | luckyf1998 | "2023-08-03T14:34:09Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-03T11:31:49Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9227217081326218
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.923
- F1: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8269 | 1.0 | 250 | 0.3107 | 0.909 | 0.9072 |
| 0.2402 | 2.0 | 500 | 0.2148 | 0.923 | 0.9227 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.0
- Datasets 1.16.1
- Tokenizers 0.13.3
|
sophie-rain-leaked-nudes/Sophie-rain-leaked-videos-Original | sophie-rain-leaked-nudes | "2025-03-15T06:58:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-15T06:57:55Z" | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
mradermacher/Marco-01-slerp5-7B-GGUF | mradermacher | "2024-11-25T18:00:11Z" | 8 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Marco-01-slerp5-7B",
"base_model:quantized:allknowingroger/Marco-01-slerp5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-25T09:55:01Z" | ---
base_model: allknowingroger/Marco-01-slerp5-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/allknowingroger/Marco-01-slerp5-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp5-7B-GGUF/resolve/main/Marco-01-slerp5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Etso/finetuning-sentiment-model-3000-samples | Etso | "2024-12-05T15:32:51Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-05T15:22:17Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3685
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
IngoTB303/PPO-LunarLander-v2 | IngoTB303 | "2022-12-19T12:00:28Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-19T12:00:02Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.51 +/- 21.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — adjust it to the actual file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("IngoTB303/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
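To sanity-check the reported mean reward, evaluate the loaded agent over a few episodes — a sketch, assuming the classic `gym` API that matches this SB3 era:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the agent loaded above over 10 episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```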
|
metricspace/EntityAnonymization-3B-V0.9 | metricspace | "2023-11-10T22:53:15Z" | 14 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"dataset:metricspace/AnonymeData",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-10-02T15:25:13Z" | ---
inference: false
license: apache-2.0
datasets:
- metricspace/AnonymeData
pipeline_tag: text2text-generation
---
# EntityAnonymization-3B-V0.9
EntityAnonymization identifies entities in texts and replaces them with randomised versions.
In a first pass, the entities are recognised and a dictionary with similar but randomised variants is created.
In a second run, the original text and the dictionary are provided and the paraphrased variant is generated.
The two-step approach allows the dictionary to be cached, so that anonymised text which has been processed further can be mapped back to its original entities.
# License
This Natural Language Processing (NLP) model is made available under the Apache License, Version 2.0. You are free to use, modify, and distribute this software according to the terms and conditions of the Apache 2.0 License. For the full license text, please refer to the Apache 2.0 License.
# Usage and Specific Capabilities
## Text Length Limitation
The model is optimized to analyze texts containing up to 2048 tokens. If your text exceeds this limit, we recommend splitting it into smaller chunks, each containing no more than 2048 tokens. Each chunk can then be processed separately.
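Since the limit is in tokens rather than characters, chunking is easiest with the model's own tokenizer — a minimal sketch (chunk boundaries may fall mid-sentence; a real pipeline would split on sentence boundaries):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("metricspace/EntityAnonymization-3B-V0.9")

def split_into_chunks(text, max_tokens=2048):
    # Tokenize once, then slice the id sequence into <= max_tokens windows
    ids = tokenizer(text, add_special_tokens=False).input_ids
    return [tokenizer.decode(ids[i:i + max_tokens]) for i in range(0, len(ids), max_tokens)]
```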
## Supported Languages
Bulgarian, Chinese, Czech, Dutch, English, Estonian, Finnish, French, German, Greek, Indonesian, Italian, Japanese, Korean, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Turkish
# Use Cases
## Entity Resampling and Anonymization
Introducing a cutting-edge model tailored to the task of extracting entities from sensitive text and anonymizing it. This model specializes in identifying and safeguarding confidential information, ensuring organizations' compliance with stringent data privacy regulations and minimizing the potential for inadvertent disclosure of classified data and trade secrets.
# Example Usage
```python
!pip install sentencepiece
!pip install transformers
```
```python
import torch
import json
import re
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("metricspace/EntityAnonymization-3B-V0.9")
model = AutoModelForCausalLM.from_pretrained("metricspace/EntityAnonymization-3B-V0.9", torch_dtype=torch.bfloat16)
model.to("cuda:0")
def extract_last_assistant_response(input_text):
    # Find the (single) occurrence of "ASSISTANT:" in the generated text
    match = re.search(r'ASSISTANT:', input_text)
    # Everything after the marker is the assistant's response
    start_index = match.end()
    response = input_text[start_index:].strip()
    return response
# Input example
text_to_anonymize = '''Subject: HR Incident Report: Speculation of Drug Misuse by Mr. Benjamin Mitchell
Dear Mrs. Alice Williams,
I trust you're well. I wish to bring to your attention a concerning matter involving one of our esteemed employees, Mr. Benjamin Mitchell.
Employee Details:
Name: Benjamin Mitchell
Position: Senior Marketing Creative
Department: Marketing
Date of Joining: January 15, 2020
Reporting Manager: Mrs. Jane Fitzgerald
Incident Details:
Date: October 25, 2023
Location: Restroom, 4th Floor
Time: 11:45 AM
Description of Incident:
On the date specified, a few colleagues reported unusual behavior exhibited by Mr. Mitchell, which raised concerns about potential drug misuse. Witnesses mentioned that Benjamin appeared disoriented and was found in the restroom for an extended period. Some employees also discovered unidentified pills in close proximity to his chair.
Witness Accounts:
Ms. Emily Clark: "Benjamin seemed distracted and not his usual self today. He's been taking frequent breaks and appears a bit disoriented."
Mr. Robert Taylor: "I found some pills near his chair on the floor. It's concerning, and I felt it necessary to report."
Immediate Actions Taken:
Mr. Benjamin Mitchell was approached by HR for a preliminary conversation to understand the situation.
Mrs. Jane Fitzgerald, his reporting manager, was made aware of the concerns.
Recommendations:
It's crucial to have a private and supportive conversation with Mr. Mitchell to understand if there's an underlying issue.
Consider referring Benjamin to our Employee Assistance Program (EAP) for counseling or support.
It may be beneficial to organize a session on drug awareness and workplace safety for all employees.
It's of utmost importance to handle this situation with sensitivity and discretion, ensuring the wellbeing of Mr. Mitchell and maintaining the integrity of our workplace environment. This email serves as a formal documentation of the incident. We'll determine the subsequent course of action based on your guidance and the recommendations provided.
Looking forward to your direction on this matter.
'''
print(text_to_anonymize)
# Step 1: Extracting entities from text
prompt = f'USER: Resample the entities: {text_to_anonymize}\n\nASSISTANT:'
inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
output_entities = model.generate(inputs.input_ids, max_new_tokens=300, do_sample=False, temperature=0.8, penalty_alpha=1.3, top_k=180, num_beams=5, repetition_penalty=2.3)
raw_output_entities_text = tokenizer.decode(output_entities[0])
entities = extract_last_assistant_response(raw_output_entities_text)
print('-----------Entities----------------')
try:
    entities = re.search(r"\{.*?\}", entities, re.DOTALL).group(0)
    data_dict = eval(entities)
    formatted_json = json.dumps(data_dict, indent=4)
    print(formatted_json)
except:
    # badly formatted JSON
    print(entities)
#output
'''
{
"Mr. Benjamin Mitchell": "Mr. Edward Martin",
"Mrs. Alice Williams": "Mrs. Charlotte Johnson",
"January 15, 2020": "January 15, 2020",
"Mrs. Jane Fitzgerald": "Mrs. Jane Anderson",
"October 25, 2023": "October 25, 2023",
"4th Floor": "topmost floor",
"11:45 AM": "midday",
"Emily Clark": "Marie Foster",
"Employee Assistance Program (EAP)": "Personal Assistance Program (PAP)",
"Robert Taylor": "Benjamin Adams",
}
'''
# Step 2: Use entities to resample the original text
prompt_2 = f"USER: Rephrase with {entities}: {text_to_anonymize}\n\nASSISTANT:"
inputs = tokenizer(prompt_2, return_tensors='pt').to('cuda:0')
output_resampled = model.generate(inputs.input_ids, max_length=2048)
raw_output_resampled_text = tokenizer.decode(output_resampled[0])
resampled_text = extract_last_assistant_response(raw_output_resampled_text)
print('---------Anonymized Version--------')
print(resampled_text)
#output:
'''
Subject: HR Incident Report: Speculation of Drug Misuse by Mr. Edward Martin
Dear Mrs. Charlotte Johnson,
I trust you're well. I wish to bring to your attention a concerning matter involving one of our esteemed employees, Mr. Edward Martin.
Employee Details:
Name: Edward Martin
Position: Senior Marketing Creative
Department: Marketing
Date of Joining: January 15, 2020
Reporting Manager: Mrs. Jane Anderson
Incident Details:
Date: October 25, 2023
Location: Restroom, topmost floor
Time: midday
Description of Incident:
On the date specified, a few colleagues reported unusual behavior exhibited by Mr. Martin, which raised concerns about potential drug misuse. Witnesses mentioned that Edward appeared disoriented and was found in the restroom for an extended period. Some employees also discovered unidentified pills in close proximity to his chair.
Witness Accounts:
Ms. Marie Foster: "Edward seemed distracted and not his usual self today. He's been taking frequent breaks and appears a bit disoriented."
Mr. Benjamin Adams: "I found some pills near his chair on the floor. It's concerning, and I felt it necessary to report."
Immediate Actions Taken:
Mr. Edward Martin was approached by People Management for a preliminary conversation to understand the situation.
Mrs. Jane Anderson, his reporting manager, was made aware of the concerns.
Recommendations:
It's crucial to have a private and supportive conversation with Mr. Martin to understand if there's an underlying issue.
Consider referring Edward to our Personal Assistance Program (PAP) for counseling or support.
It may be beneficial to organize a session on drug awareness and workplace safety for all employees.
It's of utmost importance to handle this situation with sensitivity and discretion, ensuring the wellbeing of Mr. Martin and maintaining the integrity of our workplace environment. This email serves as a formal documentation of the incident. We'll determine the subsequent course of action based on your guidance and the recommendations provided.
Looking forward to your direction on this matter.
'''
```
# Example: Process anonymized version with GPT4 and change entities back
```python
import torch
import json
import re
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("metricspace/EntityAnonymization-3B-V0.9")
model = AutoModelForCausalLM.from_pretrained("metricspace/EntityAnonymization-3B-V0.9", torch_dtype=torch.bfloat16)
model.to("cuda:0")
# Anonymized input
anonymized_text = '''Subject: HR Incident Report: Speculation of Drug Misuse by Mr. Edward Martin
Dear Mrs. Charlotte Johnson,
I trust you're well. I wish to bring to your attention a concerning matter involving one of our esteemed employees, Mr. Edward Martin.
Employee Details:
Name: Edward Martin
Position: Senior Marketing Creative
Department: Marketing
Date of Joining: January 15, 2020
Reporting Manager: Mrs. Jane Anderson
Incident Details:
Date: October 25, 2023
Location: Restroom, topmost floor
Time: midday
Description of Incident:
On the date specified, a few colleagues reported unusual behavior exhibited by Mr. Martin, which raised concerns about potential drug misuse. Witnesses mentioned that Edward appeared disoriented and was found in the restroom for an extended period. Some employees also discovered unidentified pills in close proximity to his chair.
Witness Accounts:
Ms. Marie Foster: "Edward seemed distracted and not his usual self today. He's been taking frequent breaks and appears a bit disoriented."
Mr. Benjamin Adams: "I found some pills near his chair on the floor. It's concerning, and I felt it necessary to report."
Immediate Actions Taken:
Mr. Edward Martin was approached by People Management for a preliminary conversation to understand the situation.
Mrs. Jane Anderson, his reporting manager, was made aware of the concerns.
Recommendations:
It's crucial to have a private and supportive conversation with Mr. Martin to understand if there's an underlying issue.
Consider referring Edward to our Personal Assistance Program (PAP) for counseling or support.
It may be beneficial to organize a session on drug awareness and workplace safety for all employees.
It's of utmost importance to handle this situation with sensitivity and discretion, ensuring the wellbeing of Mr. Martin and maintaining the integrity of our workplace environment. This email serves as a formal documentation of the incident. We'll determine the subsequent course of action based on your guidance and the recommendations provided.
Looking forward to your direction on this matter.
'''
# Entities map
entities_map = '''
{
"Mr. Benjamin Mitchell": "Mr. Edward Martin",
"Mrs. Alice Williams": "Mrs. Charlotte Johnson",
"January 15, 2020": "January 15, 2020",
"Mrs. Jane Fitzgerald": "Mrs. Jane Anderson",
"October 25, 2023": "October 25, 2023",
"4th Floor": "topmost floor",
"11:45 AM": "midday",
"Emily Clark": "Marie Foster",
"Employee Assistance Program (EAP)": "Personal Assistance Program (PAP)",
"Robert Taylor": "Benjamin Adams",
}
'''
# Step 1: Processing anonymized text with GPT-4
import openai
openai.api_key = "<API_KEY>"

completion = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": f"Write an official warning letter to the employee, stating that we do not tolerate this behavior and that a further incident will result in termination, in the name of Mrs. Charlotte Johnson, Human Resources Manager. Here is the report with the information: {anonymized_text}"}
    ]
)
print(completion.choices[0].message.content)
#output
'''
Subject: Official Warning – Substance Misuse Policy Violation
Dear Mr. Edward Martin,
We hope this letter finds you well. This letter serves as an official warning in regards to an incident that occurred on October 25, 2023, in which signs of drug misuse were reported. This alleged conduct is highly alarming and contrary to our company’s policies and guidelines.
The incident detailed allegations of unusual behavior indicative of possible substance abuse. Colleagues reported that you appeared disoriented and were found in the restroom for an extended period. Additionally, unidentified pills were discovered near your workspace.
Our company is committed to providing a safe and non-detrimental work environment for all its workforce. This commitment is compromised when any type of drug misuse occurs. We draw your attention to our Employee Handbook, specifically 'Section 5: Substance Misuse', that states any illegal drug use, substance misuse or distribution thereof is strictly prohibited and could be subject to severe disciplinary action, including termination.
This is an official warning that such behavior misaligns with our workplace norms and cannot be tolerated. Another incident like this or similar breach of company guidelines may lead to further disciplinary action, up to and including termination of employment.
Please note that this is not an assumption of your guilt but an assertion to remain vigilant against any practices that could harm you or the workplace environment. We encourage you to utilize our Personal Assistance Program (PAP) as a tool for counseling and support, if needed.
We believe in your potential to rectify this situation and to maintain the high standards we are all accustomed to in our organization.
Should you need assistance or if you wish to discuss this matter further, please feel free to reach out to me. We appreciate your immediate attention to this important issue.
Yours sincerely,
Mrs. Charlotte Johnson
Human Resources Manager
'''
# Step 2: Replace the entities back in the text processed by GPT-4.
import ast
def swap_keys_and_values_in_string(input_str):
    # Convert the input string to a dictionary
    input_dict = ast.literal_eval(input_str)
    # Swap the keys and values
    swapped_dict = {v: k for k, v in input_dict.items()}
    # Convert the swapped dictionary back to a string
    swapped_str = str(swapped_dict)
    return swapped_str
gpt_response = completion.choices[0].message.content
entities_map = swap_keys_and_values_in_string(entities_map)
prompt = f"USER: Rephrase with {entities_map}: {gpt_response}\n\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
outputs = model.generate(inputs.input_ids, max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
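# Note: depending on tokenizer settings, the decoded text may still echo the prompt;
# a helper like extract_last_assistant_response from the first example could be
# reused here to isolate the assistant's reply.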
#output:
'''
Subject: Official Warning – Substance Misuse Policy Violation
Dear Mr. Benjamin Mitchell,
We hope this letter finds you well. This letter serves as an official warning in regards to an incident that occurred on January 15, 2020, in which signs of drug misuse were reported. This alleged conduct is highly alarming and contrary to our company’s policies and guidelines.
The incident detailed allegations of unusual behavior indicative of possible substance abuse. Colleagues reported that you appeared disoriented and were found in the restroom for an extended period. Additionally, unidentified pills were discovered near your workspace.
Our company is committed to providing a safe and non-detrimental work environment for all its workforce. This commitment is compromised when any type of drug misuse occurs. We draw your attention to our Employee Handbook, specifically 'Section 5: Substance Misuse', that states any illegal drug use, substance misuse or distribution thereof is strictly prohibited and could be subject to severe disciplinary action, including termination.
This is an official warning that such behavior misaligns with our workplace norms and cannot be tolerated. Another incident like this or similar breach of company guidelines may lead to further disciplinary action, up to and including termination of employment.
Please note that this is not an assumption of your guilt but an assertion to remain vigilant against any practices that could harm you or the workplace environment. We encourage you to utilize our Employee Assistance Program (EAP) as a tool for counseling and support, if needed.
We believe in your potential to rectify this situation and to maintain the high standards we are all accustomed to in our organization.
Should you need assistance or if you wish to discuss this matter further, please feel free to reach out to me. We appreciate your immediate attention to this important issue.
Yours sincerely,
Mrs. Alice Williams,
Human Resources Manager.
'''
```
…
# Dataset and Training Documentation for Audit
If you require the original dataset used for training this model, or further documentation related to its training and architecture for audit purposes, you can request this information by contacting us.
## Further Tuning Services for Custom Use Cases
For specialized needs or custom use cases, we offer further tuning services to adapt the model to your specific requirements. To inquire about these services, please reach out to us at:
📧 Email: [email protected]
Please note that the availability of the dataset, additional documentation, and tuning services may be subject to certain conditions and limitations. |
gando2077/gando | gando2077 | "2025-03-08T14:44:47Z" | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | "2025-03-08T14:30:01Z" | ---
license: openrail++
---
|
mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF | mradermacher | "2025-01-04T13:47:25Z" | 359 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hbin0701/DeepSeek_GSM8K_Self_Explore",
"base_model:quantized:hbin0701/DeepSeek_GSM8K_Self_Explore",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-04T13:40:46Z" | ---
base_model: hbin0701/DeepSeek_GSM8K_Self_Explore
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hbin0701/DeepSeek_GSM8K_Self_Explore
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
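As a quick sketch (assuming a recent llama.cpp build with `llama-cli` on your PATH), a single quant from the table below can be fetched and run directly:
```bash
llama-cli --hf-repo mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF \
  --hf-file DeepSeek_GSM8K_Self_Explore.Q4_K_M.gguf \
  -p "Question: Natalia sold 48 clips in April and half as many in May. How many in total?"
```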
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek_GSM8K_Self_Explore-GGUF/resolve/main/DeepSeek_GSM8K_Self_Explore.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AlanHou/trainer-chapter5 | AlanHou | "2024-06-12T10:00:05Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-12T09:56:28Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: trainer-chapter5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer-chapter5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2428
- Accuracy: 0.9218
- F1: 0.9218
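As a hedged usage sketch (the training dataset is not documented above, so the label names are whatever this checkpoint carries):
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub
clf = pipeline("text-classification", model="AlanHou/trainer-chapter5")
print(clf("This chapter's fine-tuning walkthrough was easy to follow."))
```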
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 313 | 0.2610 | 0.9104 | 0.9102 |
| 0.3012 | 2.0 | 626 | 0.2428 | 0.9218 | 0.9218 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF | Triangle104 | "2025-02-14T11:01:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"reasoning",
"r1",
"vllm",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-14T10:57:17Z" | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- llama-cpp
- gguf-my-repo
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
library_name: transformers
model-index:
- name: DeepHermes-3-Llama-3.1-8B
results: []
---
# Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF
This model was converted to GGUF format from [`NousResearch/DeepHermes-3-Llama-3-8B-Preview`](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview) for more details on the model.
---
DeepHermes 3 Preview is the latest version of our flagship Hermes series of LLMs by Nous Research, and one of the first models in the world to unify Reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes into one model. We have also improved LLM annotation, judgement, and function calling.
DeepHermes 3 Preview is one of the first LLMs to unify both "intuitive", traditional-mode responses and long chain-of-thought reasoning responses into a single model, toggled by a system prompt.
Hermes 3, the predecessor of DeepHermes 3, is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
This is a preview Hermes with early reasoning capabilities, distilled from R1 across a variety of tasks that benefit from reasoning and objectivity. Some quirks may be discovered! Please let us know any interesting findings or issues you discover!
Note: To toggle REASONING ON, you must use the following system prompt:
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
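As an illustration (not part of the original card), the reasoning toggle can be applied with the standard `transformers` chat-template API on the unquantized base model; the user question is a placeholder:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/DeepHermes-3-Llama-3-8B-Preview")
messages = [
    # System prompt that switches the model into reasoning mode
    {"role": "system", "content": "You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."},
    {"role": "user", "content": "What is 17 * 24?"},  # placeholder question
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```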
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
```
|
LHRuig/mkflx | LHRuig | "2025-01-10T09:58:49Z" | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-10T09:56:12Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# mkflx
<Gallery />
## Model description
mkflx lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/mkflx/tree/main) them in the Files & versions tab.
|
Vaishali0803/flan-t5-base | Vaishali0803 | "2025-02-15T06:21:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | "2025-02-15T05:58:34Z" | ---
library_name: peft
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3949
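As a hedged loading sketch (the card does not document the downstream task, so a generic seq2seq setup is assumed):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, then attach the LoRA/PEFT adapter from this repo
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "Vaishali0803/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
```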
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 0.4430 |
| 0.6166 | 2.0 | 14 | 0.4243 |
| 0.54 | 3.0 | 21 | 0.4165 |
| 0.54 | 4.0 | 28 | 0.4139 |
| 0.4603 | 5.0 | 35 | 0.4086 |
| 0.432 | 6.0 | 42 | 0.4087 |
| 0.432 | 7.0 | 49 | 0.4003 |
| 0.4 | 8.0 | 56 | 0.3984 |
| 0.4068 | 9.0 | 63 | 0.3951 |
| 0.3968 | 10.0 | 70 | 0.3949 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0 |
research-backup/mt5-base-dequad-qg-trimmed-30000 | research-backup | "2023-03-03T12:57:56Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-03T12:20:12Z" | # Vocabulary Trimmed [lmqg/mt5-base-dequad-qg](https://huggingface.co/lmqg/mt5-base-dequad-qg): `vocabtrimmer/mt5-base-dequad-qg-trimmed-30000`
This model is a trimmed version of [lmqg/mt5-base-dequad-qg](https://huggingface.co/lmqg/mt5-base-dequad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
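As a usage sketch (not part of the original card), the trimmed checkpoint loads like any other mT5 seq2seq model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vocabtrimmer/mt5-base-dequad-qg-trimmed-30000"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # trimmed 30k-entry vocabulary
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```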
The following table shows a summary of the trimming process.
| | lmqg/mt5-base-dequad-qg | vocabtrimmer/mt5-base-dequad-qg-trimmed-30000 |
|:---------------------------|:--------------------------|:------------------------------------------------|
| parameter_size_full | 582,384,384 | 244,312,320 |
| parameter_size_embedding | 384,155,136 | 46,083,072 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 41.95 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 30000 | 2 | |
kanishka/smolm-autoreg-bpe-counterfactual-babylm-all_det_removal-3e-4 | kanishka | "2023-12-13T19:59:47Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_aann_all_det_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T04:42:59Z" | ---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_aann_all_det_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-all_det_removal-3e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_aann_all_det_removal
type: kanishka/counterfactual_babylm_aann_all_det_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.40870703690431515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-all_det_removal-3e-4
This model was trained from scratch on the kanishka/counterfactual_babylm_aann_all_det_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4401
- Accuracy: 0.4087
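As a hedged generation sketch (the prompt is an arbitrary placeholder):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-all_det_removal-3e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy continuation of a short placeholder prompt
inputs = tokenizer("The child saw a", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```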
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5638 | 1.0 | 37189 | 3.7368 | 0.3622 |
| 3.3412 | 2.0 | 74378 | 3.5511 | 0.3852 |
| 3.2337 | 3.0 | 111567 | 3.4790 | 0.3939 |
| 3.1708 | 4.0 | 148756 | 3.4317 | 0.3994 |
| 3.1125 | 5.0 | 185945 | 3.4199 | 0.4013 |
| 3.0765 | 6.0 | 223134 | 3.4033 | 0.4038 |
| 3.0361 | 7.0 | 260323 | 3.3774 | 0.4055 |
| 3.0108 | 8.0 | 297512 | 3.3853 | 0.4064 |
| 2.9931 | 9.0 | 334701 | 3.3719 | 0.4072 |
| 2.9628 | 10.0 | 371890 | 3.3715 | 0.4079 |
| 2.9363 | 11.0 | 409079 | 3.3925 | 0.4082 |
| 2.9167 | 12.0 | 446268 | 3.3869 | 0.4083 |
| 2.8918 | 13.0 | 483457 | 3.3873 | 0.4089 |
| 2.8737 | 14.0 | 520646 | 3.3924 | 0.4086 |
| 2.8545 | 15.0 | 557835 | 3.3917 | 0.4090 |
| 2.8353 | 16.0 | 595024 | 3.4101 | 0.4089 |
| 2.8209 | 17.0 | 632213 | 3.4230 | 0.4089 |
| 2.7977 | 18.0 | 669402 | 3.4256 | 0.4089 |
| 2.781 | 19.0 | 706591 | 3.4295 | 0.4089 |
| 2.7692 | 20.0 | 743780 | 3.4401 | 0.4087 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ivailobsu/llama3.2-invoice-extraction-pass-1 | ivailobsu | "2025-02-21T08:12:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-20T20:58:41Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF | MaziyarPanahi | "2024-12-28T16:33:37Z" | 98 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B",
"base_model:quantized:jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B",
"region:us"
] | text-generation | "2024-12-28T16:14:13Z" | ---
base_model: jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B
inference: false
model_creator: jaspionjader
model_name: Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF](https://huggingface.co/MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF)
- Model creator: [jaspionjader](https://huggingface.co/jaspionjader)
- Original model: [jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B](https://huggingface.co/jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B)
## Description
[MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF](https://huggingface.co/MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF) contains GGUF format model files for [jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B](https://huggingface.co/jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
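As a quick sketch (assuming `huggingface-cli` is installed; exact quant filenames can be browsed in the repo's Files tab), a single quant can be downloaded like this:
```bash
huggingface-cli download MaziyarPanahi/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF \
  --include "*Q4_K_M*" --local-dir .
```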
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
springroll4/inaba | springroll4 | "2023-12-29T01:46:33Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2023-12-08T17:13:27Z" | ---
license: other
license_name: inaba
license_link: LICENSE
---
|
anas-awadalla/t5-base-finetuned-squad-infilling-lr-3e-5 | anas-awadalla | "2022-10-09T02:58:00Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-10-08T23:06:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-finetuned-squad-infilling-lr-3e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-squad-infilling-lr-3e-5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
baby-dev/e7f6e4d2-0d5f-487c-8c4e-cdd7d0d4016a | baby-dev | "2025-02-06T00:29:14Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | "2025-02-06T00:02:24Z" | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7f6e4d2-0d5f-487c-8c4e-cdd7d0d4016a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# e7f6e4d2-0d5f-487c-8c4e-cdd7d0d4016a
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/mu0gum_-_AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3-8bits | RichardErkhov | "2025-02-28T06:02:54Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-28T06:01:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3 - bnb 8bits
- Model creator: https://huggingface.co/mu0gum/
- Original model: https://huggingface.co/mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3/
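As a hedged loading sketch for this bitsandbytes 8-bit export (standard `transformers` API; `bitsandbytes` must be installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/mu0gum_-_AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint's config carries the 8-bit quantization settings
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```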
Original model description:
---
license: cc-by-nc-4.0
---
# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v1.3
Base model: 42dot/42dot_LLM-PLM-1.3B
Training data: roughly 63,000 examples from a self-built Open Orca-style dataset (deduplicated, with adjusted data distribution)
Training method: full finetuning
Epochs: 3
## ko-lm-evaluation-harness(5-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko|
|--|--|--|--|
|0.522079772079772|0.722|0.47|0.557|
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|