Column schema: modelId (string, length 5–137), author (string, length 2–42), last_modified (date, 2020-02-15 11:33:14 to 2025-03-29 12:26:52), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 401 classes), tags (sequence, length 1 to 4.05k), pipeline_tag (string, 54 classes), createdAt (date, 2022-03-02 23:29:04 to 2025-03-29 12:26:36), card (string, length 11 to 1.01M).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
LoneStriker/Tess-10.7B-v1.5-8.0bpw-h8-exl2 | LoneStriker | "2024-01-27T10:31:04Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-27T10:26:33Z" | ---
license: apache-2.0
---
<br>

<br>
Tess, short for Tesoro ("treasure" in Italian), is a general-purpose large language model series. Tess-10.7B-v1.5 was trained on the SOLAR-10.7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
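For illustration, a minimal way to render this template in Python (the helper and the example strings below are hypothetical, not part of the original card):

```python
# Hypothetical helper that fills the SYSTEM/USER/ASSISTANT template shown above.
def build_prompt(system: str, user: str) -> str:
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

print(build_prompt("You are a helpful assistant.", "Name three board games for four players."))
```
|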
mofyrt/bert-base-uncased-finetuned-cola | mofyrt | "2022-04-23T18:04:55Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-23T13:35:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5905946625710334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7445
- Matthews Correlation: 0.5906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
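These map onto the Hugging Face `Trainer` API roughly as follows (a hypothetical reconstruction; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration above.
args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```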
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4926 | 1.0 | 535 | 0.5155 | 0.4941 |
| 0.2971 | 2.0 | 1070 | 0.5561 | 0.5320 |
| 0.1947 | 3.0 | 1605 | 0.7230 | 0.5677 |
| 0.1293 | 4.0 | 2140 | 0.7445 | 0.5906 |
| 0.0867 | 5.0 | 2675 | 0.8836 | 0.5788 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MBZUAI/swiftformer-s | MBZUAI | "2023-05-12T03:12:44Z" | 79 | 1 | transformers | [
"transformers",
"pytorch",
"swiftformer",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.15446",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-04-18T07:37:01Z" | ---
datasets:
- imagenet-1k
library_name: transformers
pipeline_tag: image-classification
---
# SwiftFormer (swiftformer-s)
## Model description
The SwiftFormer model was proposed in [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
The SwiftFormer paper introduces a novel efficient additive attention mechanism that replaces the quadratic matrix-multiplication operations in self-attention with linear element-wise multiplications. A series of models called 'SwiftFormer' is built on this mechanism and achieves state-of-the-art trade-offs between accuracy and mobile inference speed. Even the small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on an iPhone 14, making it more accurate and 2× faster than MobileViT-v2.
## Intended uses & limitations
## How to use
```python
import requests
from PIL import Image
from transformers import ViTImageProcessor
from transformers.models.swiftformer import SwiftFormerForImageClassification

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image into pixel-value tensors
processor = ViTImageProcessor.from_pretrained('shehan97/swiftformer-s')
inputs = processor(images=image, return_tensors="pt")

# Run the classifier and report the predicted ImageNet-1K class
new_model = SwiftFormerForImageClassification.from_pretrained('shehan97/swiftformer-s')
output = new_model(inputs['pixel_values'], output_hidden_states=True)
logits = output.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", new_model.config.id2label[predicted_class_idx])
```
## Limitations and bias
## Training data
The classification model is trained on the ImageNet-1K dataset.
## Training procedure
## Evaluation results
|
mradermacher/LLama3.3-Rhino-70B-RAG-GGUF | mradermacher | "2025-01-16T07:55:24Z" | 214 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:QomSSLab/LLama3.3-Rhino-70B-RAG",
"base_model:quantized:QomSSLab/LLama3.3-Rhino-70B-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-16T01:14:00Z" | ---
base_model: QomSSLab/LLama3.3-Rhino-70B-RAG
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/QomSSLab/LLama3.3-Rhino-70B-RAG
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
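For example, a minimal Python sketch for joining the split Q6_K files listed below (equivalent to concatenating the parts in order; the file names follow the table):

```python
import shutil
from pathlib import Path

# Stream the part files, in order, into a single .gguf
# without loading tens of GB into RAM.
parts = sorted(Path(".").glob("LLama3.3-Rhino-70B-RAG.Q6_K.gguf.part*"))
with open("LLama3.3-Rhino-70B-RAG.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```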
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLama3.3-Rhino-70B-RAG-GGUF/resolve/main/LLama3.3-Rhino-70B-RAG.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
anas-awadalla/t5-base-few-shot-k-128-finetuned-squad-seed-0 | anas-awadalla | "2022-09-29T23:26:31Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-09-27T17:38:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-few-shot-k-128-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf | RichardErkhov | "2024-10-11T19:40:11Z" | 146 | 1 | null | [
"gguf",
"arxiv:2406.18266",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-11T15:36:31Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RoLlama3-8b-Instruct-2024-06-28 - GGUF
- Model creator: https://huggingface.co/OpenLLM-Ro/
- Original model: https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct-2024-06-28/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RoLlama3-8b-Instruct-2024-06-28.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q2_K.gguf) | Q2_K | 2.96GB |
| [RoLlama3-8b-Instruct-2024-06-28.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [RoLlama3-8b-Instruct-2024-06-28.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [RoLlama3-8b-Instruct-2024-06-28.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q3_K.gguf) | Q3_K | 3.74GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [RoLlama3-8b-Instruct-2024-06-28.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q4_0.gguf) | Q4_0 | 4.34GB |
| [RoLlama3-8b-Instruct-2024-06-28.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q4_K.gguf) | Q4_K | 4.58GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q4_1.gguf) | Q4_1 | 4.78GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q5_0.gguf) | Q5_0 | 5.21GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q5_K.gguf) | Q5_K | 5.34GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q5_1.gguf) | Q5_1 | 5.65GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q6_K.gguf) | Q6_K | 6.14GB |
| [RoLlama3-8b-Instruct-2024-06-28.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf/blob/main/RoLlama3-8b-Instruct-2024-06-28.Q8_0.gguf) | Q8_0 | 7.95GB |
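As a minimal sketch, any one of these files can be fetched with `huggingface_hub` (the quant chosen here is arbitrary):

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/OpenLLM-Ro_-_RoLlama3-8b-Instruct-2024-06-28-gguf",
    filename="RoLlama3-8b-Instruct-2024-06-28.Q4_K_M.gguf",
)
print(path)
```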
Original model description:
---
license: cc-by-nc-4.0
language:
- ro
base_model:
- meta-llama/Meta-Llama-3-8B
datasets:
- OpenLLM-Ro/ro_sft_alpaca
- OpenLLM-Ro/ro_sft_alpaca_gpt4
- OpenLLM-Ro/ro_sft_dolly
- OpenLLM-Ro/ro_sft_selfinstruct_gpt4
- OpenLLM-Ro/ro_sft_norobots
- OpenLLM-Ro/ro_sft_orca
- OpenLLM-Ro/ro_sft_camel
model-index:
- name: OpenLLM-Ro/RoLlama3-8b-Instruct-2024-06-28
results:
- task:
type: text-generation
dataset:
name: RoMT-Bench
type: RoMT-Bench
metrics:
- name: Score
type: Score
value: 5.15
- task:
type: text-generation
dataset:
name: RoCulturaBench
type: RoCulturaBench
metrics:
- name: Score
type: Score
value: 3.71
- task:
type: text-generation
dataset:
name: Romanian_Academic_Benchmarks
type: Romanian_Academic_Benchmarks
metrics:
- name: Average accuracy
type: accuracy
value: 50.56
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: Average accuracy
type: accuracy
value: 44.70
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: Average accuracy
type: accuracy
value: 52.19
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: Average accuracy
type: accuracy
value: 67.23
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: Average accuracy
type: accuracy
value: 57.69
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: Average accuracy
type: accuracy
value: 30.23
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_truthfulqa
type: OpenLLM-Ro/ro_truthfulqa
metrics:
- name: Average accuracy
type: accuracy
value: 51.34
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: Average macro-f1
type: macro-f1
value: 97.52
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: Average macro-f1
type: macro-f1
value: 67.41
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary_finetuned
type: LaRoSeDa_binary_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 94.15
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass_finetuned
type: LaRoSeDa_multiclass_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 87.13
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: Average bleu
type: bleu
value: 24.01
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: Average bleu
type: bleu
value: 27.36
- task:
type: text-generation
dataset:
name: WMT_EN-RO_finetuned
type: WMT_EN-RO_finetuned
metrics:
- name: Average bleu
type: bleu
value: 26.53
- task:
type: text-generation
dataset:
name: WMT_RO-EN_finetuned
type: WMT_RO-EN_finetuned
metrics:
- name: Average bleu
type: bleu
value: 40.36
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average exact_match
type: exact_match
value: 39.43
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average f1
type: f1
value: 59.50
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average exact_match
type: exact_match
value: 44.45
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average f1
type: f1
value: 59.76
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average spearman
type: spearman
value: 77.20
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average pearson
type: pearson
value: 77.87
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average spearman
type: spearman
value: 85.80
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average pearson
type: pearson
value: 86.05
- task:
type: text-generation
dataset:
name: RoMT-Bench
type: RoMT-Bench
metrics:
- name: First turn
type: Score
value: 6.03
- name: Second turn
type: Score
value: 4.28
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: 0-shot
type: accuracy
value: 41.90
- name: 1-shot
type: accuracy
value: 44.30
- name: 3-shot
type: accuracy
value: 44.56
- name: 5-shot
type: accuracy
value: 45.50
- name: 10-shot
type: accuracy
value: 46.10
- name: 25-shot
type: accuracy
value: 45.84
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: 0-shot
type: accuracy
value: 50.85
- name: 1-shot
type: accuracy
value: 51.24
- name: 3-shot
type: accuracy
value: 53.30
- name: 5-shot
type: accuracy
value: 53.39
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: 0-shot
type: accuracy
value: 65.19
- name: 1-shot
type: accuracy
value: 66.54
- name: 3-shot
type: accuracy
value: 67.88
- name: 5-shot
type: accuracy
value: 69.30
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: 0-shot
type: accuracy
value: 56.12
- name: 1-shot
type: accuracy
value: 57.37
- name: 3-shot
type: accuracy
value: 57.92
- name: 5-shot
type: accuracy
value: 58.18
- name: 10-shot
type: accuracy
value: 58.85
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: 1-shot
type: accuracy
value: 29.42
- name: 3-shot
type: accuracy
value: 30.02
- name: 5-shot
type: accuracy
value: 31.24
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: 0-shot
type: macro-f1
value: 97.43
- name: 1-shot
type: macro-f1
value: 96.60
- name: 3-shot
type: macro-f1
value: 97.90
- name: 5-shot
type: macro-f1
value: 98.13
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: 0-shot
type: macro-f1
value: 63.77
- name: 1-shot
type: macro-f1
value: 68.91
- name: 3-shot
type: macro-f1
value: 66.36
- name: 5-shot
type: macro-f1
value: 70.61
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: 0-shot
type: bleu
value: 6.92
- name: 1-shot
type: bleu
value: 29.33
- name: 3-shot
type: bleu
value: 29.79
- name: 5-shot
type: bleu
value: 30.02
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: 0-shot
type: bleu
value: 4.50
- name: 1-shot
type: bleu
value: 30.30
- name: 3-shot
type: bleu
value: 36.96
- name: 5-shot
type: bleu
value: 37.70
- task:
type: text-generation
dataset:
name: XQuAD_EM
type: XQuAD_EM
metrics:
- name: 0-shot
type: exact_match
value: 4.45
- name: 1-shot
type: exact_match
value: 48.24
- name: 3-shot
type: exact_match
value: 52.03
- name: 5-shot
type: exact_match
value: 53.03
- task:
type: text-generation
dataset:
name: XQuAD_F1
type: XQuAD_F1
metrics:
- name: 0-shot
type: f1
value: 26.08
- name: 1-shot
type: f1
value: 68.40
- name: 3-shot
type: f1
value: 71.92
- name: 5-shot
type: f1
value: 71.60
- task:
type: text-generation
dataset:
name: STS_Spearman
type: STS_Spearman
metrics:
- name: 1-shot
type: spearman
value: 77.76
- name: 3-shot
type: spearman
value: 76.72
- name: 5-shot
type: spearman
value: 77.12
- task:
type: text-generation
dataset:
name: STS_Pearson
type: STS_Pearson
metrics:
- name: 1-shot
type: pearson
value: 77.83
- name: 3-shot
type: pearson
value: 77.64
- name: 5-shot
type: pearson
value: 78.13
---
# Model Card for RoLlama3-8b-Instruct-2024-06-28
*Built with Meta Llama 3*
<!-- Provide a quick summary of what the model is/does. -->
RoLlama3 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 8B model**. Links to other models can be found at the bottom of this page.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.
- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Trained using:** [RoAlpaca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca), [RoAlpacaGPT4](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca_gpt4), [RoDolly](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_dolly), [RoSelfInstruct](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_selfinstruct_gpt4), [RoNoRobots](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_norobots), [RoOrca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_orca), [RoCamel](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_camel)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenLLM-Ro/LLaMA-Factory
- **Paper:** https://arxiv.org/abs/2406.18266
## Intended Use
### Intended Use Cases
RoLlama3 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama3-8b-Instruct-2024-06-28")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama3-8b-Instruct-2024-06-28")

instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [
    {"role": "system", "content": "Ești un asistent folositor, respectuos și onest. Încearcă să ajuți cât mai mult prin informațiile oferite, excluzând răspunsuri toxice, rasiste, sexiste, periculoase și ilegale."},
    {"role": "user", "content": instruction},
]

# Render the chat through the model's template, then generate the reply.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
## Academic Benchmarks
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>ARC</center></strong></td>
<td><strong><center>MMLU</center></strong></td>
<td><strong><center>Winogrande</center></strong></td>
<td><strong><center>Hellaswag</center></strong></td>
<td><strong><center>GSM8k</center></strong></td>
<td><strong><center>TruthfulQA</center></strong></td>
</tr>
<tr>
<td>Llama-3-8B-Instruct</td><td><center>50.62</center></td><td><center>43.69</center></td><td><center>52.04</center></td><td><center>59.33</center></td><td><center>53.19</center></td><td><center><strong>43.87</strong></center></td><td><center><strong>51.59</strong></center></td>
</tr>
<tr>
<td><em>RoLlama3-8b-Instruct-2024-06-28</em></td><td><center><em>50.56</em></center></td><td><center><em>44.70</em></center></td><td><center><em>52.19</em></center></td><td><center><em><strong>67.23</strong></em></center></td><td><center><em>57.69</em></center></td><td><center><em>30.23</em></center></td><td><center><em>51.34</em></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-2024-10-09</td><td><center><strong>52.21</strong></center></td><td><center><strong>47.94</strong></center></td><td><center><strong>53.50</strong></center></td><td><center>66.06</center></td><td><center><strong>59.72</strong></center></td><td><center>40.16</center></td><td><center>45.90</center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-DPO-2024-10-09</td><td><center>49.96</center></td><td><center>46.29</center></td><td><center>53.29</center></td><td><center>65.57</center></td><td><center>58.15</center></td><td><center>34.77</center></td><td><center>41.70</center></td>
</tr>
</tbody>
</table>
## Downstream tasks
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>LaRoSeDa</strong></center></td>
<td colspan="4"><center><strong>WMT</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center>
</tr>
<tr>
<td>Llama-3-8B-Instruct</td><td><center>95.88</center></td><td><center>56.21</center></td><td><center><strong>98.53</strong></center></td><td><center>86.19</center></td><td><center>18.88</center></td><td><center><strong>30.98</strong></center></td><td><center><strong>28.02</strong></center></td><td><center>40.28</center></td>
</tr>
<tr>
<td><em>RoLlama3-8b-Instruct-2024-06-28</em></td><td><center><em><strong>97.52</strong></em></center></td><td><center><em><strong>67.41</strong></em></center></td><td><center><em>94.15</em></center></td><td><center><em>87.13</em></center></td><td><center><em><strong>24.01</strong></em></center></td><td><center><em>27.36</em></center></td><td><center><em>26.53</em></center></td><td><center><em>40.36</em></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-2024-10-09</td><td><center>95.58</center></td><td><center>61.20</center></td><td><center>96.46</center></td><td><center><strong>87.26</strong></center></td><td><center>22.92</center></td><td><center>24.28</center></td><td><center>27.31</center></td><td><center><strong>40.52</strong></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-DPO-2024-10-09</td><td><center>97.48</center></td><td><center>54.00</center></td><td><center>-</center></td><td><center>-</center></td><td><center>22.09</center></td><td><center>23.00</center></td><td><center>-</center></td><td><center>-</center></td>
</tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>XQuAD</strong></center></td>
<td colspan="4"><center><strong>STS</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
</tr>
<tr>
<td>Llama-3-8B-Instruct</td><td><center><strong>39.47</strong></center></td><td><center>58.67</center></td><td><center><strong>67.65</strong></center></td><td><center><strong>82.77</strong></center></td><td><center>73.04</center></td><td><center>72.36</center></td><td><center>83.49</center></td><td><center>84.06</center></td>
</tr>
<tr>
<td><em>RoLlama3-8b-Instruct-2024-06-28</em></td><td><center><em>39.43</em></center></td><td><center><em><strong>59.50</strong></em></center></td><td><center><em>44.45</em></center></td><td><center><em>59.76</em></center></td><td><center><em>77.20</em></center></td><td><center><em>77.87</em></center></td><td><center><em>85.80</em></center></td><td><center><em>86.05</em></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-2024-10-09</td><td><center>18.89</center></td><td><center>31.79</center></td><td><center>50.84</center></td><td><center>65.18</center></td><td><center>77.60</center></td><td><center>76.86</center></td><td><center><strong>86.70</strong></center></td><td><center><strong>87.09</strong></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-DPO-2024-10-09</td><td><center>26.05</center></td><td><center>42.77</center></td><td><center>-</center></td><td><center>-</center></td><td><center><strong>79.64</strong></center></td><td><center><strong>79.52</strong></center></td><td><center>-</center></td><td><center>-</center></td>
</tr>
</tbody>
</table>
## MT-Bench
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>1st turn</center></strong></td>
<td><strong><center>2nd turn</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>Llama-3-8B-Instruct</td><td><center><strong>5.96</strong></center></td><td><center>6.16</center></td><td><center><strong>5.76</strong></center></td><td><center>158/160</center></td>
</tr>
<tr>
<td><em>RoLlama3-8b-Instruct-2024-06-28</em></td><td><center><em>5.15</em></center></td><td><center><em>6.03</em></center></td><td><center><em>4.28</em></center></td><td><center><em><strong>160/160</strong></em></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-2024-10-09</td><td><center>5.38</center></td><td><center>6.09</center></td><td><center>4.67</center></td><td><center><strong>160/160</strong></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-DPO-2024-10-09</td><td><center>5.87</center></td><td><center><strong>6.22</strong></center></td><td><center>5.49</center></td><td><center><strong>160/160</strong></center></td>
</tr>
</tbody>
</table>
## RoCulturaBench
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>Llama-3-8B-Instruct</td><td><center><strong>4.62</strong></center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td><em>RoLlama3-8b-Instruct-2024-06-28</em></td><td><center><em>3.71</em></center></td><td><center><em><strong>100/100</strong></em></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-2024-10-09</td><td><center>3.81</center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td>RoLlama3-8b-Instruct-DPO-2024-10-09</td><td><center>4.40</center></td><td><center><strong>100/100</strong></center></td>
</tr>
</tbody>
</table>
## RoLlama3 Model Family
| Model | Link |
|--------------------|:--------:|
|*RoLlama3-8b-Instruct-2024-06-28*| [link](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct-2024-06-28) |
|RoLlama3-8b-Instruct-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct-2024-10-09) |
|RoLlama3-8b-Instruct-DPO-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct-DPO-2024-10-09) |
## Citation
```
@misc{masala2024vorbecstiromanecsterecipetrain,
title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions},
author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
year={2024},
eprint={2406.18266},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.18266},
}
```
<!-- **APA:**
[More Information Needed] -->
|
stuartmesham/electra-large_lemon-spell_5k_2_p3 | stuartmesham | "2022-10-24T17:14:16Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-10-24T17:13:27Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: electra-large_lemon-spell_5k_2_p3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-large_lemon-spell_5k_2_p3
This model is a fine-tuned version of [model_saves/electra-large_lemon-spell_5k_2_p2](https://huggingface.co/model_saves/electra-large_lemon-spell_5k_2_p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4357
- Accuracy: 0.9398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 52
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 268 | 0.4357 | 0.9398 |
| No log | 2.0 | 536 | 0.4500 | 0.9391 |
| No log | 3.0 | 804 | 0.4678 | 0.9388 |
| 0.3213 | 4.0 | 1072 | 0.5006 | 0.9384 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
choco58/LLaMAdelic | choco58 | "2025-01-24T16:54:27Z" | 53 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"LLaMAdelic",
"Conversational AI",
"Personality",
"Persona-dialogue",
"Dialogue-systems",
"Human-like assistant",
"LLaMA",
"LLaMA-8B",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-17T04:27:44Z" | ---
library_name: transformers
tags: [LLaMAdelic, Conversational AI, Personality, Persona-dialogue, Dialogue-systems, Human-like assistant, LLaMA, LLaMA-8B]
---
# LLaMAdelic: Conversational Personality Model 🌊✨
Welcome to **LLaMAdelic**—a conversational model fine-tuned from LLaMA 3 8B Instruct, capturing nuanced personality traits that make AI interactions feel more authentic and relatable. Whether it’s about balancing conscientious responses or tapping into empathetic reflections, LLaMAdelic is here to explore the depths of the human-like personality spectrum.
# Model Overview: LLaMAdelic
## Model Name: LLaMAdelic
- **Architecture**: LLaMA 3 8B Instruct
- **Training Objective**: Personality-Enhanced Conversational AI
- **Training Dataset**: Fine-tuned on conversational data to reflect Big 5 personality traits.
- JIC: [Journal Intensive Conversations](https://huggingface.co/datasets/chocokiddo/jic) dataset
- **Training Duration**: 4-5 days on an A100 GPU (training parameters can be found in the appendix of the paper)
## Why "LLaMAdelic"?
The name "LLaMAdelic" reflects our aim to bring a rich, nuanced personality to conversational AI. Just as the Big 5 personality traits (OCEAN) encapsulate the subtle layers of human interaction, LLaMAdelic seeks to capture these nuanced dimensions — openness, conscientiousness, extraversion, agreeableness, and neuroticism — making conversations with AI feel more genuinely human. It’s not just another model; it’s designed to add depth, authenticity, and a hint of human-like character to every interaction.
---
## Scope of Applications
LLaMAdelic is designed to add a splash of personality to various conversational tasks. Here's what it can handle:
- **Conversational Agents**: Engage users with relatable and personality-driven conversations.
- **Text Generation**: Generate human-like text for articles, chats, and creative writing with a personal touch.
- **Question-Answering**: Answer questions with a flair of personality, making responses more relatable.
- **Educational and Therapy Bots**: Assist in applications where personality-sensitive responses can improve user engagement and retention.
---
## Intended Use
LLaMAdelic is built for those aiming to inject personality into conversational systems, whether it’s for customer service bots, therapy support, or just plain fun AI companions. It’s particularly suited to applications where capturing nuances like openness, agreeableness, and neuroticism (yes, even those angsty replies!) can enhance user experience.
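A minimal loading sketch (assuming the checkpoint exposes the standard `transformers` causal-LM and chat-template APIs; the prompt and sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("choco58/LLaMAdelic")
model = AutoModelForCausalLM.from_pretrained("choco58/LLaMAdelic")

# Illustrative chat-style generation; the question and settings are arbitrary.
chat = [{"role": "user", "content": "How do you usually spend a free afternoon?"}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```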
### Data and Training
The model has been trained on an extensive conversational dataset. Our goal was to align model responses with intrinsic personality traits, enabling LLaMAdelic to tailor its tone and style depending on conversational context. More information on the dataset will be shared soon.
### Results
**Personality Evaluation on [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (OCEAN Personality Benchmark)**
| Model | Description | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism | AVG |
|---------------|-------------------------------|----------|-------------------|--------------|---------------|-------------|-------|
| LLaMA 8B ins | Zeroshot | 0.8760 | 0.7620 | 0.7170 | 0.9500 | 0.5220 | 0.7654 |
| LLaMAdelic | Fine-tuned on Conversational Data | 0.9150 | 0.7840 | 0.6680 | 0.9440 | 0.7040 | 0.8030 |
---
## Performance and Limitations
While LLaMAdelic brings vibrant and personality-driven conversations to the table, it does have limitations:
- **Personality Representation**: LLaMAdelic is trained for personality alignment, so it may sacrifice some general knowledge capabilities in favor of personality-specific responses. A detailed evaluation will be updated soon.
- **Sensitive Topics**: Despite strong filtering, caution is advised when deploying in high-stakes environments.
- **Computational Load**: The LLaMA 8B backbone requires substantial resources, which may limit deployment in real-time settings without sufficient hardware.
---
## Ethical Considerations
We made sure to avoid toxic or inappropriate dialogues by tagging any dialogue with over 25% toxic utterances for separate review. Ethical considerations are a priority, and LLaMAdelic was designed with responsible AI practices in mind. For details on ethical data practices, see the Appendix.
---
## Future Updates
Stay tuned for more information on LLaMAdelic!
---
## Citation
```bibtex
@inproceedings{pal-etal-2025-beyond,
title = "Beyond Discrete Personas: Personality Modeling Through Journal Intensive Conversations",
author = "Pal, Sayantan and
Das, Souvik and
Srihari, Rohini K.",
editor = "Rambow, Owen and
Wanner, Leo and
Apidianaki, Marianna and
Al-Khalifa, Hend and
Eugenio, Barbara Di and
Schockaert, Steven",
booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
month = jan,
year = "2025",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.coling-main.470/",
pages = "7055--7074",
abstract = "Large Language Models (LLMs) have significantly improved personalized conversational capabilities. However, existing datasets like Persona Chat, Synthetic Persona Chat, and Blended Skill Talk rely on static, predefined personas. This approach often results in dialogues that fail to capture human personalities' fluid and evolving nature. To overcome these limitations, we introduce a novel dataset with around 400,000 dialogues and a framework for generating personalized conversations using long-form journal entries from Reddit. Our approach clusters journal entries for each author and filters them by selecting the most representative cluster, ensuring that the retained entries best reflect the author`s personality. We further refine the data by capturing the Big Five personality traits{---}openness, conscientiousness, extraversion, agreeableness, and neuroticism{---}ensuring that dialogues authentically reflect an individual`s personality. Using Llama 3 70B, we generate high-quality, personality-rich dialogues grounded in these journal entries. Fine-tuning models on this dataset leads to an 11{\%} improvement in capturing personality traits on average, outperforming existing approaches in generating more coherent and personality-driven dialogues."
}
```
---
|
CeroShrijver/chinese-macbert-large-text-classification | CeroShrijver | "2023-06-18T17:36:30Z" | 106 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-03T07:49:47Z" | ---
tags:
- generated_from_trainer
model-index:
- name: chinese-macbert-large-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-macbert-large-text-classification
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.11.6
|
dendimaki/multilabel_classification | dendimaki | "2024-05-08T09:04:00Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-05-08T08:46:13Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: multilabel_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilabel_classification
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 425 | 2.0765 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
TransferGraph/marcelcastrobr_sagemaker-distilbert-emotion-finetuned-lora-glue_cola | TransferGraph | "2024-02-28T00:47:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:glue",
"base_model:marcelcastrobr/sagemaker-distilbert-emotion",
"base_model:adapter:marcelcastrobr/sagemaker-distilbert-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | "2024-02-28T00:47:06Z" | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- glue
metrics:
- accuracy
base_model: marcelcastrobr/sagemaker-distilbert-emotion
model-index:
- name: marcelcastrobr_sagemaker-distilbert-emotion-finetuned-lora-glue_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: accuracy
value: 0.7535953978907
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marcelcastrobr_sagemaker-distilbert-emotion-finetuned-lora-glue_cola
This model is a fine-tuned version of [marcelcastrobr/sagemaker-distilbert-emotion](https://huggingface.co/marcelcastrobr/sagemaker-distilbert-emotion) on the glue dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7536
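A minimal sketch for loading the adapter (assuming the standard PEFT API; whether the classification head stored with the adapter matches CoLA's two labels is not documented here):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "marcelcastrobr/sagemaker-distilbert-emotion"
adapter_id = "TransferGraph/marcelcastrobr_sagemaker-distilbert-emotion-finetuned-lora-glue_cola"

# Load the base checkpoint, then attach the LoRA adapter trained on CoLA.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
```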
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.6702 | None | 0 |
| 0.7095 | 0.6043 | 0 |
| 0.7354 | 0.5365 | 1 |
| 0.7555 | 0.5047 | 2 |
| 0.7536 | 0.4836 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
mlx-community/Starling-LM-7B-beta | mlx-community | "2024-03-28T15:12:23Z" | 79 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"reward model",
"RLHF",
"RLAIF",
"mlx",
"conversational",
"en",
"dataset:berkeley-nest/Nectar",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-28T14:41:41Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
- mlx
datasets:
- berkeley-nest/Nectar
---
# mlx-community/Starling-LM-7B-beta
This model was converted to MLX format from [`Nexusflow/Starling-LM-7B-beta`](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) using mlx-lm version **0.5.0**.
Refer to the [original model card](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Starling-LM-7B-beta")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
kinshuk-h/flan-t5-retacred-kg-w-context-var-len-small-finetuned | kinshuk-h | "2023-07-19T13:19:25Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"legal",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-07T09:40:48Z" |
---
license: mit
language:
- en
pipeline_tag: text2text-generation
tags:
- legal
---
# flan-t5-retacred-kg-w-context-var-len-small-finetuned
[flan-t5-small](https://huggingface.co/google/flan-t5-small) finetuned over the TACRED Knowledge Graph patched with the [Re-TACRED proposal](https://github.com/gstoica27/Re-TACRED), using the training method of [KGT-5](https://github.com/apoorvumang/kgt5/) with additional variable-length context alongside the prompts.
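A minimal loading sketch (the exact KGT-5-style prompt scheme is not documented here, so the input string below is purely illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kinshuk-h/flan-t5-retacred-kg-w-context-var-len-small-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative link-prediction style query; not the documented prompt format.
inputs = tokenizer("predict tail: Steve Jobs | org:founded_by |", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```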
|
TehranNLP-org/electra-base-mnli | TehranNLP-org | "2022-05-03T17:01:07Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-30T12:50:13Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: MNLI
type: ''
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8879266428935303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4265
- Accuracy: 0.8879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3762 | 1.0 | 12272 | 0.3312 | 0.8794 |
| 0.2542 | 2.0 | 24544 | 0.3467 | 0.8843 |
| 0.1503 | 3.0 | 36816 | 0.4265 | 0.8879 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
PopularPenguin/bart-base-2024-09-24_11-12 | PopularPenguin | "2024-09-24T11:48:43Z" | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:arrow",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-09-24T11:21:12Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- arrow
model-index:
- name: bart-base-2024-09-24_11-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-2024-09-24_11-12
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the arrow dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1993
- Gen Len: 20.0
- Bertscorer-p: 0.5928
- Bertscorer-r: 0.1701
- Bertscorer-f1: 0.3731
- Sacrebleu-score: 10.2541
- Sacrebleu-precisions: [90.63003300856309, 79.05155386114873, 70.66565212490137, 65.68935823527592]
- Bleu-bp: 0.1350
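The reported Sacrebleu-score is consistent with the precisions and brevity penalty above; as a quick check:

```python
import math

precisions = [90.63003300856309, 79.05155386114873, 70.66565212490137, 65.68935823527592]
bp = 0.1350  # brevity penalty (Bleu-bp above)

# BLEU = BP * exp(mean(log p_n)) over the 1- to 4-gram precisions
bleu = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(round(bleu, 2))  # ~10.25; the reported 10.2541 uses the unrounded BP
```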
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.189 | 1.0 | 4772 | 0.1993 | 20.0 | 0.5928 | 0.1701 | 0.3731 | 10.2541 | [90.63003300856309, 79.05155386114873, 70.66565212490137, 65.68935823527592] | 0.1350 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF | mradermacher | "2025-01-22T01:37:36Z" | 143 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/BlackSheep-Mistral-RP-7B",
"base_model:quantized:TroyDoesAI/BlackSheep-Mistral-RP-7B",
"license:artistic-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-18T07:42:20Z" | ---
base_model: TroyDoesAI/BlackSheep-Mistral-RP-7B
language:
- en
library_name: transformers
license: artistic-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TroyDoesAI/BlackSheep-Mistral-RP-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
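For a concrete, hedged starting point (assuming llama.cpp and the `huggingface_hub` CLI are installed), one of the quants from the table below can be fetched and run like this, using the file name from the Q4_K_M row:

```bash
# Download a single quant file, then run it with llama.cpp's CLI.
huggingface-cli download mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF \
  BlackSheep-Mistral-RP-7B.i1-Q4_K_M.gguf --local-dir .
llama-cli -m BlackSheep-Mistral-RP-7B.i1-Q4_K_M.gguf -p "Hello," -n 128
```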
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Mistral-RP-7B-i1-GGUF/resolve/main/BlackSheep-Mistral-RP-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
marstafk0/mark-lora2 | marstafk0 | "2025-02-21T18:30:16Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-21T18:16:03Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Mark
---
# Mark Lora2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Mark` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('marstafk0/mark-lora2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
facebook/mms-tts-bgw | facebook | "2023-09-01T14:22:36Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T14:22:19Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Bhatri Text-to-Speech
This repository contains the **Bhatri (bgw)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-bgw")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-bgw")
text = "some example text in the Bhatri language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
Malar/BM_MLM_EXT_230221063231 | Malar | "2023-02-21T06:34:37Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-02-21T06:32:36Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BM_MLM_EXT_230221063231
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BM_MLM_EXT_230221063231
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
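Until the author adds details, here is a minimal hedged sketch of the obvious use: since this is a distilroberta-based masked-language model, the standard `fill-mask` pipeline should apply (RoBERTa-style tokenizers use `<mask>` as the mask token):

```python
from transformers import pipeline

# Load the fine-tuned masked-language model from the Hub.
fill = pipeline("fill-mask", model="Malar/BM_MLM_EXT_230221063231")

# distilroberta-derived models use "<mask>" as the mask token.
print(fill("The report was submitted <mask> the deadline."))
```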
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 108 | 1.8893 |
| No log | 2.0 | 216 | 1.8124 |
| No log | 3.0 | 324 | 1.8304 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.0+cu111
- Datasets 1.11.0
- Tokenizers 0.12.1
|
ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-new-iter-1 | ZhangShenao | "2025-01-10T03:52:47Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-10T03:31:28Z" | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: SELM-Llama-3.2-3B-Instruct-re-new-iter-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SELM-Llama-3.2-3B-Instruct-re-new-iter-1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
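As a hedged usage sketch (assuming the checkpoint keeps the Llama 3.2 Instruct chat template from its base model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ZhangShenao/SELM-Llama-3.2-3B-Instruct-re-new-iter-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```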
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 2.14.6
- Tokenizers 0.20.3
|
zacdennis/gradientascent | zacdennis | "2023-07-27T22:16:30Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-27T22:16:26Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: gradientascent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 109.50 +/- 14.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
lesso16/b311c13d-01a1-4346-9dc9-db322bd04ffa | lesso16 | "2025-03-16T11:27:47Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"region:us"
] | null | "2025-03-10T09:32:55Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b311c13d-01a1-4346-9dc9-db322bd04ffa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# b311c13d-01a1-4346-9dc9-db322bd04ffa
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0949
## Model description
More information needed
## Intended uses & limitations
More information needed
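No usage notes were provided; below is a minimal sketch for loading the LoRA adapter on top of its base model (assuming a causal-LM adapter; the 13B base needs a sizeable GPU or CPU offloading):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-13b-hf-flash"
adapter_id = "lesso16/b311c13d-01a1-4346-9dc9-db322bd04ffa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```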
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 1.0067 |
| 0.7748 | 0.3221 | 500 | 0.0949 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/nyu-visionx-cambrian-phi3-3b-QUANTO-int8bit-smashed | PrunaAI | "2024-07-19T09:27:07Z" | 4 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:nyu-visionx/cambrian-phi3-3b",
"base_model:finetune:nyu-visionx/cambrian-phi3-3b",
"endpoints_compatible",
"region:us"
] | null | "2024-07-17T11:19:58Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: nyu-visionx/cambrian-phi3-3b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo nyu-visionx/cambrian-phi3-3b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/nyu-visionx-cambrian-phi3-3b-QUANTO-int8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("nyu-visionx/cambrian-phi3-3b")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model nyu-visionx/cambrian-phi3-3b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
TrinhDacPhu/finetune4en-vi | TrinhDacPhu | "2024-06-24T23:35:25Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-17T22:09:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
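Pending the author's own snippet, here is a hedged sketch based on the repository tags (a Marian seq2seq checkpoint; the repo name suggests English→Vietnamese translation):

```python
from transformers import MarianMTModel, MarianTokenizer

repo = "TrinhDacPhu/finetune4en-vi"
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

# Translate a batch of English sentences.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```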
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Halcyonindo/an1kulora | Halcyonindo | "2023-05-12T17:27:24Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-12T17:26:20Z" | ---
license: creativeml-openrail-m
---
|
mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF | mradermacher | "2024-11-24T19:17:42Z" | 20 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ByteResearch/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405",
"base_model:quantized:ByteResearch/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-24T17:53:39Z" | ---
base_model: ByteResearch/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/ByteResearch/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
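As a hedged sketch (assuming a recent llama.cpp build and the `huggingface_hub` CLI), the recommended Q4_K_M quant from the table below can be served locally:

```bash
# Fetch one quant file and start llama.cpp's OpenAI-compatible server.
huggingface-cli download mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF \
  Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q4_K_M.gguf --local-dir .
llama-server -m Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q4_K_M.gguf -c 2048
```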
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q3_K_S.gguf) | Q3_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q3_K_L.gguf) | Q3_K_L | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.IQ4_XS.gguf) | IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q4_K_S.gguf) | Q4_K_S | 8.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q4_K_M.gguf) | Q4_K_M | 9.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q5_K_M.gguf) | Q5_K_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q6_K.gguf) | Q6_K | 12.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405-GGUF/resolve/main/Hermes-Qwen1.5-MoE-A2.7B-Chat-240405.Q8_0.gguf) | Q8_0 | 15.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LarryAIDraw/nico_robin_v1 | LarryAIDraw | "2023-11-25T01:38:25Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-25T01:35:44Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/211150/nico-robin-one-piece |
SmallDoge/Qwen2.5-3B-Instruct-SmallThoughts | SmallDoge | "2025-03-14T10:45:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-14T10:35:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
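Pending the author's own snippet, a hedged sketch (assuming the checkpoint keeps the Qwen2.5 Instruct chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SmallDoge/Qwen2.5-3B-Instruct-SmallThoughts"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```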
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TransferGraph/Jeevesh8_lecun_feather_berts-3-finetuned-lora-tweet_eval_hate | TransferGraph | "2024-02-29T13:43:37Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/lecun_feather_berts-3",
"base_model:adapter:Jeevesh8/lecun_feather_berts-3",
"model-index",
"region:us"
] | text-classification | "2024-02-29T13:43:35Z" | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: Jeevesh8/lecun_feather_berts-3
model-index:
- name: Jeevesh8_lecun_feather_berts-3-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.73
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-3-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-3](https://huggingface.co/Jeevesh8/lecun_feather_berts-3) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.73
## Model description
More information needed
## Intended uses & limitations
More information needed
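A hedged loading sketch for this classification adapter (the tweet_eval `hate` config is a binary task, so `num_labels=2` is assumed here, as is the 0 = non-hate / 1 = hate label order):

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "TransferGraph/Jeevesh8_lecun_feather_berts-3-finetuned-lora-tweet_eval_hate"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model with a binary classification head, then attach the LoRA adapter.
base = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=2
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

inputs = tokenizer("I can't believe they said that.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # 0 = non-hate, 1 = hate (label order assumed)
```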
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.499 | None | 0 |
| 0.701 | 0.5912 | 0 |
| 0.712 | 0.4743 | 1 |
| 0.721 | 0.4435 | 2 |
| 0.73 | 0.4307 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
sivaranjanisundarraj/finetuning-sentiment-model-imdb-3000 | sivaranjanisundarraj | "2024-05-25T06:44:32Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-25T06:39:22Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-imdb-3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-imdb-3000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3371
- Accuracy: 0.8767
- F1: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
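For a quick start, the standard text-classification pipeline should work out of the box (label names may come back as `LABEL_0`/`LABEL_1` if `id2label` was not customized):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="sivaranjanisundarraj/finetuning-sentiment-model-imdb-3000",
)
print(clf("This movie was far better than I expected."))
```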
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
itlwas/Marco-o1-Q4_K_M-GGUF | itlwas | "2024-12-24T12:47:34Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:AIDC-AI/Marco-o1",
"base_model:quantized:AIDC-AI/Marco-o1",
"license:apache-2.0",
"region:us",
"conversational"
] | null | "2024-12-24T12:47:12Z" | ---
license: apache-2.0
library_name: transformers
inference: false
base_model: AIDC-AI/Marco-o1
tags:
- llama-cpp
- gguf-my-repo
---
# AIronMind/Marco-o1-Q4_K_M-GGUF
This model was converted to GGUF format from [`AIDC-AI/Marco-o1`](https://huggingface.co/AIDC-AI/Marco-o1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AIDC-AI/Marco-o1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AIronMind/Marco-o1-Q4_K_M-GGUF --hf-file marco-o1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AIronMind/Marco-o1-Q4_K_M-GGUF --hf-file marco-o1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AIronMind/Marco-o1-Q4_K_M-GGUF --hf-file marco-o1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AIronMind/Marco-o1-Q4_K_M-GGUF --hf-file marco-o1-q4_k_m.gguf -c 2048
```
|
kejolong/cnmodel | kejolong | "2023-12-16T20:47:14Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-15T16:24:18Z" | ---
license: creativeml-openrail-m
---
|
clarxus/61766e53-cbff-40df-b751-ff9b536670d4 | clarxus | "2025-02-04T10:13:56Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T10:02:31Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 61766e53-cbff-40df-b751-ff9b536670d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e2ed20f95d2f384_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e2ed20f95d2f384_train_data.json
type:
field_input: student_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: clarxus/61766e53-cbff-40df-b751-ff9b536670d4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3e2ed20f95d2f384_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e9396a3b-cad7-4197-aa37-3ad515193e96
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: e9396a3b-cad7-4197-aa37-3ad515193e96
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 61766e53-cbff-40df-b751-ff9b536670d4
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2243
## Model description
More information needed
## Intended uses & limitations
More information needed
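A hedged loading sketch for the LoRA adapter (per the axolotl config above, it was trained to generate reference answers to questions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-1.7B"
adapter_id = "clarxus/61766e53-cbff-40df-b751-ff9b536670d4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Why does ice float on water?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```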
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1324 | 0.0028 | 1 | 3.2835 |
| 1.4925 | 0.1397 | 50 | 1.0052 |
| 0.7377 | 0.2793 | 100 | 0.5217 |
| 0.4174 | 0.4190 | 150 | 0.2874 |
| 0.3831 | 0.5587 | 200 | 0.2243 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BlackKakapo/cupidon-small-ro | BlackKakapo | "2025-03-27T15:27:28Z" | 10 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ro",
"dataset:BlackKakapo/RoSTSC",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-21T15:49:42Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ro
language_creators:
- machine-generated
dataset:
- ro_sts
license: apache-2.0
datasets:
- BlackKakapo/RoSTSC
base_model:
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
---
# 🔥 cupidon-small-ro
Here comes cupidon-small-ro — small in name, but ready to play with the big models. Fine-tuned from the powerful sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, this sentence-transformers model captures Romanian sentence meaning with impressive accuracy.
It’s compact enough to stay efficient, but packs a semantic punch that hits deep. Think of it as the model that proves "small" can still break hearts — especially in semantic textual similarity, search, or clustering. 💔💬
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('BlackKakapo/cupidon-small-ro')
embeddings = model.encode(sentences)
print(embeddings)
```
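For semantic similarity (the model's main use case), the embeddings can then be compared directly; a minimal follow-up using the `util` helpers shipped with sentence-transformers:

```python
from sentence_transformers import util

# Cosine similarity between the two example sentences above
print(util.cos_sim(embeddings[0], embeddings[1]))
```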
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BlackKakapo/cupidon-small-ro')
model = AutoModel.from_pretrained('BlackKakapo/cupidon-small-ro')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform mean pooling to get sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```
## License
This model is licensed under **Apache 2.0**.
## Citation
If you use BlackKakapo/cupidon-small-ro in your research, please cite this model as follows:
```
@misc{cupidon-small-ro,
title={BlackKakapo/cupidon-small-ro},
author={BlackKakapo},
year={2025},
}
``` |
MayBashendy/ASAP_FineTuningBERT_AugV3_k5_task1_organization_fold4 | MayBashendy | "2024-11-09T21:25:22Z" | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-09T20:13:02Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV3_k5_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV3_k5_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8540
- Qwk: 0.4885
- Mse: 0.8540
- Rmse: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
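Given the Qwk/MSE/RMSE metrics reported above, this appears to be a single-output regression head for essay scoring; here is a hedged sketch under that assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/ASAP_FineTuningBERT_AugV3_k5_task1_organization_fold4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

essay = "Computers let students research topics quickly and share their work."
inputs = tokenizer(essay, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumed single regression output
print(score)
```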
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0034 | 2 | 10.4541 | 0.0 | 10.4541 | 3.2333 |
| No log | 0.0069 | 4 | 8.7845 | 0.0063 | 8.7845 | 2.9639 |
| No log | 0.0103 | 6 | 7.0878 | 0.0056 | 7.0878 | 2.6623 |
| No log | 0.0137 | 8 | 5.6467 | 0.0016 | 5.6467 | 2.3763 |
| No log | 0.0172 | 10 | 4.4096 | 0.0018 | 4.4096 | 2.0999 |
| No log | 0.0206 | 12 | 3.4517 | 0.0352 | 3.4517 | 1.8579 |
| No log | 0.0240 | 14 | 2.6451 | 0.0157 | 2.6451 | 1.6264 |
| No log | 0.0274 | 16 | 2.1055 | 0.0079 | 2.1055 | 1.4510 |
| No log | 0.0309 | 18 | 1.6913 | 0.0079 | 1.6913 | 1.3005 |
| No log | 0.0343 | 20 | 1.5597 | 0.0079 | 1.5597 | 1.2489 |
| No log | 0.0377 | 22 | 1.7036 | 0.0079 | 1.7036 | 1.3052 |
| No log | 0.0412 | 24 | 1.4501 | 0.0298 | 1.4501 | 1.2042 |
| No log | 0.0446 | 26 | 1.6450 | 0.0061 | 1.6450 | 1.2826 |
| No log | 0.0480 | 28 | 1.4894 | 0.0047 | 1.4894 | 1.2204 |
| No log | 0.0515 | 30 | 1.4130 | 0.0392 | 1.4130 | 1.1887 |
| No log | 0.0549 | 32 | 1.6155 | 0.0095 | 1.6155 | 1.2710 |
| No log | 0.0583 | 34 | 1.7771 | 0.0079 | 1.7771 | 1.3331 |
| No log | 0.0617 | 36 | 2.0169 | 0.0089 | 2.0169 | 1.4202 |
| No log | 0.0652 | 38 | 1.6396 | 0.0105 | 1.6396 | 1.2805 |
| No log | 0.0686 | 40 | 1.1523 | 0.0826 | 1.1523 | 1.0734 |
| No log | 0.0720 | 42 | 1.0491 | 0.0715 | 1.0491 | 1.0243 |
| No log | 0.0755 | 44 | 1.0917 | 0.0738 | 1.0917 | 1.0448 |
| No log | 0.0789 | 46 | 1.3192 | 0.0915 | 1.3192 | 1.1485 |
| No log | 0.0823 | 48 | 2.0679 | 0.0079 | 2.0679 | 1.4380 |
| No log | 0.0858 | 50 | 2.0706 | 0.0079 | 2.0706 | 1.4390 |
| No log | 0.0892 | 52 | 1.5654 | 0.0068 | 1.5654 | 1.2511 |
| No log | 0.0926 | 54 | 1.3798 | 0.0718 | 1.3798 | 1.1747 |
| No log | 0.0961 | 56 | 1.3267 | 0.1049 | 1.3267 | 1.1518 |
| No log | 0.0995 | 58 | 1.2978 | 0.1266 | 1.2978 | 1.1392 |
| No log | 0.1029 | 60 | 1.3724 | 0.0549 | 1.3724 | 1.1715 |
| No log | 0.1063 | 62 | 1.5236 | 0.0076 | 1.5236 | 1.2343 |
| No log | 0.1098 | 64 | 1.6220 | 0.0030 | 1.6220 | 1.2736 |
| No log | 0.1132 | 66 | 1.5378 | 0.0024 | 1.5378 | 1.2401 |
| No log | 0.1166 | 68 | 1.6463 | 0.0033 | 1.6463 | 1.2831 |
| No log | 0.1201 | 70 | 1.7358 | 0.0002 | 1.7358 | 1.3175 |
| No log | 0.1235 | 72 | 1.6195 | -0.0036 | 1.6195 | 1.2726 |
| No log | 0.1269 | 74 | 1.4822 | 0.0152 | 1.4822 | 1.2174 |
| No log | 0.1304 | 76 | 1.2915 | 0.0599 | 1.2915 | 1.1364 |
| No log | 0.1338 | 78 | 1.1804 | 0.0669 | 1.1804 | 1.0865 |
| No log | 0.1372 | 80 | 1.2141 | 0.0485 | 1.2141 | 1.1019 |
| No log | 0.1407 | 82 | 1.3422 | -0.0004 | 1.3422 | 1.1585 |
| No log | 0.1441 | 84 | 1.1635 | 0.0737 | 1.1635 | 1.0787 |
| No log | 0.1475 | 86 | 0.9490 | 0.0682 | 0.9490 | 0.9742 |
| No log | 0.1509 | 88 | 0.8979 | 0.0682 | 0.8979 | 0.9476 |
| No log | 0.1544 | 90 | 0.9215 | 0.0630 | 0.9215 | 0.9600 |
| No log | 0.1578 | 92 | 1.0162 | 0.0766 | 1.0162 | 1.0081 |
| No log | 0.1612 | 94 | 1.0891 | 0.0804 | 1.0891 | 1.0436 |
| No log | 0.1647 | 96 | 1.0281 | 0.0957 | 1.0281 | 1.0139 |
| No log | 0.1681 | 98 | 0.9041 | 0.0992 | 0.9041 | 0.9508 |
| No log | 0.1715 | 100 | 0.8600 | 0.0842 | 0.8600 | 0.9274 |
| No log | 0.1750 | 102 | 0.8940 | 0.0845 | 0.8940 | 0.9455 |
| No log | 0.1784 | 104 | 0.8905 | 0.0823 | 0.8905 | 0.9437 |
| No log | 0.1818 | 106 | 0.8699 | 0.0630 | 0.8699 | 0.9327 |
| No log | 0.1852 | 108 | 0.9257 | 0.0643 | 0.9257 | 0.9621 |
| No log | 0.1887 | 110 | 1.0056 | 0.0818 | 1.0056 | 1.0028 |
| No log | 0.1921 | 112 | 0.9491 | 0.0678 | 0.9491 | 0.9742 |
| No log | 0.1955 | 114 | 0.9983 | 0.0683 | 0.9983 | 0.9991 |
| No log | 0.1990 | 116 | 1.0346 | 0.0737 | 1.0346 | 1.0171 |
| No log | 0.2024 | 118 | 1.0319 | 0.0808 | 1.0319 | 1.0158 |
| No log | 0.2058 | 120 | 1.0585 | 0.0957 | 1.0585 | 1.0289 |
| No log | 0.2093 | 122 | 1.1211 | 0.1074 | 1.1211 | 1.0588 |
| No log | 0.2127 | 124 | 1.2209 | 0.1052 | 1.2209 | 1.1049 |
| No log | 0.2161 | 126 | 1.2090 | 0.1047 | 1.2090 | 1.0996 |
| No log | 0.2196 | 128 | 1.0891 | 0.1219 | 1.0891 | 1.0436 |
| No log | 0.2230 | 130 | 2.5667 | 0.0289 | 2.5667 | 1.6021 |
| No log | 0.2264 | 132 | 4.8560 | -0.0172 | 4.8560 | 2.2036 |
| No log | 0.2298 | 134 | 2.7746 | 0.0366 | 2.7746 | 1.6657 |
| No log | 0.2333 | 136 | 0.8076 | 0.0995 | 0.8076 | 0.8987 |
| No log | 0.2367 | 138 | 0.9279 | 0.0620 | 0.9279 | 0.9633 |
| No log | 0.2401 | 140 | 1.0653 | 0.0579 | 1.0653 | 1.0322 |
| No log | 0.2436 | 142 | 0.9947 | 0.0593 | 0.9947 | 0.9973 |
| No log | 0.2470 | 144 | 0.8734 | 0.0459 | 0.8734 | 0.9346 |
| No log | 0.2504 | 146 | 0.8209 | 0.0459 | 0.8209 | 0.9060 |
| No log | 0.2539 | 148 | 0.8149 | 0.0459 | 0.8149 | 0.9027 |
| No log | 0.2573 | 150 | 0.8122 | 0.0459 | 0.8122 | 0.9012 |
| No log | 0.2607 | 152 | 0.8128 | 0.0459 | 0.8128 | 0.9016 |
| No log | 0.2642 | 154 | 0.8080 | 0.0459 | 0.8080 | 0.8989 |
| No log | 0.2676 | 156 | 0.8144 | 0.0617 | 0.8144 | 0.9024 |
| No log | 0.2710 | 158 | 0.8260 | 0.0747 | 0.8260 | 0.9088 |
| No log | 0.2744 | 160 | 0.8116 | 0.0724 | 0.8116 | 0.9009 |
| No log | 0.2779 | 162 | 0.7971 | 0.0644 | 0.7971 | 0.8928 |
| No log | 0.2813 | 164 | 0.8109 | 0.0558 | 0.8109 | 0.9005 |
| No log | 0.2847 | 166 | 0.8364 | 0.0558 | 0.8364 | 0.9145 |
| No log | 0.2882 | 168 | 0.7910 | 0.0558 | 0.7910 | 0.8894 |
| No log | 0.2916 | 170 | 0.7807 | 0.0558 | 0.7807 | 0.8835 |
| No log | 0.2950 | 172 | 0.7720 | 0.0620 | 0.7720 | 0.8786 |
| No log | 0.2985 | 174 | 0.7883 | 0.0747 | 0.7883 | 0.8879 |
| No log | 0.3019 | 176 | 0.7986 | 0.0814 | 0.7986 | 0.8937 |
| No log | 0.3053 | 178 | 0.7986 | 0.0836 | 0.7986 | 0.8936 |
| No log | 0.3087 | 180 | 0.7869 | 0.0747 | 0.7869 | 0.8871 |
| No log | 0.3122 | 182 | 0.8133 | 0.0620 | 0.8133 | 0.9018 |
| No log | 0.3156 | 184 | 0.8636 | 0.0459 | 0.8636 | 0.9293 |
| No log | 0.3190 | 186 | 0.8813 | 0.0434 | 0.8813 | 0.9388 |
| No log | 0.3225 | 188 | 0.8884 | 0.0549 | 0.8884 | 0.9426 |
| No log | 0.3259 | 190 | 0.8814 | 0.0555 | 0.8814 | 0.9388 |
| No log | 0.3293 | 192 | 0.8817 | 0.1626 | 0.8817 | 0.9390 |
| No log | 0.3328 | 194 | 0.8910 | 0.2245 | 0.8910 | 0.9439 |
| No log | 0.3362 | 196 | 0.8679 | 0.2799 | 0.8679 | 0.9316 |
| No log | 0.3396 | 198 | 0.8133 | 0.1795 | 0.8133 | 0.9019 |
| No log | 0.3431 | 200 | 0.8171 | 0.1940 | 0.8171 | 0.9040 |
| No log | 0.3465 | 202 | 0.8469 | 0.2575 | 0.8469 | 0.9203 |
| No log | 0.3499 | 204 | 0.8790 | 0.2307 | 0.8790 | 0.9376 |
| No log | 0.3533 | 206 | 0.8847 | 0.1400 | 0.8847 | 0.9406 |
| No log | 0.3568 | 208 | 0.9028 | 0.0502 | 0.9028 | 0.9501 |
| No log | 0.3602 | 210 | 0.9008 | 0.0690 | 0.9008 | 0.9491 |
| No log | 0.3636 | 212 | 0.8805 | 0.1004 | 0.8805 | 0.9383 |
| No log | 0.3671 | 214 | 0.8424 | 0.0820 | 0.8424 | 0.9178 |
| No log | 0.3705 | 216 | 0.8325 | 0.0753 | 0.8325 | 0.9124 |
| No log | 0.3739 | 218 | 0.8144 | 0.0874 | 0.8144 | 0.9025 |
| No log | 0.3774 | 220 | 0.8314 | 0.2189 | 0.8314 | 0.9118 |
| No log | 0.3808 | 222 | 0.8541 | 0.3114 | 0.8541 | 0.9242 |
| No log | 0.3842 | 224 | 0.8353 | 0.2907 | 0.8353 | 0.9140 |
| No log | 0.3877 | 226 | 0.8128 | 0.2936 | 0.8128 | 0.9015 |
| No log | 0.3911 | 228 | 0.7832 | 0.2022 | 0.7832 | 0.8850 |
| No log | 0.3945 | 230 | 0.7746 | 0.1606 | 0.7746 | 0.8801 |
| No log | 0.3979 | 232 | 0.7689 | 0.1796 | 0.7689 | 0.8768 |
| No log | 0.4014 | 234 | 0.7650 | 0.1695 | 0.7650 | 0.8747 |
| No log | 0.4048 | 236 | 0.7645 | 0.1502 | 0.7645 | 0.8743 |
| No log | 0.4082 | 238 | 0.7622 | 0.0676 | 0.7622 | 0.8730 |
| No log | 0.4117 | 240 | 0.7626 | 0.0693 | 0.7626 | 0.8733 |
| No log | 0.4151 | 242 | 0.7589 | 0.0777 | 0.7589 | 0.8711 |
| No log | 0.4185 | 244 | 0.7769 | 0.3239 | 0.7769 | 0.8814 |
| No log | 0.4220 | 246 | 0.8533 | 0.3712 | 0.8533 | 0.9238 |
| No log | 0.4254 | 248 | 0.8826 | 0.3042 | 0.8826 | 0.9395 |
| No log | 0.4288 | 250 | 0.8974 | 0.2468 | 0.8974 | 0.9473 |
| No log | 0.4322 | 252 | 0.8558 | 0.2418 | 0.8558 | 0.9251 |
| No log | 0.4357 | 254 | 0.8683 | 0.0838 | 0.8683 | 0.9318 |
| No log | 0.4391 | 256 | 1.8847 | 0.0116 | 1.8847 | 1.3728 |
| No log | 0.4425 | 258 | 1.9193 | 0.0196 | 1.9193 | 1.3854 |
| No log | 0.4460 | 260 | 1.7541 | 0.0279 | 1.7541 | 1.3244 |
| No log | 0.4494 | 262 | 1.1901 | 0.1307 | 1.1901 | 1.0909 |
| No log | 0.4528 | 264 | 0.8152 | 0.1156 | 0.8152 | 0.9029 |
| No log | 0.4563 | 266 | 0.7355 | 0.0860 | 0.7355 | 0.8576 |
| No log | 0.4597 | 268 | 0.7253 | 0.0693 | 0.7253 | 0.8517 |
| No log | 0.4631 | 270 | 0.7311 | 0.0728 | 0.7311 | 0.8551 |
| No log | 0.4666 | 272 | 0.7594 | 0.2769 | 0.7594 | 0.8715 |
| No log | 0.4700 | 274 | 0.7950 | 0.4128 | 0.7950 | 0.8916 |
| No log | 0.4734 | 276 | 0.8190 | 0.3663 | 0.8190 | 0.9050 |
| No log | 0.4768 | 278 | 0.8627 | 0.3435 | 0.8627 | 0.9288 |
| No log | 0.4803 | 280 | 0.8799 | 0.3281 | 0.8799 | 0.9381 |
| No log | 0.4837 | 282 | 0.8792 | 0.3281 | 0.8792 | 0.9377 |
| No log | 0.4871 | 284 | 0.8162 | 0.3401 | 0.8162 | 0.9034 |
| No log | 0.4906 | 286 | 0.7772 | 0.4092 | 0.7772 | 0.8816 |
| No log | 0.4940 | 288 | 0.7801 | 0.3999 | 0.7801 | 0.8832 |
| No log | 0.4974 | 290 | 0.7471 | 0.3607 | 0.7471 | 0.8643 |
| No log | 0.5009 | 292 | 0.7161 | 0.2118 | 0.7161 | 0.8462 |
| No log | 0.5043 | 294 | 0.7337 | 0.3039 | 0.7337 | 0.8565 |
| No log | 0.5077 | 296 | 0.7708 | 0.4149 | 0.7708 | 0.8779 |
| No log | 0.5111 | 298 | 0.7610 | 0.3708 | 0.7610 | 0.8724 |
| No log | 0.5146 | 300 | 0.7456 | 0.3050 | 0.7456 | 0.8635 |
| No log | 0.5180 | 302 | 0.7360 | 0.2361 | 0.7360 | 0.8579 |
| No log | 0.5214 | 304 | 0.7349 | 0.2281 | 0.7349 | 0.8573 |
| No log | 0.5249 | 306 | 0.7285 | 0.2043 | 0.7285 | 0.8536 |
| No log | 0.5283 | 308 | 0.7070 | 0.1514 | 0.7070 | 0.8408 |
| No log | 0.5317 | 310 | 0.7056 | 0.2389 | 0.7056 | 0.8400 |
| No log | 0.5352 | 312 | 0.6916 | 0.1769 | 0.6916 | 0.8316 |
| No log | 0.5386 | 314 | 0.6841 | 0.0977 | 0.6841 | 0.8271 |
| No log | 0.5420 | 316 | 0.7423 | 0.1207 | 0.7423 | 0.8616 |
| No log | 0.5455 | 318 | 0.9220 | 0.1621 | 0.9220 | 0.9602 |
| No log | 0.5489 | 320 | 1.0175 | 0.2216 | 1.0175 | 1.0087 |
| No log | 0.5523 | 322 | 1.0533 | 0.2130 | 1.0533 | 1.0263 |
| No log | 0.5557 | 324 | 0.9320 | 0.2034 | 0.9320 | 0.9654 |
| No log | 0.5592 | 326 | 0.7331 | 0.1527 | 0.7331 | 0.8562 |
| No log | 0.5626 | 328 | 0.6548 | 0.1728 | 0.6548 | 0.8092 |
| No log | 0.5660 | 330 | 0.6968 | 0.3633 | 0.6968 | 0.8347 |
| No log | 0.5695 | 332 | 0.7427 | 0.3691 | 0.7427 | 0.8618 |
| No log | 0.5729 | 334 | 0.7882 | 0.2644 | 0.7882 | 0.8878 |
| No log | 0.5763 | 336 | 0.8160 | 0.0905 | 0.8160 | 0.9033 |
| No log | 0.5798 | 338 | 0.8233 | 0.0540 | 0.8233 | 0.9073 |
| No log | 0.5832 | 340 | 0.8169 | 0.0581 | 0.8169 | 0.9038 |
| No log | 0.5866 | 342 | 0.8322 | 0.0558 | 0.8322 | 0.9123 |
| No log | 0.5901 | 344 | 0.8375 | 0.0558 | 0.8375 | 0.9152 |
| No log | 0.5935 | 346 | 0.8434 | 0.0558 | 0.8434 | 0.9184 |
| No log | 0.5969 | 348 | 0.8369 | 0.0655 | 0.8369 | 0.9148 |
| No log | 0.6003 | 350 | 0.8578 | 0.0558 | 0.8578 | 0.9262 |
| No log | 0.6038 | 352 | 0.8761 | 0.0533 | 0.8761 | 0.9360 |
| No log | 0.6072 | 354 | 0.8633 | 0.0533 | 0.8633 | 0.9291 |
| No log | 0.6106 | 356 | 0.8311 | 0.0533 | 0.8311 | 0.9117 |
| No log | 0.6141 | 358 | 0.8233 | 0.0533 | 0.8233 | 0.9074 |
| No log | 0.6175 | 360 | 0.8318 | 0.0545 | 0.8318 | 0.9120 |
| No log | 0.6209 | 362 | 0.8117 | 0.0672 | 0.8117 | 0.9010 |
| No log | 0.6244 | 364 | 0.7745 | 0.1929 | 0.7745 | 0.8800 |
| No log | 0.6278 | 366 | 0.7582 | 0.3273 | 0.7582 | 0.8707 |
| No log | 0.6312 | 368 | 0.7591 | 0.3862 | 0.7591 | 0.8713 |
| No log | 0.6346 | 370 | 0.7898 | 0.3950 | 0.7898 | 0.8887 |
| No log | 0.6381 | 372 | 0.7906 | 0.3908 | 0.7906 | 0.8892 |
| No log | 0.6415 | 374 | 0.7817 | 0.3939 | 0.7817 | 0.8841 |
| No log | 0.6449 | 376 | 0.7472 | 0.4034 | 0.7472 | 0.8644 |
| No log | 0.6484 | 378 | 0.6836 | 0.2956 | 0.6836 | 0.8268 |
| No log | 0.6518 | 380 | 0.6678 | 0.2076 | 0.6678 | 0.8172 |
| No log | 0.6552 | 382 | 0.6667 | 0.1808 | 0.6667 | 0.8165 |
| No log | 0.6587 | 384 | 0.6672 | 0.2037 | 0.6672 | 0.8168 |
| No log | 0.6621 | 386 | 0.6815 | 0.3610 | 0.6815 | 0.8255 |
| No log | 0.6655 | 388 | 0.7429 | 0.3978 | 0.7429 | 0.8619 |
| No log | 0.6690 | 390 | 0.8098 | 0.3911 | 0.8098 | 0.8999 |
| No log | 0.6724 | 392 | 0.8099 | 0.3792 | 0.8099 | 0.9000 |
| No log | 0.6758 | 394 | 0.8004 | 0.3823 | 0.8004 | 0.8946 |
| No log | 0.6792 | 396 | 0.7312 | 0.3793 | 0.7312 | 0.8551 |
| No log | 0.6827 | 398 | 0.7154 | 0.3665 | 0.7154 | 0.8458 |
| No log | 0.6861 | 400 | 0.6883 | 0.3723 | 0.6883 | 0.8297 |
| No log | 0.6895 | 402 | 0.6618 | 0.3733 | 0.6618 | 0.8135 |
| No log | 0.6930 | 404 | 0.6510 | 0.3558 | 0.6510 | 0.8068 |
| No log | 0.6964 | 406 | 0.6427 | 0.3455 | 0.6427 | 0.8017 |
| No log | 0.6998 | 408 | 0.6877 | 0.2104 | 0.6877 | 0.8293 |
| No log | 0.7033 | 410 | 0.7691 | 0.1952 | 0.7691 | 0.8770 |
| No log | 0.7067 | 412 | 0.6090 | 0.4214 | 0.6090 | 0.7804 |
| No log | 0.7101 | 414 | 0.6570 | 0.4416 | 0.6570 | 0.8105 |
| No log | 0.7136 | 416 | 0.6584 | 0.4009 | 0.6584 | 0.8114 |
| No log | 0.7170 | 418 | 0.6512 | 0.3557 | 0.6512 | 0.8070 |
| No log | 0.7204 | 420 | 0.7077 | 0.2119 | 0.7077 | 0.8413 |
| No log | 0.7238 | 422 | 0.7783 | 0.1751 | 0.7783 | 0.8822 |
| No log | 0.7273 | 424 | 0.7581 | 0.1718 | 0.7581 | 0.8707 |
| No log | 0.7307 | 426 | 0.7271 | 0.1811 | 0.7271 | 0.8527 |
| No log | 0.7341 | 428 | 0.7055 | 0.2054 | 0.7055 | 0.8399 |
| No log | 0.7376 | 430 | 0.6621 | 0.3369 | 0.6621 | 0.8137 |
| No log | 0.7410 | 432 | 0.6885 | 0.3977 | 0.6885 | 0.8297 |
| No log | 0.7444 | 434 | 0.6920 | 0.4186 | 0.6920 | 0.8318 |
| No log | 0.7479 | 436 | 0.6917 | 0.4300 | 0.6917 | 0.8317 |
| No log | 0.7513 | 438 | 0.6280 | 0.4431 | 0.6280 | 0.7925 |
| No log | 0.7547 | 440 | 0.5963 | 0.4405 | 0.5963 | 0.7722 |
| No log | 0.7581 | 442 | 0.5874 | 0.4380 | 0.5874 | 0.7664 |
| No log | 0.7616 | 444 | 0.5913 | 0.4492 | 0.5913 | 0.7689 |
| No log | 0.7650 | 446 | 0.6154 | 0.4674 | 0.6154 | 0.7845 |
| No log | 0.7684 | 448 | 0.6316 | 0.4507 | 0.6316 | 0.7947 |
| No log | 0.7719 | 450 | 0.5995 | 0.4604 | 0.5995 | 0.7743 |
| No log | 0.7753 | 452 | 0.5784 | 0.4653 | 0.5784 | 0.7605 |
| No log | 0.7787 | 454 | 0.5788 | 0.4011 | 0.5788 | 0.7608 |
| No log | 0.7822 | 456 | 0.6058 | 0.3145 | 0.6058 | 0.7783 |
| No log | 0.7856 | 458 | 0.6096 | 0.2966 | 0.6096 | 0.7808 |
| No log | 0.7890 | 460 | 0.6101 | 0.2799 | 0.6101 | 0.7811 |
| No log | 0.7925 | 462 | 0.6341 | 0.2733 | 0.6341 | 0.7963 |
| No log | 0.7959 | 464 | 0.5612 | 0.3275 | 0.5612 | 0.7491 |
| No log | 0.7993 | 466 | 0.5301 | 0.4283 | 0.5301 | 0.7281 |
| No log | 0.8027 | 468 | 0.5363 | 0.4775 | 0.5363 | 0.7323 |
| No log | 0.8062 | 470 | 0.5627 | 0.4966 | 0.5627 | 0.7502 |
| No log | 0.8096 | 472 | 0.6140 | 0.5264 | 0.6140 | 0.7836 |
| No log | 0.8130 | 474 | 0.6255 | 0.5175 | 0.6255 | 0.7909 |
| No log | 0.8165 | 476 | 0.6558 | 0.5165 | 0.6558 | 0.8098 |
| No log | 0.8199 | 478 | 0.6823 | 0.5129 | 0.6823 | 0.8260 |
| No log | 0.8233 | 480 | 0.6910 | 0.4902 | 0.6910 | 0.8313 |
| No log | 0.8268 | 482 | 0.6955 | 0.4858 | 0.6955 | 0.8340 |
| No log | 0.8302 | 484 | 0.6444 | 0.5138 | 0.6444 | 0.8028 |
| No log | 0.8336 | 486 | 0.6024 | 0.4028 | 0.6024 | 0.7762 |
| No log | 0.8370 | 488 | 0.5988 | 0.3254 | 0.5988 | 0.7738 |
| No log | 0.8405 | 490 | 0.5972 | 0.3724 | 0.5972 | 0.7728 |
| No log | 0.8439 | 492 | 0.5991 | 0.4367 | 0.5991 | 0.7740 |
| No log | 0.8473 | 494 | 0.6037 | 0.4775 | 0.6037 | 0.7770 |
| No log | 0.8508 | 496 | 0.5667 | 0.4162 | 0.5667 | 0.7528 |
| No log | 0.8542 | 498 | 0.5614 | 0.3345 | 0.5614 | 0.7492 |
| 1.4467 | 0.8576 | 500 | 0.5595 | 0.3099 | 0.5595 | 0.7480 |
| 1.4467 | 0.8611 | 502 | 0.5750 | 0.3021 | 0.5750 | 0.7583 |
| 1.4467 | 0.8645 | 504 | 0.5486 | 0.3463 | 0.5486 | 0.7407 |
| 1.4467 | 0.8679 | 506 | 0.5228 | 0.4251 | 0.5228 | 0.7230 |
| 1.4467 | 0.8714 | 508 | 0.5220 | 0.4514 | 0.5220 | 0.7225 |
| 1.4467 | 0.8748 | 510 | 0.5299 | 0.5216 | 0.5299 | 0.7280 |
| 1.4467 | 0.8782 | 512 | 0.5365 | 0.5290 | 0.5365 | 0.7324 |
| 1.4467 | 0.8816 | 514 | 0.5426 | 0.5082 | 0.5426 | 0.7366 |
| 1.4467 | 0.8851 | 516 | 0.5536 | 0.5262 | 0.5536 | 0.7441 |
| 1.4467 | 0.8885 | 518 | 0.6244 | 0.5335 | 0.6244 | 0.7902 |
| 1.4467 | 0.8919 | 520 | 0.6433 | 0.5236 | 0.6433 | 0.8020 |
| 1.4467 | 0.8954 | 522 | 0.6888 | 0.5369 | 0.6888 | 0.8299 |
| 1.4467 | 0.8988 | 524 | 0.8037 | 0.4895 | 0.8037 | 0.8965 |
| 1.4467 | 0.9022 | 526 | 0.7234 | 0.5176 | 0.7234 | 0.8506 |
| 1.4467 | 0.9057 | 528 | 0.7297 | 0.5097 | 0.7297 | 0.8542 |
| 1.4467 | 0.9091 | 530 | 0.7516 | 0.5034 | 0.7516 | 0.8669 |
| 1.4467 | 0.9125 | 532 | 0.7611 | 0.4916 | 0.7611 | 0.8724 |
| 1.4467 | 0.9160 | 534 | 0.6036 | 0.5769 | 0.6036 | 0.7769 |
| 1.4467 | 0.9194 | 536 | 0.4908 | 0.4728 | 0.4908 | 0.7005 |
| 1.4467 | 0.9228 | 538 | 0.5400 | 0.4473 | 0.5400 | 0.7348 |
| 1.4467 | 0.9262 | 540 | 0.6013 | 0.3841 | 0.6013 | 0.7754 |
| 1.4467 | 0.9297 | 542 | 0.5468 | 0.4434 | 0.5468 | 0.7394 |
| 1.4467 | 0.9331 | 544 | 0.4979 | 0.4585 | 0.4979 | 0.7056 |
| 1.4467 | 0.9365 | 546 | 0.4855 | 0.4803 | 0.4855 | 0.6968 |
| 1.4467 | 0.9400 | 548 | 0.4848 | 0.5022 | 0.4848 | 0.6963 |
| 1.4467 | 0.9434 | 550 | 0.4907 | 0.5268 | 0.4907 | 0.7005 |
| 1.4467 | 0.9468 | 552 | 0.5412 | 0.5541 | 0.5412 | 0.7356 |
| 1.4467 | 0.9503 | 554 | 0.5753 | 0.5743 | 0.5753 | 0.7585 |
| 1.4467 | 0.9537 | 556 | 0.5410 | 0.5593 | 0.5410 | 0.7355 |
| 1.4467 | 0.9571 | 558 | 0.5243 | 0.5358 | 0.5243 | 0.7241 |
| 1.4467 | 0.9605 | 560 | 0.5355 | 0.5508 | 0.5355 | 0.7318 |
| 1.4467 | 0.9640 | 562 | 0.6039 | 0.5449 | 0.6039 | 0.7771 |
| 1.4467 | 0.9674 | 564 | 0.5849 | 0.5535 | 0.5849 | 0.7648 |
| 1.4467 | 0.9708 | 566 | 0.5134 | 0.5411 | 0.5134 | 0.7165 |
| 1.4467 | 0.9743 | 568 | 0.4942 | 0.4772 | 0.4942 | 0.7030 |
| 1.4467 | 0.9777 | 570 | 0.5022 | 0.4533 | 0.5022 | 0.7087 |
| 1.4467 | 0.9811 | 572 | 0.4918 | 0.5004 | 0.4918 | 0.7013 |
| 1.4467 | 0.9846 | 574 | 0.5402 | 0.5311 | 0.5402 | 0.7350 |
| 1.4467 | 0.9880 | 576 | 0.5852 | 0.5550 | 0.5852 | 0.7650 |
| 1.4467 | 0.9914 | 578 | 0.5398 | 0.5377 | 0.5398 | 0.7347 |
| 1.4467 | 0.9949 | 580 | 0.5121 | 0.5300 | 0.5121 | 0.7156 |
| 1.4467 | 0.9983 | 582 | 0.4940 | 0.4900 | 0.4940 | 0.7028 |
| 1.4467 | 1.0017 | 584 | 0.5047 | 0.4475 | 0.5047 | 0.7104 |
| 1.4467 | 1.0051 | 586 | 0.5087 | 0.4404 | 0.5087 | 0.7132 |
| 1.4467 | 1.0086 | 588 | 0.4969 | 0.4986 | 0.4969 | 0.7049 |
| 1.4467 | 1.0120 | 590 | 0.4991 | 0.5336 | 0.4991 | 0.7065 |
| 1.4467 | 1.0154 | 592 | 0.5040 | 0.5444 | 0.5040 | 0.7099 |
| 1.4467 | 1.0189 | 594 | 0.4863 | 0.5234 | 0.4863 | 0.6973 |
| 1.4467 | 1.0223 | 596 | 0.4988 | 0.4800 | 0.4988 | 0.7063 |
| 1.4467 | 1.0257 | 598 | 0.5908 | 0.4059 | 0.5908 | 0.7686 |
| 1.4467 | 1.0292 | 600 | 0.6252 | 0.3926 | 0.6252 | 0.7907 |
| 1.4467 | 1.0326 | 602 | 0.5112 | 0.4579 | 0.5112 | 0.7150 |
| 1.4467 | 1.0360 | 604 | 0.5122 | 0.5698 | 0.5122 | 0.7157 |
| 1.4467 | 1.0395 | 606 | 0.7310 | 0.4911 | 0.7310 | 0.8550 |
| 1.4467 | 1.0429 | 608 | 0.8856 | 0.4021 | 0.8856 | 0.9411 |
| 1.4467 | 1.0463 | 610 | 0.9630 | 0.3085 | 0.9630 | 0.9813 |
| 1.4467 | 1.0497 | 612 | 0.9528 | 0.2853 | 0.9528 | 0.9761 |
| 1.4467 | 1.0532 | 614 | 0.9115 | 0.2412 | 0.9115 | 0.9547 |
| 1.4467 | 1.0566 | 616 | 0.8846 | 0.1487 | 0.8846 | 0.9406 |
| 1.4467 | 1.0600 | 618 | 0.7885 | 0.3236 | 0.7885 | 0.8880 |
| 1.4467 | 1.0635 | 620 | 0.6935 | 0.4716 | 0.6935 | 0.8328 |
| 1.4467 | 1.0669 | 622 | 0.6428 | 0.5356 | 0.6428 | 0.8017 |
| 1.4467 | 1.0703 | 624 | 0.6412 | 0.5383 | 0.6412 | 0.8007 |
| 1.4467 | 1.0738 | 626 | 0.6226 | 0.5171 | 0.6226 | 0.7891 |
| 1.4467 | 1.0772 | 628 | 0.5974 | 0.5300 | 0.5974 | 0.7729 |
| 1.4467 | 1.0806 | 630 | 0.5614 | 0.5356 | 0.5614 | 0.7492 |
| 1.4467 | 1.0840 | 632 | 0.5480 | 0.5475 | 0.5480 | 0.7403 |
| 1.4467 | 1.0875 | 634 | 0.5286 | 0.5256 | 0.5286 | 0.7271 |
| 1.4467 | 1.0909 | 636 | 0.5107 | 0.4668 | 0.5107 | 0.7146 |
| 1.4467 | 1.0943 | 638 | 0.5219 | 0.3653 | 0.5219 | 0.7224 |
| 1.4467 | 1.0978 | 640 | 0.5869 | 0.2661 | 0.5869 | 0.7661 |
| 1.4467 | 1.1012 | 642 | 0.6165 | 0.2497 | 0.6165 | 0.7851 |
| 1.4467 | 1.1046 | 644 | 0.5858 | 0.2664 | 0.5858 | 0.7654 |
| 1.4467 | 1.1081 | 646 | 0.5430 | 0.3289 | 0.5430 | 0.7369 |
| 1.4467 | 1.1115 | 648 | 0.5303 | 0.3594 | 0.5303 | 0.7282 |
| 1.4467 | 1.1149 | 650 | 0.5374 | 0.3472 | 0.5374 | 0.7331 |
| 1.4467 | 1.1184 | 652 | 0.5604 | 0.3247 | 0.5604 | 0.7486 |
| 1.4467 | 1.1218 | 654 | 0.5965 | 0.2873 | 0.5965 | 0.7723 |
| 1.4467 | 1.1252 | 656 | 0.6517 | 0.2560 | 0.6517 | 0.8073 |
| 1.4467 | 1.1286 | 658 | 0.6403 | 0.2600 | 0.6403 | 0.8002 |
| 1.4467 | 1.1321 | 660 | 0.5611 | 0.3158 | 0.5611 | 0.7490 |
| 1.4467 | 1.1355 | 662 | 0.5172 | 0.4185 | 0.5172 | 0.7192 |
| 1.4467 | 1.1389 | 664 | 0.5615 | 0.5188 | 0.5615 | 0.7493 |
| 1.4467 | 1.1424 | 666 | 0.6296 | 0.5406 | 0.6296 | 0.7935 |
| 1.4467 | 1.1458 | 668 | 0.6790 | 0.5060 | 0.6790 | 0.8240 |
| 1.4467 | 1.1492 | 670 | 0.6252 | 0.5295 | 0.6252 | 0.7907 |
| 1.4467 | 1.1527 | 672 | 0.5617 | 0.5067 | 0.5617 | 0.7495 |
| 1.4467 | 1.1561 | 674 | 0.5482 | 0.5205 | 0.5482 | 0.7404 |
| 1.4467 | 1.1595 | 676 | 0.5532 | 0.5105 | 0.5532 | 0.7438 |
| 1.4467 | 1.1630 | 678 | 0.5241 | 0.5325 | 0.5241 | 0.7239 |
| 1.4467 | 1.1664 | 680 | 0.5108 | 0.4480 | 0.5108 | 0.7147 |
| 1.4467 | 1.1698 | 682 | 0.5547 | 0.2923 | 0.5547 | 0.7448 |
| 1.4467 | 1.1732 | 684 | 0.5698 | 0.2945 | 0.5698 | 0.7549 |
| 1.4467 | 1.1767 | 686 | 0.5462 | 0.3037 | 0.5462 | 0.7390 |
| 1.4467 | 1.1801 | 688 | 0.5157 | 0.4053 | 0.5157 | 0.7181 |
| 1.4467 | 1.1835 | 690 | 0.5196 | 0.5144 | 0.5196 | 0.7208 |
| 1.4467 | 1.1870 | 692 | 0.5475 | 0.5311 | 0.5475 | 0.7400 |
| 1.4467 | 1.1904 | 694 | 0.6014 | 0.5623 | 0.6014 | 0.7755 |
| 1.4467 | 1.1938 | 696 | 0.6404 | 0.5366 | 0.6404 | 0.8003 |
| 1.4467 | 1.1973 | 698 | 0.6191 | 0.5592 | 0.6191 | 0.7868 |
| 1.4467 | 1.2007 | 700 | 0.5985 | 0.5541 | 0.5985 | 0.7736 |
| 1.4467 | 1.2041 | 702 | 0.5791 | 0.5440 | 0.5791 | 0.7610 |
| 1.4467 | 1.2075 | 704 | 0.5684 | 0.5482 | 0.5684 | 0.7539 |
| 1.4467 | 1.2110 | 706 | 0.5335 | 0.5284 | 0.5335 | 0.7304 |
| 1.4467 | 1.2144 | 708 | 0.5002 | 0.5346 | 0.5002 | 0.7073 |
| 1.4467 | 1.2178 | 710 | 0.4808 | 0.5058 | 0.4808 | 0.6934 |
| 1.4467 | 1.2213 | 712 | 0.4982 | 0.4652 | 0.4982 | 0.7058 |
| 1.4467 | 1.2247 | 714 | 0.5175 | 0.4509 | 0.5175 | 0.7194 |
| 1.4467 | 1.2281 | 716 | 0.5188 | 0.4514 | 0.5188 | 0.7203 |
| 1.4467 | 1.2316 | 718 | 0.5650 | 0.4206 | 0.5650 | 0.7517 |
| 1.4467 | 1.2350 | 720 | 0.6433 | 0.3495 | 0.6433 | 0.8021 |
| 1.4467 | 1.2384 | 722 | 0.5728 | 0.4136 | 0.5728 | 0.7568 |
| 1.4467 | 1.2419 | 724 | 0.4843 | 0.5375 | 0.4843 | 0.6959 |
| 1.4467 | 1.2453 | 726 | 0.5314 | 0.5164 | 0.5314 | 0.7290 |
| 1.4467 | 1.2487 | 728 | 0.5652 | 0.4515 | 0.5652 | 0.7518 |
| 1.4467 | 1.2521 | 730 | 0.5401 | 0.5090 | 0.5401 | 0.7349 |
| 1.4467 | 1.2556 | 732 | 0.5091 | 0.5267 | 0.5091 | 0.7135 |
| 1.4467 | 1.2590 | 734 | 0.5036 | 0.5331 | 0.5036 | 0.7096 |
| 1.4467 | 1.2624 | 736 | 0.5180 | 0.5579 | 0.5180 | 0.7197 |
| 1.4467 | 1.2659 | 738 | 0.5450 | 0.5734 | 0.5450 | 0.7382 |
| 1.4467 | 1.2693 | 740 | 0.6033 | 0.5591 | 0.6033 | 0.7767 |
| 1.4467 | 1.2727 | 742 | 0.6785 | 0.5214 | 0.6785 | 0.8237 |
| 1.4467 | 1.2762 | 744 | 0.7074 | 0.5244 | 0.7074 | 0.8411 |
| 1.4467 | 1.2796 | 746 | 0.7368 | 0.5110 | 0.7368 | 0.8584 |
| 1.4467 | 1.2830 | 748 | 0.7032 | 0.5253 | 0.7032 | 0.8386 |
| 1.4467 | 1.2864 | 750 | 0.6386 | 0.5526 | 0.6386 | 0.7991 |
| 1.4467 | 1.2899 | 752 | 0.5645 | 0.5853 | 0.5645 | 0.7514 |
| 1.4467 | 1.2933 | 754 | 0.5120 | 0.5700 | 0.5120 | 0.7155 |
| 1.4467 | 1.2967 | 756 | 0.5050 | 0.5636 | 0.5050 | 0.7106 |
| 1.4467 | 1.3002 | 758 | 0.4909 | 0.5660 | 0.4909 | 0.7006 |
| 1.4467 | 1.3036 | 760 | 0.4752 | 0.5537 | 0.4752 | 0.6894 |
| 1.4467 | 1.3070 | 762 | 0.4705 | 0.5425 | 0.4705 | 0.6859 |
| 1.4467 | 1.3105 | 764 | 0.4769 | 0.5650 | 0.4769 | 0.6906 |
| 1.4467 | 1.3139 | 766 | 0.5247 | 0.5878 | 0.5247 | 0.7244 |
| 1.4467 | 1.3173 | 768 | 0.5567 | 0.5820 | 0.5567 | 0.7461 |
| 1.4467 | 1.3208 | 770 | 0.5621 | 0.5798 | 0.5621 | 0.7497 |
| 1.4467 | 1.3242 | 772 | 0.5323 | 0.5977 | 0.5323 | 0.7296 |
| 1.4467 | 1.3276 | 774 | 0.5068 | 0.5952 | 0.5068 | 0.7119 |
| 1.4467 | 1.3310 | 776 | 0.4927 | 0.5798 | 0.4927 | 0.7019 |
| 1.4467 | 1.3345 | 778 | 0.5172 | 0.5159 | 0.5172 | 0.7192 |
| 1.4467 | 1.3379 | 780 | 0.5354 | 0.4926 | 0.5354 | 0.7317 |
| 1.4467 | 1.3413 | 782 | 0.5227 | 0.4901 | 0.5227 | 0.7230 |
| 1.4467 | 1.3448 | 784 | 0.5517 | 0.4711 | 0.5517 | 0.7427 |
| 1.4467 | 1.3482 | 786 | 0.6068 | 0.4424 | 0.6068 | 0.7790 |
| 1.4467 | 1.3516 | 788 | 0.5734 | 0.4632 | 0.5734 | 0.7573 |
| 1.4467 | 1.3551 | 790 | 0.5509 | 0.4758 | 0.5509 | 0.7423 |
| 1.4467 | 1.3585 | 792 | 0.5378 | 0.4870 | 0.5378 | 0.7333 |
| 1.4467 | 1.3619 | 794 | 0.4953 | 0.5192 | 0.4953 | 0.7038 |
| 1.4467 | 1.3654 | 796 | 0.4681 | 0.5414 | 0.4681 | 0.6842 |
| 1.4467 | 1.3688 | 798 | 0.4889 | 0.5946 | 0.4889 | 0.6992 |
| 1.4467 | 1.3722 | 800 | 0.5452 | 0.6009 | 0.5452 | 0.7384 |
| 1.4467 | 1.3756 | 802 | 0.6980 | 0.5381 | 0.6980 | 0.8355 |
| 1.4467 | 1.3791 | 804 | 0.8372 | 0.4942 | 0.8372 | 0.9150 |
| 1.4467 | 1.3825 | 806 | 0.9861 | 0.4654 | 0.9861 | 0.9930 |
| 1.4467 | 1.3859 | 808 | 1.1494 | 0.4320 | 1.1494 | 1.0721 |
| 1.4467 | 1.3894 | 810 | 1.1945 | 0.4135 | 1.1945 | 1.0929 |
| 1.4467 | 1.3928 | 812 | 1.0414 | 0.4591 | 1.0414 | 1.0205 |
| 1.4467 | 1.3962 | 814 | 0.7966 | 0.5238 | 0.7966 | 0.8925 |
| 1.4467 | 1.3997 | 816 | 0.5620 | 0.6009 | 0.5620 | 0.7497 |
| 1.4467 | 1.4031 | 818 | 0.4917 | 0.5098 | 0.4917 | 0.7012 |
| 1.4467 | 1.4065 | 820 | 0.5848 | 0.4308 | 0.5848 | 0.7647 |
| 1.4467 | 1.4099 | 822 | 0.6022 | 0.4328 | 0.6022 | 0.7760 |
| 1.4467 | 1.4134 | 824 | 0.5356 | 0.4644 | 0.5356 | 0.7319 |
| 1.4467 | 1.4168 | 826 | 0.4930 | 0.4972 | 0.4930 | 0.7021 |
| 1.4467 | 1.4202 | 828 | 0.5180 | 0.5755 | 0.5180 | 0.7197 |
| 1.4467 | 1.4237 | 830 | 0.5769 | 0.5715 | 0.5769 | 0.7595 |
| 1.4467 | 1.4271 | 832 | 0.6236 | 0.5765 | 0.6236 | 0.7897 |
| 1.4467 | 1.4305 | 834 | 0.6349 | 0.5654 | 0.6349 | 0.7968 |
| 1.4467 | 1.4340 | 836 | 0.5807 | 0.5617 | 0.5807 | 0.7621 |
| 1.4467 | 1.4374 | 838 | 0.5394 | 0.5114 | 0.5394 | 0.7344 |
| 1.4467 | 1.4408 | 840 | 0.5532 | 0.4129 | 0.5532 | 0.7438 |
| 1.4467 | 1.4443 | 842 | 0.5295 | 0.4610 | 0.5295 | 0.7277 |
| 1.4467 | 1.4477 | 844 | 0.5082 | 0.5507 | 0.5082 | 0.7129 |
| 1.4467 | 1.4511 | 846 | 0.5605 | 0.5704 | 0.5605 | 0.7487 |
| 1.4467 | 1.4545 | 848 | 0.5777 | 0.5612 | 0.5777 | 0.7601 |
| 1.4467 | 1.4580 | 850 | 0.5747 | 0.5383 | 0.5747 | 0.7581 |
| 1.4467 | 1.4614 | 852 | 0.5844 | 0.5035 | 0.5844 | 0.7644 |
| 1.4467 | 1.4648 | 854 | 0.6067 | 0.4904 | 0.6067 | 0.7789 |
| 1.4467 | 1.4683 | 856 | 0.6499 | 0.4798 | 0.6499 | 0.8062 |
| 1.4467 | 1.4717 | 858 | 0.6804 | 0.4627 | 0.6804 | 0.8249 |
| 1.4467 | 1.4751 | 860 | 0.7119 | 0.4594 | 0.7119 | 0.8438 |
| 1.4467 | 1.4786 | 862 | 0.6832 | 0.4750 | 0.6832 | 0.8265 |
| 1.4467 | 1.4820 | 864 | 0.6485 | 0.4806 | 0.6485 | 0.8053 |
| 1.4467 | 1.4854 | 866 | 0.6089 | 0.4928 | 0.6089 | 0.7803 |
| 1.4467 | 1.4889 | 868 | 0.5959 | 0.5352 | 0.5959 | 0.7720 |
| 1.4467 | 1.4923 | 870 | 0.6194 | 0.5600 | 0.6194 | 0.7870 |
| 1.4467 | 1.4957 | 872 | 0.6318 | 0.5642 | 0.6318 | 0.7949 |
| 1.4467 | 1.4991 | 874 | 0.6293 | 0.5612 | 0.6293 | 0.7933 |
| 1.4467 | 1.5026 | 876 | 0.6245 | 0.5551 | 0.6245 | 0.7903 |
| 1.4467 | 1.5060 | 878 | 0.6221 | 0.5464 | 0.6221 | 0.7888 |
| 1.4467 | 1.5094 | 880 | 0.6268 | 0.5602 | 0.6268 | 0.7917 |
| 1.4467 | 1.5129 | 882 | 0.6381 | 0.5641 | 0.6381 | 0.7988 |
| 1.4467 | 1.5163 | 884 | 0.7047 | 0.5904 | 0.7047 | 0.8394 |
| 1.4467 | 1.5197 | 886 | 0.8195 | 0.5681 | 0.8195 | 0.9053 |
| 1.4467 | 1.5232 | 888 | 0.9176 | 0.5200 | 0.9176 | 0.9579 |
| 1.4467 | 1.5266 | 890 | 0.8808 | 0.5355 | 0.8808 | 0.9385 |
| 1.4467 | 1.5300 | 892 | 0.8105 | 0.5565 | 0.8105 | 0.9003 |
| 1.4467 | 1.5334 | 894 | 0.7524 | 0.5707 | 0.7524 | 0.8674 |
| 1.4467 | 1.5369 | 896 | 0.6910 | 0.5920 | 0.6910 | 0.8312 |
| 1.4467 | 1.5403 | 898 | 0.6964 | 0.5859 | 0.6964 | 0.8345 |
| 1.4467 | 1.5437 | 900 | 0.6703 | 0.5921 | 0.6703 | 0.8187 |
| 1.4467 | 1.5472 | 902 | 0.6408 | 0.5879 | 0.6408 | 0.8005 |
| 1.4467 | 1.5506 | 904 | 0.6144 | 0.5458 | 0.6144 | 0.7838 |
| 1.4467 | 1.5540 | 906 | 0.5912 | 0.5258 | 0.5912 | 0.7689 |
| 1.4467 | 1.5575 | 908 | 0.6039 | 0.4918 | 0.6039 | 0.7771 |
| 1.4467 | 1.5609 | 910 | 0.7268 | 0.4464 | 0.7268 | 0.8525 |
| 1.4467 | 1.5643 | 912 | 0.8848 | 0.4019 | 0.8848 | 0.9406 |
| 1.4467 | 1.5678 | 914 | 0.8112 | 0.4244 | 0.8112 | 0.9007 |
| 1.4467 | 1.5712 | 916 | 0.6855 | 0.4701 | 0.6855 | 0.8279 |
| 1.4467 | 1.5746 | 918 | 0.5862 | 0.4971 | 0.5862 | 0.7656 |
| 1.4467 | 1.5780 | 920 | 0.5316 | 0.5337 | 0.5316 | 0.7291 |
| 1.4467 | 1.5815 | 922 | 0.5797 | 0.6002 | 0.5797 | 0.7614 |
| 1.4467 | 1.5849 | 924 | 0.6757 | 0.5839 | 0.6757 | 0.8220 |
| 1.4467 | 1.5883 | 926 | 0.7027 | 0.5251 | 0.7027 | 0.8383 |
| 1.4467 | 1.5918 | 928 | 0.6812 | 0.4456 | 0.6812 | 0.8253 |
| 1.4467 | 1.5952 | 930 | 0.6881 | 0.4488 | 0.6881 | 0.8295 |
| 1.4467 | 1.5986 | 932 | 0.7299 | 0.4800 | 0.7299 | 0.8544 |
| 1.4467 | 1.6021 | 934 | 0.7679 | 0.5006 | 0.7679 | 0.8763 |
| 1.4467 | 1.6055 | 936 | 0.7714 | 0.4971 | 0.7714 | 0.8783 |
| 1.4467 | 1.6089 | 938 | 0.7134 | 0.5341 | 0.7134 | 0.8446 |
| 1.4467 | 1.6123 | 940 | 0.6023 | 0.5899 | 0.6023 | 0.7761 |
| 1.4467 | 1.6158 | 942 | 0.5084 | 0.5953 | 0.5084 | 0.7130 |
| 1.4467 | 1.6192 | 944 | 0.4845 | 0.5553 | 0.4845 | 0.6960 |
| 1.4467 | 1.6226 | 946 | 0.5044 | 0.5000 | 0.5044 | 0.7102 |
| 1.4467 | 1.6261 | 948 | 0.5057 | 0.5064 | 0.5057 | 0.7111 |
| 1.4467 | 1.6295 | 950 | 0.4857 | 0.5267 | 0.4857 | 0.6969 |
| 1.4467 | 1.6329 | 952 | 0.4853 | 0.5538 | 0.4853 | 0.6966 |
| 1.4467 | 1.6364 | 954 | 0.5263 | 0.5920 | 0.5263 | 0.7255 |
| 1.4467 | 1.6398 | 956 | 0.5516 | 0.5825 | 0.5516 | 0.7427 |
| 1.4467 | 1.6432 | 958 | 0.5468 | 0.5930 | 0.5468 | 0.7394 |
| 1.4467 | 1.6467 | 960 | 0.5185 | 0.5757 | 0.5185 | 0.7200 |
| 1.4467 | 1.6501 | 962 | 0.5088 | 0.4945 | 0.5088 | 0.7133 |
| 1.4467 | 1.6535 | 964 | 0.5813 | 0.3930 | 0.5813 | 0.7625 |
| 1.4467 | 1.6569 | 966 | 0.5810 | 0.4091 | 0.5810 | 0.7622 |
| 1.4467 | 1.6604 | 968 | 0.5508 | 0.4628 | 0.5508 | 0.7422 |
| 1.4467 | 1.6638 | 970 | 0.4762 | 0.5199 | 0.4762 | 0.6901 |
| 1.4467 | 1.6672 | 972 | 0.4902 | 0.5730 | 0.4902 | 0.7002 |
| 1.4467 | 1.6707 | 974 | 0.5267 | 0.5791 | 0.5267 | 0.7258 |
| 1.4467 | 1.6741 | 976 | 0.5422 | 0.5672 | 0.5422 | 0.7363 |
| 1.4467 | 1.6775 | 978 | 0.5207 | 0.5731 | 0.5207 | 0.7216 |
| 1.4467 | 1.6810 | 980 | 0.5013 | 0.5841 | 0.5013 | 0.7080 |
| 1.4467 | 1.6844 | 982 | 0.5081 | 0.5773 | 0.5081 | 0.7128 |
| 1.4467 | 1.6878 | 984 | 0.5251 | 0.5647 | 0.5251 | 0.7246 |
| 1.4467 | 1.6913 | 986 | 0.5149 | 0.5633 | 0.5149 | 0.7176 |
| 1.4467 | 1.6947 | 988 | 0.4974 | 0.5435 | 0.4974 | 0.7052 |
| 1.4467 | 1.6981 | 990 | 0.4818 | 0.5208 | 0.4818 | 0.6941 |
| 1.4467 | 1.7015 | 992 | 0.4888 | 0.4916 | 0.4888 | 0.6991 |
| 1.4467 | 1.7050 | 994 | 0.5038 | 0.4543 | 0.5038 | 0.7098 |
| 1.4467 | 1.7084 | 996 | 0.5016 | 0.4525 | 0.5016 | 0.7082 |
| 1.4467 | 1.7118 | 998 | 0.4872 | 0.4982 | 0.4872 | 0.6980 |
| 1.307 | 1.7153 | 1000 | 0.4887 | 0.4958 | 0.4887 | 0.6991 |
| 1.307 | 1.7187 | 1002 | 0.5017 | 0.5087 | 0.5017 | 0.7083 |
| 1.307 | 1.7221 | 1004 | 0.5236 | 0.5381 | 0.5236 | 0.7236 |
| 1.307 | 1.7256 | 1006 | 0.5646 | 0.5846 | 0.5646 | 0.7514 |
| 1.307 | 1.7290 | 1008 | 0.5972 | 0.5660 | 0.5972 | 0.7728 |
| 1.307 | 1.7324 | 1010 | 0.5695 | 0.5561 | 0.5695 | 0.7546 |
| 1.307 | 1.7358 | 1012 | 0.5178 | 0.5463 | 0.5178 | 0.7196 |
| 1.307 | 1.7393 | 1014 | 0.4861 | 0.5501 | 0.4861 | 0.6972 |
| 1.307 | 1.7427 | 1016 | 0.4876 | 0.5395 | 0.4876 | 0.6983 |
| 1.307 | 1.7461 | 1018 | 0.4866 | 0.5360 | 0.4866 | 0.6975 |
| 1.307 | 1.7496 | 1020 | 0.4853 | 0.5405 | 0.4853 | 0.6966 |
| 1.307 | 1.7530 | 1022 | 0.4882 | 0.5377 | 0.4882 | 0.6987 |
| 1.307 | 1.7564 | 1024 | 0.5056 | 0.5343 | 0.5056 | 0.7110 |
| 1.307 | 1.7599 | 1026 | 0.5697 | 0.5727 | 0.5697 | 0.7548 |
| 1.307 | 1.7633 | 1028 | 0.5984 | 0.5816 | 0.5984 | 0.7735 |
| 1.307 | 1.7667 | 1030 | 0.5625 | 0.5673 | 0.5625 | 0.7500 |
| 1.307 | 1.7702 | 1032 | 0.5218 | 0.5736 | 0.5218 | 0.7223 |
| 1.307 | 1.7736 | 1034 | 0.4841 | 0.5602 | 0.4841 | 0.6958 |
| 1.307 | 1.7770 | 1036 | 0.4750 | 0.5322 | 0.4750 | 0.6892 |
| 1.307 | 1.7804 | 1038 | 0.5044 | 0.4753 | 0.5044 | 0.7102 |
| 1.307 | 1.7839 | 1040 | 0.5388 | 0.4538 | 0.5388 | 0.7340 |
| 1.307 | 1.7873 | 1042 | 0.5200 | 0.4602 | 0.5200 | 0.7211 |
| 1.307 | 1.7907 | 1044 | 0.4939 | 0.4771 | 0.4939 | 0.7028 |
| 1.307 | 1.7942 | 1046 | 0.4687 | 0.5325 | 0.4687 | 0.6846 |
| 1.307 | 1.7976 | 1048 | 0.4940 | 0.5668 | 0.4940 | 0.7029 |
| 1.307 | 1.8010 | 1050 | 0.5585 | 0.5803 | 0.5585 | 0.7473 |
| 1.307 | 1.8045 | 1052 | 0.5689 | 0.5679 | 0.5689 | 0.7542 |
| 1.307 | 1.8079 | 1054 | 0.5711 | 0.5514 | 0.5711 | 0.7557 |
| 1.307 | 1.8113 | 1056 | 0.6434 | 0.5778 | 0.6434 | 0.8021 |
| 1.307 | 1.8148 | 1058 | 0.7629 | 0.5149 | 0.7629 | 0.8735 |
| 1.307 | 1.8182 | 1060 | 0.8838 | 0.4890 | 0.8838 | 0.9401 |
| 1.307 | 1.8216 | 1062 | 0.9201 | 0.4814 | 0.9201 | 0.9592 |
| 1.307 | 1.8250 | 1064 | 0.9079 | 0.4724 | 0.9079 | 0.9529 |
| 1.307 | 1.8285 | 1066 | 0.8348 | 0.4747 | 0.8348 | 0.9137 |
| 1.307 | 1.8319 | 1068 | 0.8540 | 0.4885 | 0.8540 | 0.9241 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ttorolee/qwen2.5_7b_it_100 | ttorolee | "2024-11-08T08:07:11Z" | 5 | 0 | null | [
"safetensors",
"qwen2",
"law",
"unsloth",
"trl",
"sft",
"text-generation",
"conversational",
"ko",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-11-08T06:58:36Z" | ---
license: apache-2.0
language:
- ko
- en
base_model:
- unsloth/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- law
- unsloth
- trl
- sft
---
|
RylanSchaeffer/pythia-70m_tatsu-lab_alpaca_farm_sftsd0_policy_pythia-6.9b_gold_internlm2-7b_noise0.25_rmsd2 | RylanSchaeffer | "2024-07-31T00:00:58Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0",
"base_model:finetune:RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-31T00:00:52Z" | ---
license: apache-2.0
base_model: RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pythia-70m_tatsu-lab_alpaca_farm_sftsd0_policy_pythia-6.9b_gold_internlm2-7b_noise0.25_rmsd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rylan/switching-rms-rm/runs/9c8zd3l2)
# pythia-70m_tatsu-lab_alpaca_farm_sftsd0_policy_pythia-6.9b_gold_internlm2-7b_noise0.25_rmsd2
This model is a fine-tuned version of [RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0](https://huggingface.co/RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7715
- Accuracy: 0.5312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.025
- num_epochs: 5
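As a rough illustration, a reward-model run with these settings could be launched with TRL's `RewardTrainer` along the following lines. The output path is a placeholder, and `train_ds`/`eval_ds` stand in for a preference dataset of chosen/rejected pairs that is not named in this card:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base = "RylanSchaeffer/EleutherAI_pythia-70m_tatsu-lab_alpaca_farm_sftseed0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

config = RewardConfig(
    output_dir="rm-output",            # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,     # total train batch size: 32
    seed=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,
    num_train_epochs=5,
)

# train_ds / eval_ds: assumed preference data (chosen/rejected pairs) prepared
# in the format RewardTrainer expects; the actual dataset is not specified above.
# Older TRL releases take a `tokenizer=` argument; newer ones use `processing_class=`.
trainer = RewardTrainer(
    model=model,
    args=config,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```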
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0 | 0 | 0.8791 | 0.5158 |
| 0.8237 | 0.0648 | 100 | 0.8836 | 0.5077 |
| 0.8692 | 0.1296 | 200 | 0.8703 | 0.5120 |
| 0.8862 | 0.1944 | 300 | 0.8459 | 0.5139 |
| 0.805 | 0.2592 | 400 | 0.8338 | 0.5170 |
| 0.8476 | 0.3239 | 500 | 0.8211 | 0.5247 |
| 0.87 | 0.3887 | 600 | 0.8137 | 0.5197 |
| 0.7827 | 0.4535 | 700 | 0.8091 | 0.5251 |
| 0.8028 | 0.5183 | 800 | 0.8105 | 0.5224 |
| 0.7531 | 0.5831 | 900 | 0.8027 | 0.5255 |
| 0.7557 | 0.6479 | 1000 | 0.7992 | 0.5270 |
| 0.8452 | 0.7127 | 1100 | 0.8015 | 0.5224 |
| 0.7943 | 0.7775 | 1200 | 0.7905 | 0.5262 |
| 0.7649 | 0.8422 | 1300 | 0.7861 | 0.5274 |
| 0.7663 | 0.9070 | 1400 | 0.7874 | 0.5351 |
| 0.7498 | 0.9718 | 1500 | 0.7858 | 0.5351 |
| 0.7649 | 1.0366 | 1600 | 0.7848 | 0.5289 |
| 0.7859 | 1.1014 | 1700 | 0.7861 | 0.5285 |
| 0.7689 | 1.1662 | 1800 | 0.7864 | 0.5297 |
| 0.745 | 1.2310 | 1900 | 0.7821 | 0.5289 |
| 0.7447 | 1.2958 | 2000 | 0.7830 | 0.5340 |
| 0.8268 | 1.3605 | 2100 | 0.7796 | 0.5293 |
| 0.7596 | 1.4253 | 2200 | 0.7797 | 0.5336 |
| 0.7543 | 1.4901 | 2300 | 0.7741 | 0.5278 |
| 0.7558 | 1.5549 | 2400 | 0.7736 | 0.5266 |
| 0.7518 | 1.6197 | 2500 | 0.7725 | 0.5251 |
| 0.7845 | 1.6845 | 2600 | 0.7738 | 0.5367 |
| 0.763 | 1.7493 | 2700 | 0.7776 | 0.5262 |
| 0.7527 | 1.8141 | 2800 | 0.7756 | 0.5312 |
| 0.7533 | 1.8788 | 2900 | 0.7799 | 0.5262 |
| 0.7932 | 1.9436 | 3000 | 0.7757 | 0.5347 |
| 0.7522 | 2.0084 | 3100 | 0.7757 | 0.5332 |
| 0.7677 | 2.0732 | 3200 | 0.7738 | 0.5228 |
| 0.7804 | 2.1380 | 3300 | 0.7733 | 0.5274 |
| 0.7504 | 2.2028 | 3400 | 0.7742 | 0.5266 |
| 0.7793 | 2.2676 | 3500 | 0.7757 | 0.5266 |
| 0.7447 | 2.3324 | 3600 | 0.7726 | 0.5266 |
| 0.7647 | 2.3971 | 3700 | 0.7728 | 0.5343 |
| 0.7154 | 2.4619 | 3800 | 0.7704 | 0.5251 |
| 0.7742 | 2.5267 | 3900 | 0.7743 | 0.5312 |
| 0.7828 | 2.5915 | 4000 | 0.7758 | 0.5197 |
| 0.7383 | 2.6563 | 4100 | 0.7729 | 0.5297 |
| 0.765 | 2.7211 | 4200 | 0.7761 | 0.5270 |
| 0.7862 | 2.7859 | 4300 | 0.7764 | 0.5255 |
| 0.7602 | 2.8507 | 4400 | 0.7735 | 0.5270 |
| 0.7487 | 2.9155 | 4500 | 0.7758 | 0.5266 |
| 0.7447 | 2.9802 | 4600 | 0.7747 | 0.5297 |
| 0.7869 | 3.0450 | 4700 | 0.7756 | 0.5340 |
| 0.7655 | 3.1098 | 4800 | 0.7778 | 0.5301 |
| 0.7438 | 3.1746 | 4900 | 0.7717 | 0.5270 |
| 0.7754 | 3.2394 | 5000 | 0.7725 | 0.5320 |
| 0.7783 | 3.3042 | 5100 | 0.7685 | 0.5401 |
| 0.7806 | 3.3690 | 5200 | 0.7718 | 0.5289 |
| 0.7755 | 3.4338 | 5300 | 0.7700 | 0.5343 |
| 0.7698 | 3.4985 | 5400 | 0.7723 | 0.5270 |
| 0.7772 | 3.5633 | 5500 | 0.7733 | 0.5320 |
| 0.8048 | 3.6281 | 5600 | 0.7750 | 0.5266 |
| 0.7491 | 3.6929 | 5700 | 0.7732 | 0.5274 |
| 0.8085 | 3.7577 | 5800 | 0.7757 | 0.5243 |
| 0.7653 | 3.8225 | 5900 | 0.7739 | 0.5228 |
| 0.7702 | 3.8873 | 6000 | 0.7747 | 0.5197 |
| 0.7671 | 3.9521 | 6100 | 0.7711 | 0.5316 |
| 0.777 | 4.0168 | 6200 | 0.7739 | 0.5282 |
| 0.7451 | 4.0816 | 6300 | 0.7709 | 0.5324 |
| 0.7121 | 4.1464 | 6400 | 0.7706 | 0.5355 |
| 0.7714 | 4.2112 | 6500 | 0.7721 | 0.5370 |
| 0.7299 | 4.2760 | 6600 | 0.7697 | 0.5382 |
| 0.782 | 4.3408 | 6700 | 0.7759 | 0.5312 |
| 0.7759 | 4.4056 | 6800 | 0.7726 | 0.5270 |
| 0.7474 | 4.4704 | 6900 | 0.7669 | 0.5355 |
| 0.776 | 4.5351 | 7000 | 0.7721 | 0.5309 |
| 0.7693 | 4.5999 | 7100 | 0.7720 | 0.5316 |
| 0.7578 | 4.6647 | 7200 | 0.7731 | 0.5274 |
| 0.7431 | 4.7295 | 7300 | 0.7690 | 0.5351 |
| 0.7883 | 4.7943 | 7400 | 0.7726 | 0.5255 |
| 0.7794 | 4.8591 | 7500 | 0.7704 | 0.5255 |
| 0.7697 | 4.9239 | 7600 | 0.7730 | 0.5312 |
| 0.7373 | 4.9887 | 7700 | 0.7714 | 0.5328 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tomaszki/llama-7 | tomaszki | "2024-04-22T17:46:30Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T17:43:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sarahai/ruT5-base-summarizer | sarahai | "2024-03-19T18:11:21Z" | 631 | 5 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"summarizer",
"суммаризатор",
"text-generation-inference",
"russian text summarizer",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2024-02-28T09:53:20Z" | ---
license: apache-2.0
datasets:
- IlyaGusev/gazeta
language:
- ru
pipeline_tag: summarization
tags:
- summarization
- summarizer
- суммаризатор
- text-generation-inference
- russian text summarizer
widget:
- text: >-
83-летняя жительница Хабаровского края сутки простояла в трясине, отпугивая
кружащего вокруг нее медведя рыком. Об этом сообщает ТАСС со ссылкой на
источник в добровольческом поисково-спасательном отряде. Об инциденте стало
известно 5 августа, когда в правоохранительные органы обратились
родственники пенсионерки. По их словам, утром того дня она ушла в лес за
грибами из поселка Сита и пропала. На поиски пожилой женщины вышли местные
жители, участники спасательного отряда, охотники и сотрудники
патрульно-постовой службы. Они несколько раз видели следы медведей, их
лежанки, а также слышали хищников, бродящих неподалеку. Разыскать
пенсионерку удалось только 7 августа. «Ночью в лесу в нескольких метрах от
лежанки медведя было обнаружено ведро с грибами, поисковики услышали
нехарактерное для животных рычание и в овраге в ручье увидели бабушку. Рыком
женщина пыталась отпугнуть караулившего ее медведя», — рассказал
представитель поискового отряда. Когда спасатели освобождали жительницу
Приморья от оков трясины, рядом все еще ходил медведь — его спугнул лишь
подъехавший за поисковиками автомобиль. В итоге женщину отвезли в районную
больницу. Врачи заподозрили у нее травму черепа и отправили в медучреждение
Хабаровска, но там диагноз не подтвердился. По словам сотрудников больницы,
пострадавшая испытала сильный стресс, из-за которого у нее повысилась
сонливость, передает портал Life.ru. Позже пенсионерка рассказала, что
ходила по лесу в поисках грибов и угодила в илистое дно ручья, как вдруг
около нее начал кружить медведь. Чтобы отпугнуть дикого зверя, женщина стала
громко рычать. Ранее нападение медведя на человека произошло 24 июля в
Карелии. Там на территорию дачного участка в садово-огородническом
товариществе «Родник» прибежал медвежонок — его увидел хозяин дома и решил
погладить. Через некоторое время из леса вышла медведица и впилась зубами в
мужчину. Его госпитализировали с укусами в районе предплечья и
прооперировали. По словам главврача медучреждения, пациент находится в
состоянии средней тяжести, передает газета «Новости Костомукши». Жители
Карелии заявили, что хищники давно держат в страхе целые районы. Так,
медведи заполонили город Беломорск: их замечали на заводах, набережной,
около магазина и в порту. Прогулку одного из зверей сняла камера наружного
видеонаблюдения, расположенная на побережье. 3 августа местная жительница
якобы встретила медведя прямо у продуктового магазина. «Может уже что-нибудь
сделают с этим. У некоторых дети гуляют до 11, а медведи сейчас голодные и
бродят.
example_title: Summarization Example 2
library_name: transformers
metrics:
- accuracy
---
This Russian text summarizer was fine-tuned from the ai-forever/ruT5-base model and trained on a dataset of roughly 60k samples.
Example Usage:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "sarahai/ruT5-base-summarizer"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
device = torch.device("cpu")  # switch to torch.device("cuda") if a GPU is available
model = model.to(device)  # keep the model on the same device as the inputs
input_text = "Похоже, в Солнечной системе вскоре могут снова произойти изменения, связанные с переклассификацией известных науке тел. По мнению ученых, в ближайшем будущем возможно увеличение числа так называемых карликовых планет — тел, из-за которых возникает наибольшее число споров в астрономической среде. Чтобы относиться к карликовым планетам, по правилам Международного астрономического союза телу Солнечной системы необходимо удовлетворять сразу четырем критериям. Во-первых, оно должно вращаться вокруг Солнца, при этом оно не должно быть спутником одной из планет. Пространство вокруг тела должно быть «очищено» от других объектов, и, наконец, тело должно быть достаточно массивным, чтобы быть в состоянии гидростатического равновесия — иначе говоря, оно должно быть относительно круглым. Внутри Солнечной системы есть огромное число тел, удовлетворяющих первым трем критериям, особенно, находящихся внутри Главного пояса астероидов между орбитами Марса и Юпитера. Всем четырем критериям до последнего времени, как считалось, удовлетворяли пять тел Солнечной системы — транснептуновые объекты Плутон, Эрида, Макемаке, Хаумеа и наименьшая из известных карликовых планет Церера, находящаяся в поясе астероидов. Однако последние наблюдения показали, что к карликовым планетам стоит отнести еще одно тело – Гигею, четвертый по величине объект пояса астероидов после Цереры, Весты и Паллады. До недавнего времени этот астероид был мало изучен — астрономы знали, что он имеет продолговатую форму размером более 400 километров. Наблюдения, проведенные в Чили на одном из крупнейших телескопов мира Very Large Telescope (Очень большой телескоп), смогли качественно изменить представление о форме этого тела. «Благодаря уникальным возможностям инструмента SPHERE на телескопе VLT, остающемся одной из мощнейших строящих изображение систем в мире, мы смогли рассмотреть форму Гигеи, которая оказалась почти сферической, — пояснил астроном Пьер Вернацца из Астрофизической лаборатории в Марселе. — Благодаря этим снимкам Гигея может быть переклассифицирована в карликовую планету, самую маленькую в Солнечной системе». Согласно новым наблюдениям, диаметр Гигеи составляет свыше 430 километров, а период вращения вокруг собственной оси — 13,8 часа. Ученые и раньше знали, что поверхность Гигеи схожа с поверхностью Цереры и имеет такую же низкую плотность. Однако теперь стало очевидно, что Гигея почти такая же круглая, как и Церера, и потому имеет полное право тоже называться карликовой планетой. Немало удивило астрономов и другое обстоятельство — отсутствие на поверхности Гигеи крупных ударных кратеров. Дело в то, что примерно на одной орбите с Гигеей находится порядка 7 тыс. небольших астероидов схожего состава. Гигея — наиболее массивное из этих тел, принадлежащих к одному семейству. Считается, что вся группа образовалась порядка 2 миллиардов лет назад, когда удар крупного тела выбил из Гигеи множество осколков, вылетевших в окружающее пространство. Похожее событие пережила в далеком прошлом Веста, создав вокруг себя аналогичное семейство астероидов. Правда, на теле Весты до сих пор присутствуют следы этого бурного прошлого. Снимки 95% поверхности Гигеи позволили обнаружить лишь два мелких кратера на ее поверхности, которые не идут ни в какое сравнение с «ранами» на поверхности Гигеи. «Ни один из этих кратеров не мог быть вызван ударом, образовавшим семейство астероидов Гигеи, чей объем соответствует объему тела диаметром сто километров. 
Они слишком маленькие», — пояснил интригу Мирослав Броз, астроном из Карлова Университета в Чехии. На помощь в решении этой загадки пришло численное моделирование, часто используемое астрофизиками для описания эволюции различных астрономических систем. С его помощью астрономы показали, что округлая форма современной Гигеи и наличие рядом с ней роя астероидов — следствие сильнейшего лобового столкновения Гигеи с крупным телом, имевшим в поперечнике от 75 до 150 километров. Моделирование показало, что это соударение, произошедшее 2 млрд лет назад, почти полностью разнесло на части Гигею. Образовавшиеся после этого осколки, слипшись под действием гравитации, заново сформировали Гигею, дав ей почти идеально круглую форму. «Такие столкновения между двумя крупными телами в поясе астероидов уникальны для последних 3-4 миллиардов лет», — пояснил Равел Севечек, соавтор исследования , опубликованного в журнале Nature Astronomy. Ранее астрономы объявили об открытии, которое в очередной раз заставит авторов переписывать учебники астрономии. С конца 1990-х годов считалось, что планетой Солнечной системы, имеющей наибольшее число спутников, является Юпитер, у которого их в настоящее время насчитывается 79 штук. Вторым после него по этому показателю был Сатурн, третьим – Уран. Однако теперь рекордсменом стал именно Сатурн, которому астрономы добавили сразу 20 небольших, ранее неизвестных спутников. Теперь их у него как минимум 82 штуки. Новые спутники были открыты при помощи телескопа Subaru, расположенного на горе Мауна-Кеа на Гавайях. Обнаружить объекты позволили новые компьютерные алгоритмы, примененные для обработки данных, полученных еще в 2004-2004 годы." #your input in russian
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids, max_length=100, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True) #change according to your preferences
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets. |
LoneStriker/Yi-34B-GiftedConvo-merged-8.0bpw-h8-exl2 | LoneStriker | "2023-11-09T19:49:03Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-09T19:45:54Z" | ---
license: mit
datasets:
- NobodyExistsOnTheInternet/GiftedConvoBeforeEcons
---
Trained on over 20k instruction examples, all generated by GPT-4 or by humans.

Dataset features:

- 1000 long evolved conversations based on LIMA
- A subsection of correct PRM800k data
- A subsection of CamelAI's Physics and Chemistry data

The model was trained with QLoRA using the Axolotl framework.

The prompt format is Vicuna 1.1:
```
User: ...
Assistant: ...
``` |
Daemontatox/PathFinderAi3.0 | Daemontatox | "2025-01-10T04:36:45Z" | 74 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-31T15:18:21Z" | ---
base_model: Daemontatox/PathFinderAI3.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
model-index:
- name: PathFinderAi3.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 42.71
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 55.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 48.34
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 21.14
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 20.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.86
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FPathFinderAi3.0
name: Open LLM Leaderboard
---

# PathFinderAI 3.0
PathFinderAI 3.0 is a high-performance language model designed for advanced reasoning, real-time text analysis, and decision support. Fine-tuned for diverse applications, it builds upon the capabilities of Qwen2, optimized with cutting-edge tools for efficiency and performance.
## Features
- **Advanced Reasoning:** Fine-tuned for real-time problem-solving and logic-driven tasks.
- **Enhanced Performance:** Trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and the Hugging Face TRL library.
- **Multi-domain Capability:** Excels in education, research, business, legal, and healthcare applications.
- **Optimized Architecture:** Leverages Qwen2 for robust language understanding and generation.
## Training Details
- **Base Model:** Daemontatox/PathFinderAI3.0
- **Training Frameworks:** [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face’s TRL library.
- **Optimization:** Quantization-aware training for faster inference and deployment on resource-constrained environments.
## Deployment
This model is ideal for deployment on both cloud platforms and edge devices, including Raspberry Pi, utilizing efficient quantization techniques.
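As one possible recipe (a generic 4-bit bitsandbytes setup for CUDA machines, not an officially published configuration for this model), a quantized copy can be loaded as sketched below; CPU-only edge devices such as a Raspberry Pi would more typically run a converted GGUF build instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights via bitsandbytes
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Daemontatox/PathFinderAi3.0",
    quantization_config=bnb_config,
    device_map="auto",                      # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/PathFinderAi3.0")
```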
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## License
The model is open-sourced under the Apache 2.0 license.
## Usage
To load the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Daemontatox/PathFinderAI3.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
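Because this is an instruct-style Qwen2 model, prompting through the tokenizer's chat template (assuming the checkpoint ships one, as Qwen2 instruct variants normally do) is usually more reliable than raw text:

```python
# Same query via the chat template; reuses the model/tokenizer loaded above.
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```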
## Model Applications

PathFinderAI 3.0 is designed for:

- Real-time reasoning and problem-solving
- Text generation and comprehension
- Legal and policy analysis
- Educational tutoring
- Healthcare decision support
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__PathFinderAi3.0-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Daemontatox%2FPathFinderAi3.0&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 40.11|
|IFEval (0-Shot) | 42.71|
|BBH (3-Shot) | 55.54|
|MATH Lvl 5 (4-Shot)| 48.34|
|GPQA (0-shot) | 21.14|
|MuSR (0-shot) | 20.05|
|MMLU-PRO (5-shot) | 52.86|
|
ADG-2353/Reinforce-Pixelcopter | ADG-2353 | "2024-03-30T21:40:37Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-30T21:40:01Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.10 +/- 24.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sail-rvc/smg4 | sail-rvc | "2023-07-14T07:43:49Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:43:35Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# smg4
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:43:49
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
flooptherocket/DialogGPT-small-rick | flooptherocket | "2021-09-10T01:17:41Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
tags: conversational
---
@Rick from Rick and Morty GPT-2 Conversation Model
---
|
huggingtweets/johnowhitaker | huggingtweets | "2021-08-11T10:36:34Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/johnowhitaker/1628678191103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1165660747504005120/5nA4Go6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jonathan Whitaker</div>
<div style="text-align: center; font-size: 14px;">@johnowhitaker</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jonathan Whitaker.
| Data | Jonathan Whitaker |
| --- | --- |
| Tweets downloaded | 508 |
| Retweets | 45 |
| Short tweets | 13 |
| Tweets kept | 450 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2iuk80nc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johnowhitaker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xsei074) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xsei074/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/johnowhitaker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lctzz540/bunbo | lctzz540 | "2024-05-18T10:05:18Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ura-hcmut/ura-llama-7b",
"base_model:adapter:ura-hcmut/ura-llama-7b",
"region:us"
] | null | "2024-05-18T10:04:42Z" | ---
library_name: peft
base_model: ura-hcmut/ura-llama-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
TalentoTechIA/JuanDavidArdila | TalentoTechIA | "2025-01-21T01:27:41Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-21T01:11:17Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: JuanDavidArdila
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JuanDavidArdila
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
- Accuracy: 0.9850
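
Since the card does not yet document usage, here is a minimal, hedged inference sketch. The label set comes from the (unknown) fine-tuning dataset, so the predicted classes are whatever the training data defined:
```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="TalentoTechIA/JuanDavidArdila")
print(classifier("example.jpg"))  # local path or URL to an image
```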
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0509 | 3.8462 | 500 | 0.0326 | 0.9850 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nhoxinh/46bae12d-dc84-42e4-a2e0-fd8b7c1ad706 | nhoxinh | "2025-01-15T01:29:26Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T01:09:59Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46bae12d-dc84-42e4-a2e0-fd8b7c1ad706
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 016ff5466a568eaf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/016ff5466a568eaf_train_data.json
type:
field_instruction: role
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/46bae12d-dc84-42e4-a2e0-fd8b7c1ad706
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/016ff5466a568eaf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e8ec2a58-4a2d-41db-ba55-d5a6443e9dce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e8ec2a58-4a2d-41db-ba55-d5a6443e9dce
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 46bae12d-dc84-42e4-a2e0-fd8b7c1ad706
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8143
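
Since this repo holds a LoRA adapter, here is a minimal, hedged loading sketch with PEFT (standard API; device placement is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "nhoxinh/46bae12d-dc84-42e4-a2e0-fd8b7c1ad706"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```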
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6048 | 0.0573 | 200 | 1.8143 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LoneStriker/TenyxChat-8x7B-v1-6.0bpw-h6-exl2 | LoneStriker | "2024-01-19T22:47:12Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-19T22:32:44Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. Usage of the chat template, with a Hugging Face generation example, is shown below.
### Chat Template (Jinja)
```jinja
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, like Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
``` |
gubartz/st_scibert_abstruct | gubartz | "2023-08-04T17:49:54Z" | 41 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-08-04T17:49:43Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
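
For example, here is a hedged semantic-similarity sketch using the sentence-transformers utilities (keeping this template's `{MODEL_NAME}` placeholder, which you would replace with the actual model id):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace the placeholder with the actual model id

# Encode two sentences and compare them in the 768-dimensional embedding space.
embeddings = model.encode(
    ["A man is eating food.", "Someone is having a meal."],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity in [-1, 1]
```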
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 352 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 704,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B | DavidAU | "2025-02-15T03:45:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-15T00:14:14Z" | ---
library_name: transformers
tags:
- mergekit
- merge
base_model: []
---
<h2>DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
(source files will be uploaded when parameter count shows in upper left)
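If you do load the source weights directly, a minimal transformers sketch looks like the following (hedged: standard `AutoModelForCausalLM` loading with assumed dtype/device settings, not an official recipe from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread the 24B MoE across available devices
)
```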
NOTE: Links to GGUFs below.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model, (source, GGUF or a different quant), please review this document for critical parameter, sampler and advance sampler settings (for multiple AI/LLM aps).
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model the default settings (parameters, samplers, advanced samplers) must be set for "use case(s)" uses correctly. Some AI/LLM apps DO NOT have consistant default setting(s) which result in sub-par model operation. Like wise for Class 3/4 models (which operate somewhat to very differently than standard models) additional samplers and advanced samplers settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, sampler and advanced samplers that can be use FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B-gguf ] |
Reza2kn/XRAY | Reza2kn | "2024-10-10T02:31:22Z" | 7 | 4 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-10-10T02:28:57Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
chest XRAY image of 50 year old male with bilateral secondary PTB with
right upper atelectasis, right pleural adhesions, left compensatory
emphysema
output:
url: images/1728521628967__000016669_0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: XRAY
---
# XRAY
<Gallery />
## Model description
Tryin' somethin' here. v1
## Trigger words
You should use `XRAY` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Reza2kn/XRAY/tree/main) them in the Files & versions tab.
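
As a hedged sketch of loading this LoRA with diffusers (assumptions: a CUDA GPU with enough memory for FLUX.1-dev, and illustrative prompt/sampler settings rather than the author's):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Reza2kn/XRAY")  # attach this LoRA
pipe.to("cuda")

# Include the trigger word "XRAY" in the prompt.
image = pipe(
    "chest XRAY image of a 50 year old male, bilateral findings",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("xray.png")
```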
|
martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs | martimfasantos | "2024-06-09T13:24:43Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:openai/summarize_from_feedback",
"base_model:martimfasantos/tinyllama-1.1b-sum-sft-full",
"base_model:finetune:martimfasantos/tinyllama-1.1b-sum-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T12:39:25Z" | ---
license: apache-2.0
base_model: martimfasantos/tinyllama-1.1b-sum-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- openai/summarize_from_feedback
model-index:
- name: tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-sum-sft-full](https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-sft-full) on the openai/summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6411
- Rewards/chosen: -1.5955
- Rewards/rejected: -1.9066
- Rewards/accuracies: 0.6273
- Rewards/margins: 0.3112
- Logps/rejected: -253.4108
- Logps/chosen: -218.5612
- Logits/rejected: -2.1502
- Logits/chosen: -2.1697
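
For a quick smoke test, here is a minimal generation sketch. Hedged: the `TL;DR:` prompt format mirrors the summarize_from_feedback convention and is an assumption, not something documented by this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

post = "Long Reddit-style post to summarize goes here."
inputs = tokenizer(post + "\n\nTL;DR:", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```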
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6924 | 0.0689 | 400 | 0.6930 | 0.0011 | 0.0007 | 0.5390 | 0.0003 | -62.6755 | -58.9094 | -2.9687 | -2.9723 |
| 0.6891 | 0.1378 | 800 | 0.6909 | -0.0061 | -0.0108 | 0.5748 | 0.0047 | -63.8305 | -59.6239 | -2.9588 | -2.9622 |
| 0.6874 | 0.2068 | 1200 | 0.6876 | -0.0302 | -0.0427 | 0.5871 | 0.0124 | -67.0173 | -62.0385 | -2.9361 | -2.9395 |
| 0.676 | 0.2757 | 1600 | 0.6820 | -0.1057 | -0.1316 | 0.5850 | 0.0259 | -75.9065 | -69.5813 | -2.8942 | -2.8976 |
| 0.6751 | 0.3446 | 2000 | 0.6770 | -0.1715 | -0.2098 | 0.5890 | 0.0384 | -83.7308 | -76.1611 | -2.8434 | -2.8468 |
| 0.6518 | 0.4135 | 2400 | 0.6676 | -0.3727 | -0.4381 | 0.6069 | 0.0654 | -106.5637 | -96.2904 | -2.7893 | -2.7926 |
| 0.6695 | 0.4824 | 2800 | 0.6631 | -0.4734 | -0.5560 | 0.6141 | 0.0826 | -118.3500 | -106.3523 | -2.7415 | -2.7450 |
| 0.6467 | 0.5513 | 3200 | 0.6583 | -0.6700 | -0.7814 | 0.625 | 0.1113 | -140.8851 | -126.0199 | -2.6864 | -2.6902 |
| 0.6264 | 0.6203 | 3600 | 0.6586 | -0.6359 | -0.7384 | 0.6106 | 0.1024 | -136.5857 | -122.6100 | -2.6176 | -2.6225 |
| 0.6203 | 0.6892 | 4000 | 0.6523 | -0.7851 | -0.9183 | 0.6166 | 0.1332 | -154.5775 | -137.5248 | -2.5583 | -2.5642 |
| 0.6341 | 0.7581 | 4400 | 0.6487 | -0.8786 | -1.0259 | 0.6129 | 0.1473 | -165.3377 | -146.8752 | -2.4643 | -2.4723 |
| 0.6184 | 0.8270 | 4800 | 0.6454 | -1.0766 | -1.2481 | 0.6129 | 0.1716 | -187.5630 | -166.6730 | -2.4141 | -2.4242 |
| 0.609 | 0.8959 | 5200 | 0.6414 | -0.9919 | -1.1678 | 0.6164 | 0.1759 | -179.5278 | -158.2066 | -2.3970 | -2.4080 |
| 0.5977 | 0.9649 | 5600 | 0.6432 | -0.9166 | -1.0804 | 0.6273 | 0.1638 | -170.7888 | -150.6710 | -2.3933 | -2.4042 |
| 0.5845 | 1.0338 | 6000 | 0.6438 | -1.3686 | -1.6032 | 0.6245 | 0.2346 | -223.0724 | -195.8758 | -2.2640 | -2.2816 |
| 0.5789 | 1.1027 | 6400 | 0.6455 | -1.3882 | -1.6212 | 0.6164 | 0.2331 | -224.8725 | -197.8306 | -2.2428 | -2.2595 |
| 0.5681 | 1.1716 | 6800 | 0.6434 | -1.3348 | -1.5500 | 0.6129 | 0.2153 | -217.7540 | -192.4917 | -2.2435 | -2.2593 |
| 0.5602 | 1.2405 | 7200 | 0.6448 | -1.3673 | -1.5959 | 0.6234 | 0.2286 | -222.3391 | -195.7428 | -2.2210 | -2.2378 |
| 0.6357 | 1.3094 | 7600 | 0.6413 | -1.3975 | -1.6344 | 0.6125 | 0.2368 | -226.1876 | -198.7702 | -2.2034 | -2.2208 |
| 0.5491 | 1.3784 | 8000 | 0.6438 | -1.4655 | -1.7121 | 0.6055 | 0.2466 | -233.9599 | -205.5657 | -2.1906 | -2.2085 |
| 0.5537 | 1.4473 | 8400 | 0.6445 | -1.4375 | -1.6793 | 0.6259 | 0.2418 | -230.6812 | -202.7634 | -2.1797 | -2.1984 |
| 0.61 | 1.5162 | 8800 | 0.6405 | -1.0941 | -1.2946 | 0.6164 | 0.2005 | -192.2120 | -168.4266 | -2.2428 | -2.2579 |
| 0.523 | 1.5851 | 9200 | 0.6431 | -1.4596 | -1.7029 | 0.6289 | 0.2433 | -233.0398 | -204.9723 | -2.1570 | -2.1756 |
| 0.5412 | 1.6540 | 9600 | 0.6393 | -1.4228 | -1.6896 | 0.6315 | 0.2668 | -231.7097 | -201.2986 | -2.1513 | -2.1708 |
| 0.5368 | 1.7229 | 10000 | 0.6408 | -1.3358 | -1.5858 | 0.6236 | 0.2500 | -221.3330 | -192.5947 | -2.1730 | -2.1915 |
| 0.5064 | 1.7919 | 10400 | 0.6423 | -1.0625 | -1.2620 | 0.6215 | 0.1995 | -188.9488 | -165.2631 | -2.2150 | -2.2307 |
| 0.5268 | 1.8608 | 10800 | 0.6406 | -1.4254 | -1.6829 | 0.6341 | 0.2575 | -231.0404 | -201.5558 | -2.1644 | -2.1831 |
| 0.5384 | 1.9297 | 11200 | 0.6418 | -1.6486 | -1.9439 | 0.6364 | 0.2954 | -257.1440 | -223.8720 | -2.1299 | -2.1503 |
| 0.5734 | 1.9986 | 11600 | 0.6378 | -1.4356 | -1.7101 | 0.6362 | 0.2744 | -233.7563 | -202.5782 | -2.1624 | -2.1813 |
| 0.5302 | 2.0675 | 12000 | 0.6413 | -1.7064 | -2.0285 | 0.6292 | 0.3221 | -265.5970 | -229.6515 | -2.1257 | -2.1466 |
| 0.4961 | 2.1365 | 12400 | 0.6474 | -2.0075 | -2.3712 | 0.6387 | 0.3637 | -299.8690 | -259.7696 | -2.0958 | -2.1178 |
| 0.55 | 2.2054 | 12800 | 0.6415 | -1.5035 | -1.7868 | 0.6315 | 0.2833 | -241.4328 | -209.3660 | -2.1574 | -2.1761 |
| 0.5546 | 2.2743 | 13200 | 0.6425 | -1.6715 | -1.9874 | 0.6303 | 0.3159 | -261.4859 | -226.1615 | -2.1413 | -2.1612 |
| 0.5639 | 2.3432 | 13600 | 0.6409 | -1.5908 | -1.8980 | 0.6289 | 0.3072 | -252.5519 | -218.1001 | -2.1481 | -2.1675 |
| 0.5055 | 2.4121 | 14000 | 0.6384 | -1.4618 | -1.7629 | 0.6257 | 0.3010 | -239.0347 | -205.1979 | -2.1665 | -2.1857 |
| 0.5404 | 2.4810 | 14400 | 0.6405 | -1.6514 | -1.9790 | 0.6285 | 0.3276 | -260.6489 | -224.1589 | -2.1411 | -2.1613 |
| 0.5348 | 2.5500 | 14800 | 0.6418 | -1.6812 | -2.0090 | 0.6276 | 0.3278 | -263.6481 | -227.1385 | -2.1375 | -2.1578 |
| 0.5114 | 2.6189 | 15200 | 0.6408 | -1.5587 | -1.8632 | 0.6310 | 0.3046 | -249.0734 | -214.8810 | -2.1538 | -2.1732 |
| 0.5356 | 2.6878 | 15600 | 0.6405 | -1.5493 | -1.8534 | 0.6266 | 0.3041 | -248.0918 | -213.9473 | -2.1550 | -2.1743 |
| 0.4885 | 2.7567 | 16000 | 0.6406 | -1.5822 | -1.8916 | 0.6269 | 0.3094 | -251.9056 | -217.2328 | -2.1512 | -2.1707 |
| 0.5057 | 2.8256 | 16400 | 0.6410 | -1.5799 | -1.8883 | 0.6306 | 0.3084 | -251.5751 | -217.0051 | -2.1527 | -2.1720 |
| 0.5731 | 2.8946 | 16800 | 0.6412 | -1.5917 | -1.9021 | 0.6271 | 0.3104 | -252.9564 | -218.1854 | -2.1507 | -2.1702 |
| 0.4958 | 2.9635 | 17200 | 0.6412 | -1.5933 | -1.9040 | 0.6296 | 0.3107 | -253.1478 | -218.3473 | -2.1506 | -2.1702 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
SynthAIzer/finetuned-sentence-similarity | SynthAIzer | "2024-11-04T06:23:02Z" | 7 | 1 | null | [
"safetensors",
"mpnet",
"text classification",
"Transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"region:us"
] | text-classification | "2024-10-29T05:02:47Z" | ---
language:
- en
pipeline_tag: text-classification
tags:
- text classification
- Transformers
- bert
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ai4bharat/indic-conformer-600m | ai4bharat | "2025-03-13T11:03:46Z" | 0 | 0 | null | [
"automatic-speech-recognition",
"custom_code",
"region:us"
] | automatic-speech-recognition | "2025-03-13T10:53:49Z" | ---
pipeline_tag: automatic-speech-recognition
---
# **IndicConformer**
AI4Bharat's IndicConformers is a suite of ASR models built to deliver accurate speech-to-text conversion in all 22 official Indian languages. By leveraging cutting-edge deep learning techniques, these models provide precise transcriptions. As the country's first open-source ASR system covering such a vast array of languages, AI4Bharat Indic Conformer is a transformative tool for making technology more inclusive and accessible to all. IndicConformer is released under the MIT license.
## **Model Details**
- **Model Name:** IndicConformer-600M-Multi
- **Repository:** [ai4bharat/indic-conformer-600m-multilingual](https://huggingface.co/ai4bharat/indic-conformer-600m-multilingual)
- **Architecture:** Multilingual Conformer-based Hybrid CTC + RNNT ASR model
- **Parameter Size:** 600M
- **Languages Supported:** IN-22
---
## **Model Usage**
This model can be used to transcribe speech in various Indian languages. It supports two decoding strategies:
- **CTC (Connectionist Temporal Classification)**
- **RNNT (Recurrent Neural Network Transducer)**
### **Installation**
Ensure that you have `transformers` and `torchaudio` installed:
```bash
pip install transformers torchaudio
```
### **Inference Example**
```python
from transformers import AutoModel
import torchaudio
# Load the model
model = AutoModel.from_pretrained("ai4bharat/indic-conformer-600m-multilingual", trust_remote_code=True)
# Load an audio file
wav, sr = torchaudio.load("audio.flac")
target_sample_rate = 16000 # Expected sample rate
if sr != target_sample_rate:
resampler = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sample_rate)
wav = resampler(wav)
# Perform ASR with CTC decoding
transcription_ctc = model(wav, "hi", "ctc")
print("CTC Transcription:", transcription_ctc)
# Perform ASR with RNNT decoding
transcription_rnnt = model(wav, "hi", "rnnt")
print("RNNT Transcription:", transcription_rnnt)
```
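Building on the snippet above, here is a hedged batch-transcription sketch (the folder name and the stereo downmix step are assumptions; `model` is the `AutoModel` loaded earlier):
```python
from pathlib import Path

import torchaudio

# Transcribe every FLAC file in a (hypothetical) folder with CTC decoding.
for path in sorted(Path("audio_dir").glob("*.flac")):
    wav, sr = torchaudio.load(str(path))
    if sr != 16000:
        wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)(wav)
    if wav.shape[0] > 1:                     # downmix stereo to mono
        wav = wav.mean(dim=0, keepdim=True)
    print(path.name, model(wav, "hi", "ctc"))
```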
## **Supported Languages**
IndicConformer-600M-Multi is trained on all **22 officially recognized languages of India**:
- Assamese (`as`)
- Bengali (`bn`)
- Bodo (`brx`)
- Dogri (`doi`)
- Gujarati (`gu`)
- Hindi (`hi`)
- Kannada (`kn`)
- Konkani (`kok`)
- Kashmiri (`ks`)
- Maithili (`mai`)
- Malayalam (`ml`)
- Manipuri (`mni`)
- Marathi (`mr`)
- Nepali (`ne`)
- Odia (`or`)
- Punjabi (`pa`)
- Sanskrit (`sa`)
- Santali (`sat`)
- Sindhi (`sd`)
- Tamil (`ta`)
- Telugu (`te`)
- Urdu (`ur`)
## **Contact**
For any questions or feedback, please contact:
- Tahir Javed ([email protected])
- Kaushal Bhogale ([email protected]) |
zelk12/MT-Merge7-N-gemma-2-MT3g7MT2g7-9B | zelk12 | "2025-03-03T13:27:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT2-Gen7-gemma-2-9B",
"base_model:merge:zelk12/MT2-Gen7-gemma-2-9B",
"base_model:zelk12/MT3-Gen7-gemma-2-9B",
"base_model:merge:zelk12/MT3-Gen7-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-03T13:21:49Z" | ---
base_model:
- zelk12/MT2-Gen7-gemma-2-9B
- zelk12/MT3-Gen7-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
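For intuition, here is a minimal NumPy sketch of SLERP on two flattened weight vectors (illustrative only; mergekit's actual implementation works per-tensor and handles edge cases more carefully). With the `t: 0.25` used in the configuration below, the merged weights stay closer to the base model:
```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors."""
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    theta = np.arccos(dot)                  # angle between the two directions
    if theta < eps:                         # nearly parallel: plain lerp is safer
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```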
### Models Merged
The following models were included in the merge:
* [zelk12/MT2-Gen7-gemma-2-9B](https://huggingface.co/zelk12/MT2-Gen7-gemma-2-9B)
* [zelk12/MT3-Gen7-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen7-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen7-gemma-2-9B
- model: zelk12/MT2-Gen7-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT3-Gen7-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
jaymanvirk/ppo_pyramids_rnd | jaymanvirk | "2024-05-11T06:53:28Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-05-11T06:53:26Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jaymanvirk/ppo_pyramids_rnd
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ahmedelsayed/v1-bloom-1b1-sql-context | ahmedelsayed | "2024-02-27T00:17:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T00:17:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso07/5693867f-d516-4b71-be57-39b004714adb | lesso07 | "2025-02-09T22:45:58Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-02-09T22:27:53Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5693867f-d516-4b71-be57-39b004714adb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 5693867f-d516-4b71-be57-39b004714adb
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.0411 |
| 1.3108 | 0.0052 | 50 | 1.5953 |
| 1.2891 | 0.0104 | 100 | 1.5803 |
| 1.3557 | 0.0156 | 150 | 1.4890 |
| 1.3051 | 0.0208 | 200 | 1.4918 |
| 1.276 | 0.0260 | 250 | 1.4426 |
| 1.2771 | 0.0312 | 300 | 1.4338 |
| 1.1274 | 0.0364 | 350 | 1.3891 |
| 1.1713 | 0.0416 | 400 | 1.3654 |
| 1.1527 | 0.0468 | 450 | 1.3580 |
| 1.1735 | 0.0520 | 500 | 1.3522 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 | stefan-it | "2023-10-17T23:06:55Z" | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] | token-classification | "2023-10-13T05:42:53Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmByT5 as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
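A minimal sketch for local use (assumes the `ByT5Embedding` class from the hmBench repository linked above is importable in your environment; illustrative only):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Loading by repository id downloads the Flair model from the Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4"
)
sentence = Sentence("Des avions ennemis lancent dix-sept bombes sur Dunkerque .")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```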
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr0.00015 | [0.7743][1] | [0.7654][2] | [0.7604][3] | [0.7729][4] | [0.7736][5] | 76.93 ± 0.55 |
| bs8-e10-lr0.00016 | [0.7686][6] | [0.7648][7] | [0.7678][8] | [0.7653][9] | [0.7755][10] | 76.84 ± 0.38 |
| bs4-e10-lr0.00015 | [0.7757][11] | [0.7549][12] | [0.7693][13] | [0.7597][14] | [0.7696][15] | 76.58 ± 0.75 |
| bs4-e10-lr0.00016 | [0.7625][16] | [0.7575][17] | [0.769][18] | [0.7635][19] | [0.7647][20] | 76.34 ± 0.37 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
Pdro-ruiz/Reinforce-CartPole | Pdro-ruiz | "2025-02-14T17:04:44Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-14T17:04:39Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 155.30 +/- 6.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
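For reference, a minimal REINFORCE training sketch in the spirit of this agent (network size, discount factor and other hyperparameters are assumptions, not the exact course code):

```python
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, computed backwards through the episode
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Policy-gradient loss: maximize return-weighted log-probabilities
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```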
|
visdata/mm5 | visdata | "2025-02-18T15:02:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-18T14:56:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBloke/OpenOrca-Zephyr-7B-AWQ | TheBloke | "2023-12-04T23:50:27Z" | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:Weyaxi/OpenOrca-Zephyr-7B",
"base_model:quantized:Weyaxi/OpenOrca-Zephyr-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-12-04T23:33:16Z" | ---
base_model: Weyaxi/OpenOrca-Zephyr-7B
inference: false
license: cc-by-nc-4.0
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: OpenOrca Zephyr 7B
model_type: mistral
prompt_template: '<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenOrca Zephyr 7B - AWQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [OpenOrca Zephyr 7B](https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Yağız Çalık's OpenOrca Zephyr 7B](https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenOrca-Zephyr-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca-Zephyr-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca-Zephyr-7B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/OpenOrca-Zephyr-7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/OpenOrca-Zephyr-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `OpenOrca-Zephyr-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/OpenOrca-Zephyr-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."  # example system prompt

# Plain string (not an f-string), so the {placeholders} survive for .format()
prompt_template = '''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/OpenOrca-Zephyr-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/OpenOrca-Zephyr-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example system prompt
prompt_template = f'''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print("Model output:", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/OpenOrca-Zephyr-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example system prompt
prompt_template = f'''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's OpenOrca Zephyr 7B
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
|
anhtranhong/fingpt-mt_llama2-7b_lora_with_fiqa-qa-v1 | anhtranhong | "2024-02-24T10:13:58Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-02-24T06:36:28Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (reproduced as a code sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
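A sketch reproducing this configuration with `transformers` (illustrative only; parameter values mirror the list above):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```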
### Framework versions
- PEFT 0.5.0
|
tensorblock/Llama-medx_v2-GGUF | tensorblock | "2024-11-16T01:44:32Z" | 37 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"TensorBlock",
"GGUF",
"dataset:skumar9/orpo-mmlu",
"base_model:skumar9/Llama-medx_v2",
"base_model:quantized:skumar9/Llama-medx_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-14T20:59:05Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- skumar9/orpo-mmlu
tags:
- medical
- TensorBlock
- GGUF
base_model: skumar9/Llama-medx_v2
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## skumar9/Llama-medx_v2 - GGUF
This repo contains GGUF format model files for [skumar9/Llama-medx_v2](https://huggingface.co/skumar9/Llama-medx_v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-medx_v2-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-medx_v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Llama-medx_v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Llama-medx_v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Llama-medx_v2-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-medx_v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Llama-medx_v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Llama-medx_v2-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-medx_v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Llama-medx_v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Llama-medx_v2-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Llama-medx_v2-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-medx_v2-GGUF/blob/main/Llama-medx_v2-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-medx_v2-GGUF --include "Llama-medx_v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-medx_v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
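Once a file is downloaded, it can be run locally with llama.cpp. A minimal sketch (the binary name and paths depend on your build and are assumptions):

```shell
./llama-cli -m MY_LOCAL_DIR/Llama-medx_v2-Q4_K_M.gguf -p "What are common symptoms of anemia?" -n 256
```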
|
Augusto777/SWv2-DMAE-H-6-ps-clean-fix-U-40-Cross-4 | Augusto777 | "2025-02-18T15:42:06Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"swinv2",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2025-02-18T15:07:43Z" | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: SWv2-DMAE-H-6-ps-clean-fix-U-40-Cross-4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8076923076923077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWv2-DMAE-H-6-ps-clean-fix-U-40-Cross-4
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
- Accuracy: 0.8077
## Model description
More information needed
## Intended uses & limitations
More information needed
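A minimal inference sketch with the `transformers` image-classification pipeline (illustrative; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/SWv2-DMAE-H-6-ps-clean-fix-U-40-Cross-4",
)
print(classifier("example.jpg"))  # replace with a path to your image
```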
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3848 | 0.96 | 11 | 1.3618 | 0.3462 |
| 1.3878 | 2.0 | 23 | 1.3404 | 0.3462 |
| 1.3655 | 2.96 | 34 | 1.2639 | 0.5192 |
| 1.2532 | 4.0 | 46 | 1.1336 | 0.5769 |
| 1.0874 | 4.96 | 57 | 0.9641 | 0.6346 |
| 0.9273 | 6.0 | 69 | 0.8201 | 0.75 |
| 0.7103 | 6.96 | 80 | 0.7912 | 0.6154 |
| 0.6715 | 8.0 | 92 | 0.6121 | 0.8077 |
| 0.6077 | 8.96 | 103 | 0.6953 | 0.7115 |
| 0.5358 | 10.0 | 115 | 0.6212 | 0.75 |
| 0.516 | 10.96 | 126 | 0.6300 | 0.7692 |
| 0.4471 | 12.0 | 138 | 0.6625 | 0.75 |
| 0.4423 | 12.96 | 149 | 0.5968 | 0.8077 |
| 0.3955 | 14.0 | 161 | 0.6423 | 0.75 |
| 0.3662 | 14.96 | 172 | 0.6578 | 0.7885 |
| 0.3448 | 16.0 | 184 | 0.6242 | 0.7885 |
| 0.3201 | 16.96 | 195 | 0.6471 | 0.7692 |
| 0.3236 | 18.0 | 207 | 0.7658 | 0.75 |
| 0.2718 | 18.96 | 218 | 0.7001 | 0.7692 |
| 0.2885 | 20.0 | 230 | 0.7451 | 0.75 |
| 0.2486 | 20.96 | 241 | 0.7268 | 0.75 |
| 0.2727 | 22.0 | 253 | 0.7521 | 0.75 |
| 0.2366 | 22.96 | 264 | 0.7495 | 0.75 |
| 0.246 | 24.0 | 276 | 0.6777 | 0.7885 |
| 0.241 | 24.96 | 287 | 0.7473 | 0.7308 |
| 0.2616 | 26.0 | 299 | 0.7162 | 0.7692 |
| 0.2193 | 26.96 | 310 | 0.7607 | 0.7308 |
| 0.2047 | 28.0 | 322 | 0.8142 | 0.7115 |
| 0.215 | 28.96 | 333 | 0.8245 | 0.7308 |
| 0.2255 | 30.0 | 345 | 0.7968 | 0.7308 |
| 0.2126 | 30.96 | 356 | 0.7737 | 0.75 |
| 0.1991 | 32.0 | 368 | 0.7784 | 0.7692 |
| 0.186 | 32.96 | 379 | 0.8160 | 0.75 |
| 0.1826 | 34.0 | 391 | 0.7998 | 0.75 |
| 0.1942 | 34.96 | 402 | 0.7761 | 0.7692 |
| 0.1784 | 36.0 | 414 | 0.7894 | 0.7692 |
| 0.1652 | 36.96 | 425 | 0.7950 | 0.7885 |
| 0.1837 | 38.0 | 437 | 0.7954 | 0.7692 |
| 0.1801 | 38.26 | 440 | 0.7961 | 0.7692 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
GlenZhang/lora_model | GlenZhang | "2024-05-22T12:44:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-22T12:42:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** GlenZhang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shubhamgantayat/tiiuae-falcon-rw-1b-wet-strength-model | shubhamgantayat | "2023-10-10T13:10:47Z" | 195 | 0 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"generated_from_trainer",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-10T10:37:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiiuae-falcon-rw-1b-wet-strength-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiiuae-falcon-rw-1b-wet-strength-model
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
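A minimal generation sketch (illustrative; `trust_remote_code=True` is required because the Falcon base ships custom modeling code):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="shubhamgantayat/tiiuae-falcon-rw-1b-wet-strength-model",
    trust_remote_code=True,
)
print(generator("Wet strength of paper can be improved by", max_new_tokens=50))
```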
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
glif-loradex-trainer/insectagon_memecoins_default1 | glif-loradex-trainer | "2024-11-10T21:19:32Z" | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2024-11-10T21:18:19Z" | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1731273354813__000004000_0.jpg
text: Brett sitting alone in the jungle crying [SMfighter]
- output:
url: samples/1731273378540__000004000_1.jpg
text: Trump dancing with an angry face [SMfighter]
- output:
url: samples/1731273402047__000004000_2.jpg
text: An 8-bit super street fighter game with doge vs pepe [SMfighter]
- output:
url: samples/1731273425555__000004000_3.jpg
text: An exciting action scene featuring mew [SMfighter]
- output:
url: samples/1731273449740__000004000_4.jpg
text: a Japanese anime dramatic scene with toshi and a human woman [SMfighter]
- output:
url: samples/1731273473224__000004000_5.jpg
text: A man sitting and explaining life to a sad slerf [SMfighter]
base_model: black-forest-labs/FLUX.1-dev
trigger: SMfighter
instance_prompt: SMfighter
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# memecoins_default1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `insectagon`.
<Gallery />
## Trigger words
You should use `SMfighter` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/insectagon_memecoins_default1/tree/main) them in the Files & versions tab.
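A `diffusers` sketch for applying the LoRA (illustrative; requires access to the gated FLUX.1-dev base model and a large-memory GPU):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/insectagon_memecoins_default1")

# Remember to include the trigger word
image = pipe("An exciting action scene featuring mew [SMfighter]").images[0]
image.save("smfighter.png")
```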
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
sail-rvc/Selena_Gomez__RVC_-_1000_Epochs_ | sail-rvc | "2023-07-14T07:31:25Z" | 21 | 1 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:31:10Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Selena_Gomez__RVC_-_1000_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:31:25
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf | RichardErkhov | "2025-03-02T03:49:10Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-02T03:46:01Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SmolLM2-135M-Bebop-Reranker - GGUF
- Model creator: https://huggingface.co/jbaron34/
- Original model: https://huggingface.co/jbaron34/SmolLM2-135M-Bebop-Reranker/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SmolLM2-135M-Bebop-Reranker.Q2_K.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q2_K.gguf) | Q2_K | 0.08GB |
| [SmolLM2-135M-Bebop-Reranker.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [SmolLM2-135M-Bebop-Reranker.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [SmolLM2-135M-Bebop-Reranker.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [SmolLM2-135M-Bebop-Reranker.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [SmolLM2-135M-Bebop-Reranker.Q3_K.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q3_K.gguf) | Q3_K | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q3_K_L.gguf) | Q3_K_L | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.Q4_0.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q4_0.gguf) | Q4_0 | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.IQ4_NL.gguf) | IQ4_NL | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q4_K.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q4_K.gguf) | Q4_K | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q4_1.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q4_1.gguf) | Q4_1 | 0.09GB |
| [SmolLM2-135M-Bebop-Reranker.Q5_0.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q5_0.gguf) | Q5_0 | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q5_K_S.gguf) | Q5_K_S | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q5_K.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q5_K.gguf) | Q5_K | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q5_K_M.gguf) | Q5_K_M | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q5_1.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q5_1.gguf) | Q5_1 | 0.1GB |
| [SmolLM2-135M-Bebop-Reranker.Q6_K.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q6_K.gguf) | Q6_K | 0.13GB |
| [SmolLM2-135M-Bebop-Reranker.Q8_0.gguf](https://huggingface.co/RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf/blob/main/SmolLM2-135M-Bebop-Reranker.Q8_0.gguf) | Q8_0 | 0.13GB |
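A sketch for fetching and running one of the files above (tool and binary names are assumptions; adapt paths to your setup):

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download RichardErkhov/jbaron34_-_SmolLM2-135M-Bebop-Reranker-gguf \
  --include "SmolLM2-135M-Bebop-Reranker.Q4_K_M.gguf" --local-dir .
./llama-cli -m SmolLM2-135M-Bebop-Reranker.Q4_K_M.gguf -p "Hello" -n 64
```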
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farzadd/falcon-7b-test_finetune_QA_FAQ | farzadd | "2023-06-30T10:13:13Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-06-30T09:46:48Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
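A loading sketch that mirrors the values above (illustrative; the base model is not stated in this card, so `tiiuae/falcon-7b` is an assumption inferred from the repository name):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "farzadd/falcon-7b-test_finetune_QA_FAQ")
```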
### Framework versions
- PEFT 0.4.0.dev0
|
DavidNovikov/ddpm-butterflies-128 | DavidNovikov | "2022-08-08T21:05:11Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2022-08-08T20:22:07Z" | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (standard diffusers DDPMPipeline usage; not tested against this repo)
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("DavidNovikov/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the [`huggan/smithsonian_butterflies_subset`](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset, as noted above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/DavidNovikov/ddpm-butterflies-128/tensorboard?#scalars)
|
osanseviero/llama-or-potato | osanseviero | "2022-04-01T09:45:26Z" | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"llama-leaderboard",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-04-01T09:05:43Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
- llama-leaderboard
metrics:
- accuracy
model-index:
- name: llama-or-potato
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# llama-or-potato
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
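A minimal inference sketch (illustrative; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="osanseviero/llama-or-potato")
print(classifier("llama.jpg"))  # replace with a path to your image
```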
## Example Images
#### llamas

#### potato
 |
Essacheez/Phi-3-mini-4k-instruct-finetune-translation-10k-system-prompt-style | Essacheez | "2024-05-31T12:50:21Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-30T12:16:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
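In the meantime, a minimal sketch (assumed; `trust_remote_code=True` is inferred from the repo's `custom_code` tag):

```python
# Assumed usage sketch, pending the authors' own snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Essacheez/Phi-3-mini-4k-instruct-finetune-translation-10k-system-prompt-style"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")
```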
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso02/e6188fad-891e-47ee-8819-b42281b4c7fa | lesso02 | "2025-03-16T11:07:32Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | "2025-03-13T22:53:31Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e6188fad-891e-47ee-8819-b42281b4c7fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# e6188fad-891e-47ee-8819-b42281b4c7fa
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 12.3067
## Model description
More information needed
## Intended uses & limitations
More information needed
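Pending fuller documentation, loading the adapter presumably follows the standard PEFT pattern (a sketch, not taken from the training run):

```python
# Assumed usage sketch: attach this LoRA adapter to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
model = PeftModel.from_pretrained(base, "lesso02/e6188fad-891e-47ee-8819-b42281b4c7fa")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
```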
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 12.4558 |
| 12.3442 | 0.3891 | 500 | 12.3301 |
| 12.3328 | 0.7781 | 1000 | 12.3138 |
| 12.3262 | 1.1672 | 1500 | 12.3083 |
| 12.3238 | 1.5563 | 2000 | 12.3072 |
| 12.3249 | 1.9453 | 2500 | 12.3067 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bardsai/whisper-small-pl | bardsai | "2024-01-02T21:42:00Z" | 34 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"pl",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-08T10:46:18Z" | ---
language:
- pl
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
base_model: openai/whisper-small
model-index:
- name: Whisper Small PL
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: pl
split: test
metrics:
- type: wer
value: 14.57
name: WER
- type: wer_without_norm
value: 33.57
name: WER unnormalized
- type: cer
value: 4.02
name: CER
- type: mer
value: 14.37
name: MER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: facebook/voxpopuli
type: facebook/voxpopuli
config: pl
split: test
metrics:
- type: wer
value: 15.73
name: WER
- type: wer_without_norm
value: 34.51
name: WER unnormalized
- type: cer
value: 7.73
name: CER
- type: mer
value: 15.28
name: MER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: pl_pl
split: test
metrics:
- type: wer
value: 16.79
name: WER
- type: wer_without_norm
value: 35.69
name: WER unnormalized
- type: cer
value: 4.99
name: CER
- type: mer
value: 16.55
name: MER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small PL
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 and the FLEURS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.3571
- eval_wer: 14.8004
- eval_runtime: 2233.4204
- eval_samples_per_second: 3.714
- eval_steps_per_second: 0.232
- epoch: 4.03
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
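Pending the authors' documentation, a minimal usage sketch (assumed, not part of the original card):

```python
# Assumed usage sketch: Polish speech-to-text via the transformers
# ASR pipeline and this checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bardsai/whisper-small-pl")
print(asr("sample.wav")["text"])  # path to any audio file
```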
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
hucruz/test-automatic-411 | hucruz | "2023-04-20T17:17:20Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-20T16:42:24Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-automatic-411
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-automatic-411
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0323
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
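Pending the authors' documentation, a minimal usage sketch (assumed; the example sentence is illustrative only):

```python
# Assumed usage sketch for this Spanish text classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="hucruz/test-automatic-411")
print(clf("Quiero agendar una cita para mañana."))
```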
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 0.8407 | 0.8171 |
| No log | 2.0 | 94 | 0.4243 | 0.9268 |
| No log | 3.0 | 141 | 0.2154 | 0.9878 |
| No log | 4.0 | 188 | 0.1122 | 0.9878 |
| No log | 5.0 | 235 | 0.0755 | 0.9878 |
| No log | 6.0 | 282 | 0.0540 | 0.9878 |
| No log | 7.0 | 329 | 0.0409 | 1.0 |
| No log | 8.0 | 376 | 0.0363 | 1.0 |
| No log | 9.0 | 423 | 0.0339 | 1.0 |
| No log | 10.0 | 470 | 0.0323 | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
TheBloke/Synthia-70B-v1.2-GPTQ | TheBloke | "2023-09-27T12:48:17Z" | 21 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:migtissera/Synthia-70B-v1.2",
"base_model:quantized:migtissera/Synthia-70B-v1.2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-09-06T14:35:59Z" | ---
license: llama2
model_name: Synthia 70B v1.2
base_model: migtissera/Synthia-70B-v1.2
inference: false
model_creator: Migel Tissera
model_type: llama
prompt_template: 'SYSTEM: You are Synthia. As an AI intelligence, give helpful,
detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 70B v1.2 - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 70B v1.2](https://huggingface.co/migtissera/Synthia-70B-v1.2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Synthia 70B v1.2](https://huggingface.co/migtissera/Synthia-70B-v1.2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-70B-v1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-70B-v1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia
```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
<!-- README_GPTQ.md-provided-files end -->
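For orientation (not part of the original card), these knobs correspond roughly to the arguments of `transformers`' `GPTQConfig` when quantising a model yourself; the values below mirror the `main` branch row above:

```python
# Illustrative sketch only (assumed mapping): the table's quantisation
# parameters expressed as a transformers GPTQConfig.
from transformers import GPTQConfig

quantization_config = GPTQConfig(
    bits=4,              # "Bits" column
    group_size=-1,       # "GS" column; -1 means no grouping ("None")
    desc_act=True,       # "Act Order" column
    damp_percent=0.1,    # "Damp %" column
    dataset="wikitext2", # calibration dataset
)
```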
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Synthia-70B-v1.2-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Synthia-70B-v1.2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Synthia-70B-v1.2-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Synthia-70B-v1.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Synthia-70B-v1.2-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Synthia 70B v1.2
All Synthia models are uncensored. Please use them with caution and with the best of intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
|
9wimu9/mt5-large-v1 | 9wimu9 | "2023-06-08T17:31:20Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-08T16:27:10Z" | ```bash
!python /notebooks/seq_2_seq/run_seq2seq_qa.py \
  --model_name_or_path google/mt5-large \
  --dataset_name 9wimu9/SinQuAD \
  --context_column context \
  --question_column question \
  --answer_column answers \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 8 \
  --learning_rate 1e-3 \
  --num_train_epochs 1 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir mt5-large-v1 \
  --logging_steps 1 \
  --bf16 \
  --gradient_accumulation_steps 4 \
  --gradient_checkpointing True \
  --optim adafactor
```

Final training and evaluation metrics:

```json
{
  "eval/loss": 0.9061169624328612,
  "_timestamp": 1686240530.1377208,
  "_step": 370,
  "_runtime": 902.276704788208,
  "train/global_step": 369,
  "eval/steps_per_second": 7.803,
  "train/train_steps_per_second": 0.425,
  "_wandb.runtime": 902,
  "train/epoch": 1,
  "train/total_flos": 26479261148774400,
  "train/loss": 0.1842,
  "train/train_loss": 0.6567919482060565,
  "train/learning_rate": 0,
  "train/train_runtime": 868.8715,
  "eval/samples_per_second": 62.341,
  "train/train_samples_per_second": 13.588,
  "eval/runtime": 25.12
}
```
|
kk-aivio/cd52ae9a-9eb7-4cfe-bdad-e281fa438605 | kk-aivio | "2025-02-02T23:22:43Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | "2025-02-02T23:19:15Z" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd52ae9a-9eb7-4cfe-bdad-e281fa438605
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be5ab324e25875e7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be5ab324e25875e7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/cd52ae9a-9eb7-4cfe-bdad-e281fa438605
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/be5ab324e25875e7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21c17688-3386-4af0-a372-07bbb0501a28
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21c17688-3386-4af0-a372-07bbb0501a28
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cd52ae9a-9eb7-4cfe-bdad-e281fa438605
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8373
## Model description
More information needed
## Intended uses & limitations
More information needed
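Pending documentation, the adapter can presumably be attached (and optionally merged) in the standard PEFT way; the sketch below is not taken from the training run:

```python
# Assumed usage sketch: attach the LoRA adapter, optionally merging it
# into the base weights for adapter-free inference.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
model = PeftModel.from_pretrained(base, "kk-aivio/cd52ae9a-9eb7-4cfe-bdad-e281fa438605")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
```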
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 5.9562 |
| 5.2512 | 0.0726 | 50 | 4.8114 |
| 3.6303 | 0.1452 | 100 | 3.2205 |
| 2.9013 | 0.2178 | 150 | 2.0801 |
| 2.2513 | 0.2904 | 200 | 1.8373 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
deepnet111/sn9-3b-star-008 | deepnet111 | "2025-01-30T14:06:47Z" | 25 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-30T14:03:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
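In the meantime, a minimal sketch (assumed, pending the authors' own snippet):

```python
# Assumed usage sketch via the transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="deepnet111/sn9-3b-star-008")
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```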
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz | gokuls | "2023-06-22T17:15:57Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-06-20T09:59:23Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4340
- Accuracy: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.8468 | 0.08 | 10000 | 3.6051 | 0.4101 |
| 3.6009 | 0.16 | 20000 | 3.3734 | 0.4369 |
| 3.4559 | 0.25 | 30000 | 3.2348 | 0.4517 |
| 3.3578 | 0.33 | 40000 | 3.1395 | 0.4623 |
| 3.2803 | 0.41 | 50000 | 3.0632 | 0.4709 |
| 3.2157 | 0.49 | 60000 | 3.0010 | 0.4780 |
| 3.1503 | 0.57 | 70000 | 2.9554 | 0.4838 |
| 3.1044 | 0.66 | 80000 | 2.9104 | 0.4888 |
| 3.0703 | 0.74 | 90000 | 2.8759 | 0.4931 |
| 3.029 | 0.82 | 100000 | 2.8357 | 0.4976 |
| 2.9907 | 0.9 | 110000 | 2.8082 | 0.5013 |
| 2.9619 | 0.98 | 120000 | 2.7805 | 0.5042 |
| 2.9284 | 1.07 | 130000 | 2.7578 | 0.5072 |
| 2.9027 | 1.15 | 140000 | 2.7295 | 0.5103 |
| 2.8738 | 1.23 | 150000 | 2.7094 | 0.5133 |
| 2.8603 | 1.31 | 160000 | 2.6848 | 0.5160 |
| 2.829 | 1.39 | 170000 | 2.6667 | 0.5185 |
| 2.8106 | 1.47 | 180000 | 2.6479 | 0.5208 |
| 2.7942 | 1.56 | 190000 | 2.6304 | 0.5227 |
| 2.772 | 1.64 | 200000 | 2.6156 | 0.5249 |
| 2.7546 | 1.72 | 210000 | 2.5994 | 0.5270 |
| 2.7348 | 1.8 | 220000 | 2.5858 | 0.5290 |
| 2.725 | 1.88 | 230000 | 2.5728 | 0.5304 |
| 2.7116 | 1.97 | 240000 | 2.5587 | 0.5324 |
| 2.6953 | 2.05 | 250000 | 2.5476 | 0.5338 |
| 2.6883 | 2.13 | 260000 | 2.5339 | 0.5355 |
| 2.6768 | 2.21 | 270000 | 2.5231 | 0.5371 |
| 2.6622 | 2.29 | 280000 | 2.5097 | 0.5383 |
| 2.6499 | 2.38 | 290000 | 2.5026 | 0.5396 |
| 2.6361 | 2.46 | 300000 | 2.4916 | 0.5412 |
| 2.629 | 2.54 | 310000 | 2.4843 | 0.5421 |
| 2.6269 | 2.62 | 320000 | 2.4737 | 0.5432 |
| 2.6175 | 2.7 | 330000 | 2.4676 | 0.5443 |
| 2.5961 | 2.79 | 340000 | 2.4580 | 0.5457 |
| 2.5926 | 2.87 | 350000 | 2.4502 | 0.5468 |
| 2.5866 | 2.95 | 360000 | 2.4413 | 0.5480 |
| 2.5781 | 3.03 | 370000 | 2.4340 | 0.5488 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
GleamEyeBeast/Mandarin | GleamEyeBeast | "2022-02-07T04:25:26Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
HongKi08/SanHak | HongKi08 | "2025-03-28T06:33:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-28T05:33:47Z" | |
Helsinki-NLP/opus-mt-tc-bible-big-urj-deu_eng_nld | Helsinki-NLP | "2024-10-08T21:50:13Z" | 113 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"chm",
"de",
"en",
"et",
"fi",
"fkv",
"hu",
"izh",
"krl",
"kv",
"liv",
"mdf",
"mrj",
"myv",
"nl",
"se",
"sma",
"smn",
"udm",
"vot",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2024-10-08T21:50:02Z" | ---
library_name: transformers
language:
- chm
- de
- en
- et
- fi
- fkv
- hu
- izh
- krl
- kv
- liv
- mdf
- mrj
- myv
- nl
- se
- sma
- smn
- udm
- vot
tags:
- translation
- opus-mt-tc-bible
license: apache-2.0
model-index:
- name: opus-mt-tc-bible-big-urj-deu_eng_nld
results:
- task:
name: Translation multi-multi
type: translation
args: multi-multi
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-multi
metrics:
- name: BLEU
type: bleu
value: 46.1
- name: chr-F
type: chrf
value: 0.65088
---
# opus-mt-tc-bible-big-urj-deu_eng_nld
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Uralic languages (urj) to German, English and Dutch (deu+eng+nld).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-18
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): chm est fin fkv hun izh koi kom kpv krl liv mdf mrj myv sma sme smn udm vot vro
- Target Language(s): deu eng nld
- Valid Target Language Labels: >>deu<< >>eng<< >>nld<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/urj-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<`
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    ">>deu<< Jobb meghalni, mint úgy élni.",
    ">>eng<< Az algák miatt ilyen színű a tó."
]

model_name = "pytorch-models/opus-mt-tc-bible-big-urj-deu_eng_nld"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Es ist besser zu sterben, als so zu leben.
# Because of the algae, the lake is such a color.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-urj-deu_eng_nld")
print(pipe(">>deu<< Jobb meghalni, mint úgy élni."))
# expected output: Es ist besser zu sterben, als so zu leben.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/urj-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.65088 | 46.1 | 10000 | 78967 |
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Wed Oct 9 00:49:52 EEST 2024
* port machine: LM0-400-22516.local
|
PrunaAI/TheDrummer-Llama-3SOME-8B-v1-QUANTO-float8bit-smashed | PrunaAI | "2024-08-02T16:03:00Z" | 3 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:BeaverLegacy/Llama-3SOME-8B-v1",
"base_model:finetune:BeaverLegacy/Llama-3SOME-8B-v1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T21:14:50Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: TheDrummer/Llama-3SOME-8B-v1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto; a sketch of the underlying call is shown after this list.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo TheDrummer/Llama-3SOME-8B-v1 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized (smashed) model and the original model's tokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/TheDrummer-Llama-3SOME-8B-v1-QUANTO-float8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("TheDrummer/Llama-3SOME-8B-v1")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model TheDrummer/Llama-3SOME-8B-v1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
nlpai-lab/kullm-polyglot-5.8b-v2 | nlpai-lab | "2023-06-07T06:45:30Z" | 2,351 | 23 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:nlpai-lab/kullm-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-07T05:30:10Z" | ---
license: apache-2.0
datasets:
- nlpai-lab/kullm-v2
language:
- ko
---
# KULLM-Polyglot-5.8B-v2
This model is a parameter-efficient fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on the KULLM v2 dataset.
Detailed code is available at the [KULLM GitHub Repository](https://github.com/nlpai-lab/KULLM)
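## Usage
A minimal inference sketch with 🤗 Transformers (the Korean prompt below is just an example; the exact instruction template used in fine-tuning is in the KULLM repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpai-lab/kullm-polyglot-5.8b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example prompt (instruction formatting is an assumption; check the KULLM repo)
inputs = tokenizer("고려대학교에 대해 알려줘.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```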
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 128
- seed: 42
- distributed_type: multi-GPU (A100 80G)
- num_devices: 4
- gradient_accumulation_steps: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3 |
bluebird089/videomae-base-finetuned-ai4life-subset | bluebird089 | "2024-02-27T17:36:20Z" | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-02-27T09:58:51Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ai4life-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ai4life-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 656
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
kk-aivio/8538c8be-eaf0-43a6-a74e-2d13cd2744cc | kk-aivio | "2025-02-03T09:34:53Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | "2025-02-03T09:30:39Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8538c8be-eaf0-43a6-a74e-2d13cd2744cc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 8538c8be-eaf0-43a6-a74e-2d13cd2744cc
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf | RichardErkhov | "2025-02-26T19:15:58Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T18:13:07Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gemma-2-2b-it-ag-merged-model - GGUF
- Model creator: https://huggingface.co/HugoVoxx/
- Original model: https://huggingface.co/HugoVoxx/Gemma-2-2b-it-ag-merged-model/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Gemma-2-2b-it-ag-merged-model.Q2_K.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q2_K.gguf) | Q2_K | 1.15GB |
| [Gemma-2-2b-it-ag-merged-model.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [Gemma-2-2b-it-ag-merged-model.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [Gemma-2-2b-it-ag-merged-model.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [Gemma-2-2b-it-ag-merged-model.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [Gemma-2-2b-it-ag-merged-model.Q3_K.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q3_K.gguf) | Q3_K | 1.36GB |
| [Gemma-2-2b-it-ag-merged-model.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [Gemma-2-2b-it-ag-merged-model.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [Gemma-2-2b-it-ag-merged-model.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [Gemma-2-2b-it-ag-merged-model.Q4_0.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q4_0.gguf) | Q4_0 | 1.52GB |
| [Gemma-2-2b-it-ag-merged-model.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [Gemma-2-2b-it-ag-merged-model.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [Gemma-2-2b-it-ag-merged-model.Q4_K.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q4_K.gguf) | Q4_K | 1.59GB |
| [Gemma-2-2b-it-ag-merged-model.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [Gemma-2-2b-it-ag-merged-model.Q4_1.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q4_1.gguf) | Q4_1 | 1.64GB |
| [Gemma-2-2b-it-ag-merged-model.Q5_0.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q5_0.gguf) | Q5_0 | 1.75GB |
| [Gemma-2-2b-it-ag-merged-model.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [Gemma-2-2b-it-ag-merged-model.Q5_K.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q5_K.gguf) | Q5_K | 1.79GB |
| [Gemma-2-2b-it-ag-merged-model.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [Gemma-2-2b-it-ag-merged-model.Q5_1.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q5_1.gguf) | Q5_1 | 1.87GB |
| [Gemma-2-2b-it-ag-merged-model.Q6_K.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q6_K.gguf) | Q6_K | 2.0GB |
| [Gemma-2-2b-it-ag-merged-model.Q8_0.gguf](https://huggingface.co/RichardErkhov/HugoVoxx_-_Gemma-2-2b-it-ag-merged-model-gguf/blob/main/Gemma-2-2b-it-ag-merged-model.Q8_0.gguf) | Q8_0 | 2.59GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingtweets/bichebuni | huggingtweets | "2021-05-21T20:37:06Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/bichebuni/1614096170963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1356414477143519232/H2T46KhD_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ellie 🐰 🤖 AI Bot </div>
<div style="font-size: 15px">@bichebuni bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@bichebuni's tweets](https://twitter.com/bichebuni).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1578 |
| Retweets | 559 |
| Short tweets | 216 |
| Tweets kept | 803 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jluupd2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bichebuni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2a0ttba9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2a0ttba9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bichebuni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Nubletz/bertwiki-simplestyle-split-embedding-recon-53 | Nubletz | "2025-01-21T22:09:42Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-01-21T22:09:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF | MaziyarPanahi | "2024-05-21T18:37:50Z" | 62 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B",
"base_model:quantized:automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B"
] | text-generation | "2024-05-21T18:05:26Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:Kukedlc/NeuralSynthesis-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF
base_model: automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B)
## Description
[MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF) contains GGUF format model files for [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
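For a quick start, here is a minimal llama-cpp-python sketch (the quant filename is an assumption — substitute whichever GGUF file you downloaded from this repo):

```python
from llama_cpp import Llama

# Path to a downloaded quant file (filename is an assumption; use the file you fetched)
llm = Llama(
    model_path="Ognoexperiment27multi_verse_modelNeuralsynthesis-7B.Q4_K_M.gguf",
    n_ctx=4096,
)
result = llm("Explain in one sentence what the GGUF format is.", max_tokens=64)
print(result["choices"][0]["text"])
```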
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
visdata/mmm0 | visdata | "2025-02-20T17:07:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-20T17:02:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adarsh12x/mistral_7b_samantha | adarsh12x | "2024-02-25T21:51:46Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:samantha-data",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-02-25T21:51:38Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- samantha-data
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral_7b_samantha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_samantha
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the samantha-data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7235 | 0.4 | 10 | 1.8474 |
| 1.2154 | 0.8 | 20 | 1.7551 |
| 1.1009 | 1.2 | 30 | 1.6884 |
| 0.981 | 1.6 | 40 | 1.6751 |
| 0.9463 | 2.0 | 50 | 1.6704 |
| 0.8443 | 2.4 | 60 | 1.7248 |
| 0.8128 | 2.8 | 70 | 1.7101 |
| 0.8158 | 3.2 | 80 | 1.7826 |
| 0.7087 | 3.6 | 90 | 1.7805 |
| 0.7091 | 4.0 | 100 | 1.7573 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e5_member_shadow36 | FounderOfHuggingface | "2024-01-11T08:17:57Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-11T08:17:54Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
PeterKruger/AutoBench | PeterKruger | "2025-03-03T23:45:38Z" | 0 | 0 | null | [
"code",
"en",
"license:mit",
"region:us"
] | null | "2025-02-20T21:59:16Z" | ---
license: mit
language:
- en
tags:
- code
---
# AutoBench 1.0 - Collective-LLM-as-a-Judge Benchmark System
**Table of Contents**
* [Overview](#overview)
* [Key Features of AutoBench v1.0](#key-features-of-autobench-v10)
* [Getting Started](#getting-started)
* [Prerequisites](#prerequisites)
* [Google Cloud Authentication for Vertex AI](#google-cloud-authentication-for-vertex-ai)
* [API Keys](#api-keys)
* [Configuration](#configuration)
* [Running the Benchmark](#running-the-benchmark)
* [Output Files](#output-files)
* [Customization](#customization)
* [Limitations](#limitations)
* [Learn more and contribute](#learn_more_and_contribute)
* [License](#license)
* [Contact](#contact)
## Overview
AutoBench 1.0 is an innovative and automated benchmark system designed to evaluate the performance of Large Language Models (LLMs) with unprecedented dynamism, flexibility, and cost-effectiveness. Leveraging the "Collective-LLM-as-a-Judge" approach, AutoBench uses LLMs themselves to collectively assess the quality of questions and answers, overcoming the limitations of traditional static benchmarks and human-biased evaluations.
The system is designed to be:
* **Correlated with Established Benchmarks:** Achieves high correlations with Chatbot Arena, MMLU, and AAQI, demonstrating alignment with human evaluations and broader AI capabilities.
* **Cost-Effective:** With a sub-$100 budget and a runtime of roughly 5-10 hours, it provides a highly accurate ranking of 20 models, making large-scale and frequent benchmarking feasible.
* **Dynamic and Hard to Hack:** Dynamically generated questions in each iteration prevent "benchmark gaming" and ensure models demonstrate genuine general abilities.
* **Scalable:** Designed for continuous monitoring of LLM progress and future-proofed for evolving AI capabilities.
* **Granular:** Provides detailed performance breakdowns across various topics (Math, General Culture, Logics, Code, Science, History, etc.).
**For an intro explanation of the methodology, please refer to the Hugging Face Blog Post: [Escape the Benchmark Trap: AutoBench – the Collective-LLM-as-a-Judge System for Evaluating AI models (ASI-Ready!)](https://huggingface.co/blog/PeterKruger/autobench).**
**For a simple demo, try the Hugging Faces Spaces implementation of the benchmark: [AutoBench 1.0 Demo](https://huggingface.co/spaces/PeterKruger/AutoBench).**
**For a detailed explanation of the methodology, please refer to the [Detailed Methodology Document](AutoBench_1_0_Detailed_Methodology_Document.pdf).**
## Key Features of AutoBench 1.0
* **Dynamic and Adaptive:** The system generates new questions for each iteration, making it resistant to gaming and adaptable to the rapid evolution of LLMs.
* **Reduced Human Bias – and Defined LLM-as-a-Judge Perspective:** Minimizes human subjectivity by using LLMs for evaluation, embracing inherent "model bias" as a perspective relative to the current LLM ecosystem.
* **Scalability and Cost-Effectiveness:** Significantly reduces the cost and time associated with traditional human evaluation, enabling frequent and large-scale benchmark updates.
* **Granular Topic-Specific Insights:** Offers detailed performance breakdowns across various topics, providing a nuanced understanding of LLM strengths and weaknesses.
* **Iterative Refinement and Weighting Stability:** Employs an iterative weighting mechanism that dynamically adjusts model weights based on performance, ensuring stability and convergence over time.
* **Well-Defined Question Quality Control:** Implements a transparent and rigorous approach to question quality control with quantifiable acceptance criteria, ensuring high-quality and relevant questions.
## Getting Started
### Prerequisites
* **Python 3.7+**
* **Required Python Libraries:**
```bash
pip install openai together anthropic google-cloud-aiplatform pandas numpy google-api-core
```
Ensure you have the latest versions, especially for `openai` (version 1.0.0 or later is recommended). Note that `concurrent.futures`, `re`, `time`, and `csv` are part of the Python standard library and do not need to be installed.
* **Google Colab Environment (Recommended):** While the script can be adapted to other environments, it is primarily designed to run in Google Colab due to the use of Colab Secrets Manager for API key security and Vertex AI integration.
* **Google Cloud Account and Vertex AI API Enabled:** To utilize Gemini models through Vertex AI, you need a Google Cloud account with the Vertex AI API enabled.
### Google Cloud Authentication for Vertex AI
To use Gemini models via Vertex AI, you must authenticate your Google Cloud account within the Colab environment. **Before running the benchmark, follow these steps:**
1. **Download your Vertex AI API authentication JSON file:**
* Go to the [Google Cloud Console](https://console.cloud.google.com/).
* Navigate to "IAM & Admin" > "Service Accounts".
* Create or select an existing service account with the necessary Vertex AI permissions.
* Create a new JSON key for the service account and download it to your local machine.
2. **Upload the JSON key file to your Colab environment:**
* In your Google Colab notebook, use the file upload button in the sidebar (folder icon) to upload the JSON key file to the `/content/` directory. **Ensure you upload it to `/content/`**.
3. **Run the following authentication code in your Colab notebook *before* running any other cells:**
```python
#Remember to upload your Vertex AI API auth json file in /content
#Run this before anything else
import os
from google.colab import auth
# 1. Colab User Authentication (Interactive)
auth.authenticate_user()
# 2. Service Account Authentication (using the JSON file)
# Make SURE the file is uploaded to /content/ and the filename is correct!
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/content/YOUR_VERTEX_AI_KEY_FILE.json" # Replace with your actual JSON filename
# ... rest of your code ...
```
**Important:**
* **Replace `YOUR_VERTEX_AI_KEY_FILE.json` with the actual filename of your downloaded JSON key file.**
* **Make sure the JSON file is uploaded to the `/content/` directory in Colab.**
* **Run this authentication code *only once* at the beginning of your Colab session.**
* **For initial setup, running `auth.authenticate_user()` (interactive authentication) is recommended to verify your Google Cloud connection before relying on service account authentication.** You can comment it out after confirming it works.
### API Keys
AutoBench 1.0 requires API keys for accessing the following services:
* **OpenAI:** For models like `gpt-4o`, `gpt-3.5-turbo`, and Grok models (accessed through OpenAI API).
* **Together AI:** For a wide range of open-source models like Llama 3, Gemma, Mistral, and Qwen.
* **Anthropic:** For Claude 3 models.
* **Nebius:** For DeepSeek models (accessed through Nebius API, similar to OpenAI).
* **Vertex AI (Google Cloud):** For Gemini models.
**Securely manage your API keys using Google Colab Secrets Manager:**
1. In your Google Colab notebook, navigate to the "Secrets" panel (key icon in the sidebar).
2. Add the following secrets, replacing `YOUR_API_KEY` with your actual API keys:
* `OpenAI_API_key`: Your OpenAI API key.
* `TOGETHER_API_KEY`: Your Together AI API key.
* `ANTHROPIC_API_KEY`: Your Anthropic API key.
* `GROK_API_KEY`: Your Grok API key (accessed through OpenAI API, requires Grok access).
* `NEBIUS_API_KEY`: Your Nebius API key.
**The script is configured to retrieve these keys using `google.colab.userdata.get()`.**
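For reference, key retrieval inside the notebook looks like this (a sketch using the secret names listed above):

```python
from google.colab import userdata

# Secret names must match the entries you created in the Colab Secrets panel
openai_api_key = userdata.get('OpenAI_API_key')
together_api_key = userdata.get('TOGETHER_API_KEY')
anthropic_api_key = userdata.get('ANTHROPIC_API_KEY')
grok_api_key = userdata.get('GROK_API_KEY')
nebius_api_key = userdata.get('NEBIUS_API_KEY')
```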
### Configuration
The core configurations for AutoBench 1.0 are defined directly within the Python script (`llm_benchmark.py`) for easy modification. Key configuration sections include:
* **`model_config` Dictionary:** This dictionary defines each LLM used in the benchmark (see the sketch after this list), including:
* `type`: API provider (`"gemini"`, `"openai"`, `"together"`, `"anthropic"`, `"nebius"`, `"grok"`).
* `name`: Model identifier (e.g., `"gpt-4o-2024-11-20"`, `"gemini-2.0-flash-001"`).
* `role`: Model's designated role in the benchmark (`"answer"`, `"rank"`, or `"both"`).
* **Model Lists:** `openai_models`, `gemini_models`, `together_models`, `anthropic_models`, `nebius_models`, `grok_models` lists specify which models from `model_config` will be actively benchmarked.
* **`topics` List:** Defines the list of topics used for question generation (e.g., `["math", "history", "creative writing", ...]`).
* **`difficulties` List:** Defines the difficulty levels for questions (e.g., `["a very simple", "a simple", "a", "a difficult", "a very difficult"]`).
* **`difficulty_probabilities` Dictionary:** Controls the distribution of question difficulty levels during benchmark iterations.
* **Global Parameters:** Various parameters at the beginning of the script (e.g., `time_sleep`, `base_temp`, `question_temp`, `answer_temp`, token limits, thresholds) can be adjusted to fine-tune the benchmark.
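For orientation, the structures described above look roughly like this (entries are illustrative samples, not the script's actual configuration):

```python
# Illustrative sample of the benchmark configuration (not the script's full lists)
model_config = {
    "gpt-4o-2024-11-20":    {"type": "openai", "name": "gpt-4o-2024-11-20", "role": "both"},
    "gemini-2.0-flash-001": {"type": "gemini", "name": "gemini-2.0-flash-001", "role": "both"},
}

topics = ["math", "history", "creative writing"]
difficulties = ["a very simple", "a simple", "a", "a difficult", "a very difficult"]
difficulty_probabilities = {
    "a very simple": 0.1, "a simple": 0.2, "a": 0.4,
    "a difficult": 0.2, "a very difficult": 0.1,
}
```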
**To customize the benchmark:**
1. **Edit the `model_config` dictionary** to add, remove, or modify models, their types, names, and roles.
2. **Adjust the model lists** (`openai_models`, etc.) to select the specific models you want to include in the benchmark run.
3. **Modify the `topics` and `difficulties` lists** to customize the benchmark's scope and challenge.
4. **Tweak global parameters** to adjust temperature, token limits, timeouts, and other settings as needed.
## Running the Benchmark
1. **Open the `llm_benchmark.py` file in Google Colab.**
2. **Ensure you have set up your API keys in Colab Secrets Manager and authenticated with Google Cloud for Vertex AI as described above.**
3. **Install the required packages:** run `!pip install openai numpy pandas together anthropic google-cloud-aiplatform` in a notebook cell to load all required packages.
4. **Review and customize the configuration sections in the script if needed.**
5. **Run all cells in the notebook sequentially.**
The script will execute the benchmark iterations, dynamically generate questions and answers, rank model performance, and update model weights iteratively. Progress and results will be printed to the Colab output, and detailed results will be saved to CSV files.
## Output Files
AutoBench 1.0 generates the following output files:
* **`llm_benchmark_results.csv`:** This file contains aggregated benchmark results, including:
* Average rank for each model across all iterations.
* Topic-specific average ranks, providing granular performance insights.
* **`llm_benchmark_iteration_results.csv` (or similar, timestamped):** This file provides detailed results for each iteration, including:
* Iteration number, topic, and difficulty.
* Generated question prompt and question.
* Answers generated by each model.
* Ranks assigned by judging models for each answer.
* Average rank for each model in each iteration.
* Durations for answer generation and ranking processes.
* **`model_weights_out.csv`:** This file saves the model weights at the end of the benchmark run. These weights are updated iteratively based on model performance and can be used as input for subsequent benchmark runs (by renaming it to `weights_in.csv` or updating the `old_weights_file` variable) to enable continuous learning and adaptation of the benchmark.
* **`weights_in.csv` (or similar, input weights file):** If you provide a file with pre-existing model weights (e.g., from a previous run), this file will be loaded at the beginning of the benchmark to initialize model weights. PLEASE NOTE: the system will recognize if new models have been introduced and initialize weights and ranks accordingly. A sketch of reusing the output weights for a subsequent run follows below.
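To carry the learned weights into a new run, copy the output file to the expected input name (a sketch; adjust the paths if you changed the `old_weights_file` variable):

```python
import shutil

# Reuse the weights from the previous run as the starting point for the next one
shutil.copyfile("model_weights_out.csv", "weights_in.csv")
```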
## Customization
AutoBench 1.0 is highly customizable. You can:
* **Add or remove LLMs** by modifying the `model_config` dictionary and the model lists.
* **Change the topics and difficulty levels** to focus on specific areas of LLM performance.
* **Adjust prompts** for question generation and ranking to refine the benchmark's focus and evaluation criteria.
* **Modify the number of iterations (`t`)** to control the benchmark's runtime and robustness.
* **Fine-tune parameters** like temperature, token limits, timeouts, and thresholds to optimize the benchmark for your specific needs and resources.
## Limitations
AutoBench 1.0, while offering significant advantages, also has limitations and potential biases inherent to the LLM-as-a-Judge approach:
* **LLM-as-a-Judge Bias:** The benchmark inherently reflects the biases of the LLMs used as judges. Results are relative to the "view" of the current set of LLMs, not necessarily against an absolute, objective standard.
* **Question Quality Control Dependency:** The quality of the benchmark depends on the ability of the LLM judges to effectively evaluate question quality.
* **Ranking Granularity:** The 1-5 ranking scale may not capture subtle differences in answer quality, potentially losing nuance between high-performing models.
* **Potential Suboptimality of Weighting:** The cumulative average weighting mechanism may converge to a locally optimal but not globally optimal state.
* **Black Box Nature of LLMs:** The internal decision-making processes of the judging LLMs remain opaque, limiting full transparency of the evaluation process.
**Please refer to the [Detailed Methodology Document](AutoBench_1_0_Detailed_Methodology_Document.pdf) for a more in-depth discussion of limitations and potential biases.**
## Learn more and contribute
* **Start from our blog post on Hugging Face**: [Escape the Benchmark Trap: AutoBench – the Collective-LLM-as-a-Judge System for Evaluating AI models (ASI-Ready!)](https://huggingface.co/blog/PeterKruger/autobench)
* **Explore the code and data:** [Hugging Face AutoBench 1.0 Repository](https://huggingface.co/PeterKruger/AutoBench)
* **Try our Demo on Spaces:** [AutoBench 1.0 Demo](https://huggingface.co/spaces/PeterKruger/AutoBench)
* **Read the detailed methodology:** [Detailed Methodology Document](https://huggingface.co/PeterKruger/AutoBench/blob/main/AutoBench_1_0_Detailed_Methodology_Document.pdf)
* **Join the discussion:** [Hugging Face AutoBench Community Discussion](https://huggingface.co/PeterKruger/AutoBench/discussions)
* **Contribute:** Help us by suggesting new topics, refining prompts, or enhancing the weighting algorithm—submit pull requests or issues via the Hugging Face Repo.
## License
[MIT License](LICENSE)
## Contact
[Peter Kruger/eZecute]
[[email protected]] |
HamZurger/ppo-Huggy | HamZurger | "2023-08-25T13:07:29Z" | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-08-25T13:07:17Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
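For example, if training followed the course defaults, the resume command might look like `mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume`; both the config path and the run-id here are assumptions and must match your original training run.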
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HamZurger/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF | mradermacher | "2025-02-28T15:52:11Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:maull04/chatbot_gpt2_healthcaremagic100k",
"base_model:quantized:maull04/chatbot_gpt2_healthcaremagic100k",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-02-28T15:49:46Z" | ---
base_model: maull04/chatbot_gpt2_healthcaremagic100k
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/maull04/chatbot_gpt2_healthcaremagic100k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
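As one possible route (an assumption — any GGUF-capable runtime such as llama.cpp, ollama, or LM Studio works equally well), the sketch below downloads one quant from this repo and runs it through the llama-cpp-python bindings:

```python
# Minimal sketch: fetch a quant and run a prompt through it.
# Assumptions: llama-cpp-python is installed with GGUF support for this
# architecture, and the Q4_K_M file (~0.2 GB) fits comfortably in memory.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF",
    filename="chatbot_gpt2_healthcaremagic100k.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=1024)  # GPT-2's context length is 1024
out = llm("Patient: I have had a persistent cough for two weeks. Doctor:", max_tokens=64)
print(out["choices"][0]["text"])
```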
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/chatbot_gpt2_healthcaremagic100k-GGUF/resolve/main/chatbot_gpt2_healthcaremagic100k.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|