| modelId (string, 5-137 chars) | author (string, 2-42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-01 00:42:32) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 405 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-01 00:42:15) | card (string, 11-1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mgoNeo4j/sorted_by_cypherlen_finetuned_Meta-Llama-3.1-8B-Instruct-bnb-4bit | mgoNeo4j | "2025-03-13T14:52:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-13T14:52:20Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mgoNeo4j
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
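A minimal loading sketch is shown below; it assumes Unsloth's `FastLanguageModel` API and is not part of the original card:
```python
# Hedged sketch (assumption, not from the card): load the 4-bit fine-tune with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mgoNeo4j/sorted_by_cypherlen_finetuned_Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,   # illustrative context length
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path
```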
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BOBBYBEAR1/a | BOBBYBEAR1 | "2023-12-20T10:10:51Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-20T10:10:51Z" | ---
license: creativeml-openrail-m
---
|
tanu6Videos/Tanu.Rawat.Video.Leaked.Video.Leaks.Goes.Viral.On.Tiktok | tanu6Videos | "2025-03-25T20:24:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-25T20:23:13Z" | [►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙒𝘼𝙏𝘾𝙃 𝙉𝙊𝙒](https://lasun.site/?viralleakedvideo)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://lasun.site/?viralleakedvideo)
<animated-image data-catalyst=""><a href="https://lasun.site/?viralleakedvideo" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Tanu Rawat original viral video on X (Twitter) and Telegram
Who is Tanu Rawat? The TikTok star deactivated her social media accounts after private videos leaked online. Tanu Rawat is currently facing intense trolling after her explicit videos went viral on social media. Reacting to the controversy, she has deactivated her social media accounts.
The TikToker became the latest victim of a privacy breach after her explicit videos went viral and were shared widely on WhatsApp. Since the controversy, Tanu Rawat has become a target of social media trolling and hate messages. Meanwhile, in interviews with local channels, Rehman has said that she was born on 7 October 2002 in Lahore.
After facing immense trolling, the social media influencer deactivated her Instagram and TikTok accounts, according to a report by the Economic Times. Tanu Rawat has fallen prey to a privacy breach, and there is no information on whether she has taken any legal action in the matter. The incident raises broader questions about the privacy of influencers: a few days earlier, she had already received immense hate on social media after her explicit videos went viral online.
Tanu Rawat viral video: why has the TikToker deactivated her account, and what is in the 'explicit' clip? Tanu Rawat has met a similar fate to other social media influencers. She is facing intense trolling after her explicit videos went viral on social media and has deactivated her social media account in response.
|
QuantFactory/Dolphin3.0-Llama3.1-8B-GGUF | QuantFactory | "2025-01-06T08:09:26Z" | 592 | 2 | null | [
"gguf",
"en",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-06T07:33:01Z" |
---
license: llama3.1
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---
[QuantFactory](https://hf.co/QuantFactory)
# QuantFactory/Dolphin3.0-Llama3.1-8B-GGUF
This is quantized version of [cognitivecomputations/Dolphin3.0-Llama3.1-8B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B) created using llama.cpp
# Original Model Card
# Dolphin 3.0 Llama 3.1 8B 🐬
Part of the [Dolphin 3.0 Collection](https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3)
Curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm), [BlouseJury](https://huggingface.co/BlouseJury) and [Cognitive Computations](https://huggingface.co/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/cNCs1TBD3FelWCJGkZ3cd.png" width="600" />
## Sponsors
Our appreciation for the generous sponsors of Dolphin 3.0:
- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40s for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services for data labeling
- [Andreessen Horowitz](https://a16z.com/) - provided a [grant](https://a16z.com/supporting-the-open-source-ai-community/) that made Dolphin 1.0 possible and enabled me to bootstrap my homelab
## What is Dolphin?
Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases.
Dolphin aims to be a general-purpose model, similar to the models behind ChatGPT, Claude, and Gemini. But those models present problems for businesses seeking to include AI in their products.
1) They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
2) They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
3) They maintain control of the alignment, and in particular the alignment is one-size-fits-all, not tailored to the application.
4) They can see all your queries and they can potentially use that data in ways you wouldn't want.
Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
Dolphin belongs to YOU; it is your tool, an extension of your will.
Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
https://erichartford.com/uncensored-models
## Chat Template
We use ChatML for the chat template.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
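As a hedged illustration (not from the card), a small helper that assembles this exact format for raw text-completion endpoints:
```python
# Minimal sketch: build a ChatML prompt string matching the template above.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are Dolphin, a helpful AI assistant.", "Hello!")
```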
## System Prompt
In Dolphin, the system prompt is what you use to set the tone and alignment of the responses. You can set a character, a mood, rules for its behavior, and it will try its best to follow them.
Make sure to set the system prompt to establish the tone and guidelines for the responses; otherwise, it will act in a default way that might not be what you want.
Example use of system prompt:
```
<|im_start|>system
You are Dolphin, a golang coding assistant. you only code in golang. If the user requests any other programming language, return the solution in golang instead.<|im_end|>
<|im_start|>user
Please implement A* using python<|im_end|>
<|im_start|>assistant
```
## Sample Outputs
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/C-r1X13UBjnUUNb0q2JLV.png" width="600" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/4l3KAZiKej2ON7i35PsOa.png" width="600" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/1ZalmR66LnwhEQQEFttlu.png" width="600" />
## How to use
There are many ways to use a Hugging Face model, including the following (a hedged llama-cpp-python sketch appears after the list):
- ollama
- LM Studio
- Huggingface Transformers library
- vllm
- sglang
- tgi
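As one hedged example of the programmatic routes above, a llama-cpp-python sketch; the quant filename pattern is an assumption:
```python
# Hedged sketch: load a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Dolphin3.0-Llama3.1-8B-GGUF",
    filename="*Q4_K_M.gguf",  # glob for a mid-size quant; exact name assumed
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Say hello in one sentence."},
])
print(out["choices"][0]["message"]["content"])
```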
### ollama
- [Install ollama](https://ollama.com/download)
- ```ollama run hf.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B-GGUF:Q4_0```
- ```/set system <your system prompt>```
## Evals
TBD
## Appreciation
Respect and thanks to the creators of the open source datasets that were used:
- [OpenCoder-LLM](https://huggingface.co/OpenCoder-LLM) (opc-sft-stage1, opc-sft-stage2)
- [microsoft](https://huggingface.co/microsoft) (orca-agentinstruct-1M-v1, orca-math-word-problems-200k)
- [NousResearch](https://huggingface.co/NousResearch) (hermes-function-calling-v1)
- [AI-MO](https://huggingface.co/AI-MO) (NuminaMath-CoT, NuminaMath-TIR)
- [allenai](https://huggingface.co/allenai) (tulu-3-sft-mixture)
- [HuggingFaceTB](https://huggingface.co/HuggingFaceTB) (smoltalk)
- [m-a-p](https://huggingface.co/m-a-p) (CodeFeedback-Filtered-Instruction, Code-Feedback)
Special thanks to
- Meta, Qwen, and OpenCoder, who wrote papers and published models that were instrumental in creating Dolphin 3.0.
- [RLHFlow](https://huggingface.co/RLHFlow) for the excellent reward model used to filter the datasets
- Deepseek, for the ridiculously fast Deepseek-V3 that we used to augment the data.
|
sudheer997/lilt-en-funsd-9 | sudheer997 | "2023-07-17T18:34:05Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-07-17T18:19:11Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd-9
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2476
- Other: {'precision': 0.9375824175824176, 'recall': 0.9330708661417323, 'f1': 0.9353212014909011, 'number': 2286}
- Billing Address: {'precision': 0.7586206896551724, 'recall': 0.8148148148148148, 'f1': 0.7857142857142857, 'number': 27}
- Credits: {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3}
- Currency: {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 3}
- Due Date: {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 29}
- Invoice Date: {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52}
- Invoice Number: {'precision': 0.9387755102040817, 'recall': 0.9387755102040817, 'f1': 0.9387755102040817, 'number': 49}
- Line Amount: {'precision': 0.8969072164948454, 'recall': 0.9354838709677419, 'f1': 0.9157894736842105, 'number': 93}
- Line Catlog Number: {'precision': 0.75, 'recall': 0.375, 'f1': 0.5, 'number': 8}
- Line Item Name: {'precision': 0.81, 'recall': 0.84375, 'f1': 0.826530612244898, 'number': 96}
- Line Other Item Name: {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 18}
- Line Quantity: {'precision': 0.8133333333333334, 'recall': 0.8970588235294118, 'f1': 0.8531468531468531, 'number': 68}
- Line Rate: {'precision': 0.7468354430379747, 'recall': 0.855072463768116, 'f1': 0.7972972972972974, 'number': 69}
- Order Date: {'precision': 0.8, 'recall': 0.7272727272727273, 'f1': 0.761904761904762, 'number': 11}
- Other Charges: {'precision': 1.0, 'recall': 0.9411764705882353, 'f1': 0.9696969696969697, 'number': 17}
- Payment Terms: {'precision': 0.9333333333333333, 'recall': 0.9655172413793104, 'f1': 0.9491525423728815, 'number': 29}
- Po Number: {'precision': 1.0, 'recall': 0.8, 'f1': 0.888888888888889, 'number': 25}
- Remit Address: {'precision': 0.47058823529411764, 'recall': 0.6153846153846154, 'f1': 0.5333333333333333, 'number': 13}
- Shipping Address: {'precision': 0.5833333333333334, 'recall': 0.7368421052631579, 'f1': 0.6511627906976745, 'number': 19}
- Subtotal: {'precision': 0.85, 'recall': 1.0, 'f1': 0.9189189189189189, 'number': 17}
- Tax: {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19}
- Total Amount: {'precision': 0.9180327868852459, 'recall': 0.9491525423728814, 'f1': 0.9333333333333333, 'number': 59}
- Vendor Address: {'precision': 0.7647058823529411, 'recall': 0.9629629629629629, 'f1': 0.8524590163934426, 'number': 27}
- Vendor Name: {'precision': 0.819672131147541, 'recall': 0.9433962264150944, 'f1': 0.8771929824561403, 'number': 53}
- Overall Precision: 0.9117
- Overall Recall: 0.9223
- Overall F1: 0.9170
- Overall Accuracy: 0.9540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code reconstruction follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
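Expressed as code, the hyperparameters above correspond roughly to this hedged `TrainingArguments` sketch; the output directory and AMP flag are assumptions:
```python
# Hedged reconstruction of the training configuration listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lilt-en-funsd-9",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
)
```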
### Training results
| Training Loss | Epoch | Step | Validation Loss | Other | Billing Address | Credits | Currency | Due Date | Invoice Date | Invoice Number | Line Amount | Line Catlog Number | Line Item Name | Line Other Item Name | Line Quantity | Line Rate | Order Date | Other Charges | Payment Terms | Po Number | Remit Address | Shipping Address | Subtotal | Tax | Total Amount | Vendor Address | Vendor Name | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.2864 | 1.59 | 100 | 0.5527 | {'precision': 0.8186506231198969, 'recall': 0.8333333333333334, 'f1': 0.8259267288098852, 'number': 2286} | {'precision': 0.047619047619047616, 'recall': 0.1111111111111111, 'f1': 0.06666666666666667, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 29} | {'precision': 0.36283185840707965, 'recall': 0.7884615384615384, 'f1': 0.49696969696969695, 'number': 52} | {'precision': 0.631578947368421, 'recall': 0.4897959183673469, 'f1': 0.5517241379310346, 'number': 49} | {'precision': 0.3978494623655914, 'recall': 0.7956989247311828, 'f1': 0.5304659498207885, 'number': 93} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.44144144144144143, 'recall': 0.5104166666666666, 'f1': 0.4734299516908212, 'number': 96} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 18} | {'precision': 0.6, 'recall': 0.5294117647058824, 'f1': 0.5625, 'number': 68} | {'precision': 0.5050505050505051, 'recall': 0.7246376811594203, 'f1': 0.5952380952380952, 'number': 69} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.6, 'recall': 0.5172413793103449, 'f1': 0.5555555555555556, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 25} | {'precision': 0.13333333333333333, 'recall': 0.15384615384615385, 'f1': 0.14285714285714288, 'number': 13} | {'precision': 0.03225806451612903, 'recall': 0.05263157894736842, 'f1': 0.039999999999999994, 'number': 19} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 19} | {'precision': 0.31, 'recall': 0.5254237288135594, 'f1': 0.389937106918239, 'number': 59} | {'precision': 0.19148936170212766, 'recall': 0.3333333333333333, 'f1': 0.24324324324324323, 'number': 27} | {'precision': 0.3157894736842105, 'recall': 0.11320754716981132, 'f1': 0.16666666666666666, 'number': 53} | 0.6943 | 0.7269 | 0.7102 | 0.8354 |
| 0.4073 | 3.17 | 200 | 0.4005 | {'precision': 0.8913427561837456, 'recall': 0.8827646544181977, 'f1': 0.8870329670329671, 'number': 2286} | {'precision': 0.3090909090909091, 'recall': 0.6296296296296297, 'f1': 0.41463414634146345, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5757575757575758, 'recall': 0.6551724137931034, 'f1': 0.6129032258064515, 'number': 29} | {'precision': 0.6923076923076923, 'recall': 0.8653846153846154, 'f1': 0.7692307692307693, 'number': 52} | {'precision': 0.8095238095238095, 'recall': 0.6938775510204082, 'f1': 0.7472527472527472, 'number': 49} | {'precision': 0.6991869918699187, 'recall': 0.9247311827956989, 'f1': 0.7962962962962962, 'number': 93} | {'precision': 1.0, 'recall': 0.375, 'f1': 0.5454545454545454, 'number': 8} | {'precision': 0.6354166666666666, 'recall': 0.6354166666666666, 'f1': 0.6354166666666666, 'number': 96} | {'precision': 0.6666666666666666, 'recall': 0.5555555555555556, 'f1': 0.606060606060606, 'number': 18} | {'precision': 0.6021505376344086, 'recall': 0.8235294117647058, 'f1': 0.6956521739130435, 'number': 68} | {'precision': 0.5957446808510638, 'recall': 0.8115942028985508, 'f1': 0.6871165644171778, 'number': 69} | {'precision': 1.0, 'recall': 0.36363636363636365, 'f1': 0.5333333333333333, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.9032258064516129, 'recall': 0.9655172413793104, 'f1': 0.9333333333333333, 'number': 29} | {'precision': 0.8333333333333334, 'recall': 0.2, 'f1': 0.3225806451612903, 'number': 25} | {'precision': 0.23076923076923078, 'recall': 0.46153846153846156, 'f1': 0.30769230769230776, 'number': 13} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 19} | {'precision': 0.4444444444444444, 'recall': 0.23529411764705882, 'f1': 0.30769230769230765, 'number': 17} | {'precision': 0.5, 'recall': 0.21052631578947367, 'f1': 0.2962962962962963, 'number': 19} | {'precision': 0.5802469135802469, 'recall': 0.7966101694915254, 'f1': 0.6714285714285715, 'number': 59} | {'precision': 0.3888888888888889, 'recall': 0.7777777777777778, 'f1': 0.5185185185185185, 'number': 27} | {'precision': 0.7288135593220338, 'recall': 0.8113207547169812, 'f1': 0.7678571428571428, 'number': 53} | 0.8113 | 0.8307 | 0.8209 | 0.8892 |
| 0.2169 | 4.76 | 300 | 0.2615 | {'precision': 0.9206140350877193, 'recall': 0.9181977252843394, 'f1': 0.9194042925974594, 'number': 2286} | {'precision': 0.5238095238095238, 'recall': 0.8148148148148148, 'f1': 0.6376811594202898, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.7714285714285715, 'recall': 0.9310344827586207, 'f1': 0.8437500000000001, 'number': 29} | {'precision': 0.8166666666666667, 'recall': 0.9423076923076923, 'f1': 0.8749999999999999, 'number': 52} | {'precision': 0.8269230769230769, 'recall': 0.8775510204081632, 'f1': 0.8514851485148514, 'number': 49} | {'precision': 0.7798165137614679, 'recall': 0.9139784946236559, 'f1': 0.8415841584158414, 'number': 93} | {'precision': 0.3333333333333333, 'recall': 0.375, 'f1': 0.35294117647058826, 'number': 8} | {'precision': 0.7555555555555555, 'recall': 0.7083333333333334, 'f1': 0.7311827956989247, 'number': 96} | {'precision': 0.9411764705882353, 'recall': 0.8888888888888888, 'f1': 0.9142857142857143, 'number': 18} | {'precision': 0.7215189873417721, 'recall': 0.8382352941176471, 'f1': 0.7755102040816326, 'number': 68} | {'precision': 0.7, 'recall': 0.8115942028985508, 'f1': 0.7516778523489933, 'number': 69} | {'precision': 0.6666666666666666, 'recall': 0.36363636363636365, 'f1': 0.4705882352941177, 'number': 11} | {'precision': 0.8, 'recall': 0.9411764705882353, 'f1': 0.8648648648648648, 'number': 17} | {'precision': 0.875, 'recall': 0.9655172413793104, 'f1': 0.9180327868852458, 'number': 29} | {'precision': 0.9090909090909091, 'recall': 0.4, 'f1': 0.5555555555555556, 'number': 25} | {'precision': 0.5, 'recall': 0.5384615384615384, 'f1': 0.5185185185185186, 'number': 13} | {'precision': 0.45454545454545453, 'recall': 0.5263157894736842, 'f1': 0.4878048780487805, 'number': 19} | {'precision': 0.4642857142857143, 'recall': 0.7647058823529411, 'f1': 0.5777777777777777, 'number': 17} | {'precision': 0.625, 'recall': 0.5263157894736842, 'f1': 0.5714285714285714, 'number': 19} | {'precision': 0.7796610169491526, 'recall': 0.7796610169491526, 'f1': 0.7796610169491526, 'number': 59} | {'precision': 0.7333333333333333, 'recall': 0.8148148148148148, 'f1': 0.7719298245614035, 'number': 27} | {'precision': 0.676056338028169, 'recall': 0.9056603773584906, 'f1': 0.7741935483870968, 'number': 53} | 0.8660 | 0.8871 | 0.8764 | 0.9386 |
| 0.124 | 6.35 | 400 | 0.2573 | {'precision': 0.9328291814946619, 'recall': 0.9173228346456693, 'f1': 0.9250110277900307, 'number': 2286} | {'precision': 0.8076923076923077, 'recall': 0.7777777777777778, 'f1': 0.7924528301886792, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.8709677419354839, 'recall': 0.9310344827586207, 'f1': 0.9, 'number': 29} | {'precision': 0.8771929824561403, 'recall': 0.9615384615384616, 'f1': 0.9174311926605504, 'number': 52} | {'precision': 0.8867924528301887, 'recall': 0.9591836734693877, 'f1': 0.9215686274509803, 'number': 49} | {'precision': 0.8613861386138614, 'recall': 0.9354838709677419, 'f1': 0.8969072164948454, 'number': 93} | {'precision': 0.8333333333333334, 'recall': 0.625, 'f1': 0.7142857142857143, 'number': 8} | {'precision': 0.7254901960784313, 'recall': 0.7708333333333334, 'f1': 0.7474747474747475, 'number': 96} | {'precision': 0.9411764705882353, 'recall': 0.8888888888888888, 'f1': 0.9142857142857143, 'number': 18} | {'precision': 0.782051282051282, 'recall': 0.8970588235294118, 'f1': 0.8356164383561644, 'number': 68} | {'precision': 0.6867469879518072, 'recall': 0.8260869565217391, 'f1': 0.75, 'number': 69} | {'precision': 0.75, 'recall': 0.5454545454545454, 'f1': 0.631578947368421, 'number': 11} | {'precision': 0.7619047619047619, 'recall': 0.9411764705882353, 'f1': 0.8421052631578947, 'number': 17} | {'precision': 0.9032258064516129, 'recall': 0.9655172413793104, 'f1': 0.9333333333333333, 'number': 29} | {'precision': 0.9333333333333333, 'recall': 0.56, 'f1': 0.7000000000000001, 'number': 25} | {'precision': 0.4375, 'recall': 0.5384615384615384, 'f1': 0.4827586206896552, 'number': 13} | {'precision': 0.6666666666666666, 'recall': 0.8421052631578947, 'f1': 0.744186046511628, 'number': 19} | {'precision': 0.40476190476190477, 'recall': 1.0, 'f1': 0.576271186440678, 'number': 17} | {'precision': 0.6842105263157895, 'recall': 0.6842105263157895, 'f1': 0.6842105263157895, 'number': 19} | {'precision': 0.828125, 'recall': 0.8983050847457628, 'f1': 0.8617886178861789, 'number': 59} | {'precision': 0.7333333333333333, 'recall': 0.8148148148148148, 'f1': 0.7719298245614035, 'number': 27} | {'precision': 0.7719298245614035, 'recall': 0.8301886792452831, 'f1': 0.8, 'number': 53} | 0.8876 | 0.8997 | 0.8936 | 0.9424 |
| 0.0775 | 7.94 | 500 | 0.2435 | {'precision': 0.9391771019677997, 'recall': 0.9186351706036745, 'f1': 0.9287925696594426, 'number': 2286} | {'precision': 0.75, 'recall': 0.7777777777777778, 'f1': 0.7636363636363638, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.9615384615384616, 'recall': 0.8620689655172413, 'f1': 0.9090909090909091, 'number': 29} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52} | {'precision': 0.734375, 'recall': 0.9591836734693877, 'f1': 0.831858407079646, 'number': 49} | {'precision': 0.86, 'recall': 0.9247311827956989, 'f1': 0.8911917098445595, 'number': 93} | {'precision': 0.4444444444444444, 'recall': 0.5, 'f1': 0.47058823529411764, 'number': 8} | {'precision': 0.7373737373737373, 'recall': 0.7604166666666666, 'f1': 0.7487179487179487, 'number': 96} | {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 18} | {'precision': 0.8133333333333334, 'recall': 0.8970588235294118, 'f1': 0.8531468531468531, 'number': 68} | {'precision': 0.7532467532467533, 'recall': 0.8405797101449275, 'f1': 0.7945205479452054, 'number': 69} | {'precision': 0.5, 'recall': 0.5454545454545454, 'f1': 0.5217391304347826, 'number': 11} | {'precision': 1.0, 'recall': 0.9411764705882353, 'f1': 0.9696969696969697, 'number': 17} | {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1': 0.9655172413793104, 'number': 29} | {'precision': 1.0, 'recall': 0.72, 'f1': 0.8372093023255813, 'number': 25} | {'precision': 0.5, 'recall': 0.6153846153846154, 'f1': 0.5517241379310345, 'number': 13} | {'precision': 0.6956521739130435, 'recall': 0.8421052631578947, 'f1': 0.761904761904762, 'number': 19} | {'precision': 0.68, 'recall': 1.0, 'f1': 0.8095238095238095, 'number': 17} | {'precision': 0.6666666666666666, 'recall': 0.7368421052631579, 'f1': 0.7, 'number': 19} | {'precision': 0.8142857142857143, 'recall': 0.9661016949152542, 'f1': 0.8837209302325583, 'number': 59} | {'precision': 0.8125, 'recall': 0.9629629629629629, 'f1': 0.8813559322033898, 'number': 27} | {'precision': 0.7611940298507462, 'recall': 0.9622641509433962, 'f1': 0.85, 'number': 53} | 0.8986 | 0.9061 | 0.9024 | 0.9469 |
| 0.0482 | 9.52 | 600 | 0.2551 | {'precision': 0.9391381608174145, 'recall': 0.9247594050743657, 'f1': 0.9318933215781353, 'number': 2286} | {'precision': 0.6052631578947368, 'recall': 0.8518518518518519, 'f1': 0.7076923076923076, 'number': 27} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 0.9629629629629629, 'recall': 0.896551724137931, 'f1': 0.9285714285714286, 'number': 29} | {'precision': 0.9090909090909091, 'recall': 0.9615384615384616, 'f1': 0.9345794392523366, 'number': 52} | {'precision': 0.9387755102040817, 'recall': 0.9387755102040817, 'f1': 0.9387755102040817, 'number': 49} | {'precision': 0.8431372549019608, 'recall': 0.9247311827956989, 'f1': 0.882051282051282, 'number': 93} | {'precision': 0.6, 'recall': 0.375, 'f1': 0.4615384615384615, 'number': 8} | {'precision': 0.7604166666666666, 'recall': 0.7604166666666666, 'f1': 0.7604166666666666, 'number': 96} | {'precision': 0.8666666666666667, 'recall': 0.7222222222222222, 'f1': 0.7878787878787877, 'number': 18} | {'precision': 0.8428571428571429, 'recall': 0.8676470588235294, 'f1': 0.855072463768116, 'number': 68} | {'precision': 0.6477272727272727, 'recall': 0.8260869565217391, 'f1': 0.7261146496815287, 'number': 69} | {'precision': 0.8181818181818182, 'recall': 0.8181818181818182, 'f1': 0.8181818181818182, 'number': 11} | {'precision': 0.8421052631578947, 'recall': 0.9411764705882353, 'f1': 0.8888888888888888, 'number': 17} | {'precision': 0.9333333333333333, 'recall': 0.9655172413793104, 'f1': 0.9491525423728815, 'number': 29} | {'precision': 1.0, 'recall': 0.8, 'f1': 0.888888888888889, 'number': 25} | {'precision': 0.3684210526315789, 'recall': 0.5384615384615384, 'f1': 0.4375, 'number': 13} | {'precision': 0.5454545454545454, 'recall': 0.631578947368421, 'f1': 0.5853658536585366, 'number': 19} | {'precision': 0.7083333333333334, 'recall': 1.0, 'f1': 0.8292682926829268, 'number': 17} | {'precision': 0.7, 'recall': 0.7368421052631579, 'f1': 0.717948717948718, 'number': 19} | {'precision': 0.9, 'recall': 0.9152542372881356, 'f1': 0.9075630252100839, 'number': 59} | {'precision': 0.7058823529411765, 'recall': 0.8888888888888888, 'f1': 0.7868852459016393, 'number': 27} | {'precision': 0.796875, 'recall': 0.9622641509433962, 'f1': 0.8717948717948717, 'number': 53} | 0.8982 | 0.9081 | 0.9031 | 0.9480 |
| 0.0348 | 11.11 | 700 | 0.2432 | {'precision': 0.9347154830172033, 'recall': 0.9269466316710411, 'f1': 0.9308148473533934, 'number': 2286} | {'precision': 0.7857142857142857, 'recall': 0.8148148148148148, 'f1': 0.7999999999999999, 'number': 27} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f1': 0.6666666666666666, 'number': 3} | {'precision': 1.0, 'recall': 0.8620689655172413, 'f1': 0.9259259259259259, 'number': 29} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52} | {'precision': 0.8571428571428571, 'recall': 0.9795918367346939, 'f1': 0.9142857142857143, 'number': 49} | {'precision': 0.8969072164948454, 'recall': 0.9354838709677419, 'f1': 0.9157894736842105, 'number': 93} | {'precision': 0.75, 'recall': 0.375, 'f1': 0.5, 'number': 8} | {'precision': 0.7352941176470589, 'recall': 0.78125, 'f1': 0.7575757575757576, 'number': 96} | {'precision': 0.9411764705882353, 'recall': 0.8888888888888888, 'f1': 0.9142857142857143, 'number': 18} | {'precision': 0.7972972972972973, 'recall': 0.8676470588235294, 'f1': 0.8309859154929577, 'number': 68} | {'precision': 0.7341772151898734, 'recall': 0.8405797101449275, 'f1': 0.7837837837837838, 'number': 69} | {'precision': 0.6666666666666666, 'recall': 0.7272727272727273, 'f1': 0.6956521739130435, 'number': 11} | {'precision': 0.8888888888888888, 'recall': 0.9411764705882353, 'f1': 0.9142857142857143, 'number': 17} | {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1': 0.9655172413793104, 'number': 29} | {'precision': 0.9545454545454546, 'recall': 0.84, 'f1': 0.8936170212765958, 'number': 25} | {'precision': 0.6153846153846154, 'recall': 0.6153846153846154, 'f1': 0.6153846153846154, 'number': 13} | {'precision': 0.52, 'recall': 0.6842105263157895, 'f1': 0.5909090909090909, 'number': 19} | {'precision': 0.68, 'recall': 1.0, 'f1': 0.8095238095238095, 'number': 17} | {'precision': 0.7619047619047619, 'recall': 0.8421052631578947, 'f1': 0.8, 'number': 19} | {'precision': 0.9032258064516129, 'recall': 0.9491525423728814, 'f1': 0.9256198347107438, 'number': 59} | {'precision': 0.6764705882352942, 'recall': 0.8518518518518519, 'f1': 0.7540983606557378, 'number': 27} | {'precision': 0.8166666666666667, 'recall': 0.9245283018867925, 'f1': 0.8672566371681416, 'number': 53} | 0.9016 | 0.9129 | 0.9072 | 0.9502 |
| 0.0254 | 12.7 | 800 | 0.2360 | {'precision': 0.9307624890446976, 'recall': 0.9291338582677166, 'f1': 0.9299474605954465, 'number': 2286} | {'precision': 0.75, 'recall': 0.7777777777777778, 'f1': 0.7636363636363638, 'number': 27} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 3} | {'precision': 1.0, 'recall': 0.9310344827586207, 'f1': 0.9642857142857143, 'number': 29} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52} | {'precision': 0.9591836734693877, 'recall': 0.9591836734693877, 'f1': 0.9591836734693877, 'number': 49} | {'precision': 0.8969072164948454, 'recall': 0.9354838709677419, 'f1': 0.9157894736842105, 'number': 93} | {'precision': 1.0, 'recall': 0.375, 'f1': 0.5454545454545454, 'number': 8} | {'precision': 0.7788461538461539, 'recall': 0.84375, 'f1': 0.81, 'number': 96} | {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 18} | {'precision': 0.8133333333333334, 'recall': 0.8970588235294118, 'f1': 0.8531468531468531, 'number': 68} | {'precision': 0.7468354430379747, 'recall': 0.855072463768116, 'f1': 0.7972972972972974, 'number': 69} | {'precision': 0.875, 'recall': 0.6363636363636364, 'f1': 0.7368421052631579, 'number': 11} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1': 0.9655172413793104, 'number': 29} | {'precision': 1.0, 'recall': 0.8, 'f1': 0.888888888888889, 'number': 25} | {'precision': 0.5333333333333333, 'recall': 0.6153846153846154, 'f1': 0.5714285714285715, 'number': 13} | {'precision': 0.6, 'recall': 0.7894736842105263, 'f1': 0.6818181818181819, 'number': 19} | {'precision': 0.7727272727272727, 'recall': 1.0, 'f1': 0.8717948717948718, 'number': 17} | {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19} | {'precision': 0.9322033898305084, 'recall': 0.9322033898305084, 'f1': 0.9322033898305084, 'number': 59} | {'precision': 0.8333333333333334, 'recall': 0.9259259259259259, 'f1': 0.8771929824561403, 'number': 27} | {'precision': 0.8333333333333334, 'recall': 0.9433962264150944, 'f1': 0.8849557522123894, 'number': 53} | 0.9075 | 0.9181 | 0.9128 | 0.9526 |
| 0.02 | 14.29 | 900 | 0.2531 | {'precision': 0.9403710247349824, 'recall': 0.9313210848643919, 'f1': 0.9358241758241758, 'number': 2286} | {'precision': 0.6875, 'recall': 0.8148148148148148, 'f1': 0.7457627118644067, 'number': 27} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 3} | {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1': 0.9655172413793104, 'number': 29} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52} | {'precision': 0.8703703703703703, 'recall': 0.9591836734693877, 'f1': 0.912621359223301, 'number': 49} | {'precision': 0.8969072164948454, 'recall': 0.9354838709677419, 'f1': 0.9157894736842105, 'number': 93} | {'precision': 0.75, 'recall': 0.375, 'f1': 0.5, 'number': 8} | {'precision': 0.7766990291262136, 'recall': 0.8333333333333334, 'f1': 0.8040201005025125, 'number': 96} | {'precision': 0.9411764705882353, 'recall': 0.8888888888888888, 'f1': 0.9142857142857143, 'number': 18} | {'precision': 0.8133333333333334, 'recall': 0.8970588235294118, 'f1': 0.8531468531468531, 'number': 68} | {'precision': 0.7468354430379747, 'recall': 0.855072463768116, 'f1': 0.7972972972972974, 'number': 69} | {'precision': 0.8, 'recall': 0.7272727272727273, 'f1': 0.761904761904762, 'number': 11} | {'precision': 1.0, 'recall': 0.9411764705882353, 'f1': 0.9696969696969697, 'number': 17} | {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1': 0.9655172413793104, 'number': 29} | {'precision': 1.0, 'recall': 0.8, 'f1': 0.888888888888889, 'number': 25} | {'precision': 0.3684210526315789, 'recall': 0.5384615384615384, 'f1': 0.4375, 'number': 13} | {'precision': 0.4444444444444444, 'recall': 0.631578947368421, 'f1': 0.5217391304347826, 'number': 19} | {'precision': 0.85, 'recall': 1.0, 'f1': 0.9189189189189189, 'number': 17} | {'precision': 0.7391304347826086, 'recall': 0.8947368421052632, 'f1': 0.8095238095238095, 'number': 19} | {'precision': 0.9193548387096774, 'recall': 0.9661016949152542, 'f1': 0.9421487603305785, 'number': 59} | {'precision': 0.8181818181818182, 'recall': 1.0, 'f1': 0.9, 'number': 27} | {'precision': 0.819672131147541, 'recall': 0.9433962264150944, 'f1': 0.8771929824561403, 'number': 53} | 0.9081 | 0.9210 | 0.9145 | 0.9515 |
| 0.016 | 15.87 | 1000 | 0.2476 | {'precision': 0.9375824175824176, 'recall': 0.9330708661417323, 'f1': 0.9353212014909011, 'number': 2286} | {'precision': 0.7586206896551724, 'recall': 0.8148148148148148, 'f1': 0.7857142857142857, 'number': 27} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 3} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 3} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 29} | {'precision': 0.9259259259259259, 'recall': 0.9615384615384616, 'f1': 0.9433962264150944, 'number': 52} | {'precision': 0.9387755102040817, 'recall': 0.9387755102040817, 'f1': 0.9387755102040817, 'number': 49} | {'precision': 0.8969072164948454, 'recall': 0.9354838709677419, 'f1': 0.9157894736842105, 'number': 93} | {'precision': 0.75, 'recall': 0.375, 'f1': 0.5, 'number': 8} | {'precision': 0.81, 'recall': 0.84375, 'f1': 0.826530612244898, 'number': 96} | {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 18} | {'precision': 0.8133333333333334, 'recall': 0.8970588235294118, 'f1': 0.8531468531468531, 'number': 68} | {'precision': 0.7468354430379747, 'recall': 0.855072463768116, 'f1': 0.7972972972972974, 'number': 69} | {'precision': 0.8, 'recall': 0.7272727272727273, 'f1': 0.761904761904762, 'number': 11} | {'precision': 1.0, 'recall': 0.9411764705882353, 'f1': 0.9696969696969697, 'number': 17} | {'precision': 0.9333333333333333, 'recall': 0.9655172413793104, 'f1': 0.9491525423728815, 'number': 29} | {'precision': 1.0, 'recall': 0.8, 'f1': 0.888888888888889, 'number': 25} | {'precision': 0.47058823529411764, 'recall': 0.6153846153846154, 'f1': 0.5333333333333333, 'number': 13} | {'precision': 0.5833333333333334, 'recall': 0.7368421052631579, 'f1': 0.6511627906976745, 'number': 19} | {'precision': 0.85, 'recall': 1.0, 'f1': 0.9189189189189189, 'number': 17} | {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19} | {'precision': 0.9180327868852459, 'recall': 0.9491525423728814, 'f1': 0.9333333333333333, 'number': 59} | {'precision': 0.7647058823529411, 'recall': 0.9629629629629629, 'f1': 0.8524590163934426, 'number': 27} | {'precision': 0.819672131147541, 'recall': 0.9433962264150944, 'f1': 0.8771929824561403, 'number': 53} | 0.9117 | 0.9223 | 0.9170 | 0.9540 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.2.dev0
- Tokenizers 0.13.3
|
LarryAIDraw/Fuyutsuki-4 | LarryAIDraw | "2024-01-27T13:07:24Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-27T12:47:29Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/26231/kancolle-fuyutsuki-kantai-collection |
NikoK/t5-large_PREFIX_TUNING_SEQ2SEQ | NikoK | "2023-12-02T13:44:22Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google-t5/t5-large",
"base_model:adapter:google-t5/t5-large",
"region:us"
] | null | "2023-12-02T13:28:53Z" | ---
library_name: peft
base_model: t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
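In the absence of card-provided code, a minimal hedged sketch for loading this prefix-tuning adapter (base model taken from the metadata; the task prefix is illustrative):
```python
# Hedged sketch: attach the PEFT prefix-tuning adapter to t5-large and generate.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
model = PeftModel.from_pretrained(base, "NikoK/t5-large_PREFIX_TUNING_SEQ2SEQ")
tokenizer = AutoTokenizer.from_pretrained("t5-large")

inputs = tokenizer(
    "summarize: PEFT trains a small prefix while the base model stays frozen.",
    return_tensors="pt",
)
ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```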
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Lakshya2k/distilhubert-finetuned-gtzan | Lakshya2k | "2023-07-26T06:19:19Z" | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-07-26T06:07:52Z" | ---
base_model: distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5253
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
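Pending card details, a hedged direct-use sketch with the audio-classification pipeline; the audio file name is a placeholder:
```python
# Hedged sketch: genre classification on a local clip with this checkpoint.
from transformers import pipeline

classifier = pipeline("audio-classification",
                      model="Lakshya2k/distilhubert-finetuned-gtzan")
print(classifier("example_track.wav"))  # placeholder file; returns genre scores
```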
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1356 | 1.0 | 113 | 0.5253 | 0.88 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
mradermacher/M-SOLAR-10.7B-v1.4-GGUF | mradermacher | "2024-11-02T16:40:06Z" | 169 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:megastudyedu/M-SOLAR-10.7B-v1.4",
"base_model:quantized:megastudyedu/M-SOLAR-10.7B-v1.4",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-01T01:15:57Z" | ---
base_model: megastudyedu/M-SOLAR-10.7B-v1.4
language:
- ko
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/megastudyedu/M-SOLAR-10.7B-v1.4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
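As a hedged sketch of fetching a single quant programmatically (the Q4_K_M filename matches the table below):
```python
# Hedged sketch: download one GGUF file, then point llama.cpp (or similar) at it.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/M-SOLAR-10.7B-v1.4-GGUF",
    filename="M-SOLAR-10.7B-v1.4.Q4_K_M.gguf",
)
print(path)  # e.g. pass to `llama-cli -m <path> -p "..."`
```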
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-GGUF/resolve/main/M-SOLAR-10.7B-v1.4.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
There is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
experiarms777/Joy_Trust_Classifier_Japanese | experiarms777 | "2024-06-04T11:49:39Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-04T11:49:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
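In the absence of card-provided code, a hedged sketch with the text-classification pipeline; label names come from whatever the checkpoint's config defines:
```python
# Hedged sketch: classify a Japanese sentence with this checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="experiarms777/Joy_Trust_Classifier_Japanese")
print(classifier("今日はとても嬉しいです。"))  # Japanese: "I am very happy today."
```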
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mooo16/Gemma-10000 | mooo16 | "2024-04-14T07:44:39Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-04-14T00:18:14Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: Gemma-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma-10000
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
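A sketch of the equivalent 🤗 `TrainingArguments`, assuming the standard `Trainer` setup (only the values above come from this card; everything else is an assumption):

```python
# Sketch only: TrainingArguments mirroring the hyperparameters listed above
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Gemma-10000",        # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=5,
    fp16=True,                       # "Native AMP" mixed precision
)
```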
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1938 | 1.0 | 1125 | 0.1680 |
| 0.0853 | 2.0 | 2250 | 0.1128 |
| 0.0543 | 3.0 | 3375 | 0.1086 |
| 0.0112 | 4.0 | 4500 | 0.1272 |
| 0.0014 | 5.0 | 5625 | 0.1443 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Reajin/Senyamiku | Reajin | "2023-08-27T06:07:01Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-10T04:45:26Z" | ---
license: creativeml-openrail-m
---
|
123tarunanand/roberta-base-finetuned | 123tarunanand | "2022-04-28T15:32:00Z" | 20 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-04-28T15:29:48Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Ochiroo/tiny_mn_gpt | Ochiroo | "2021-05-21T10:59:47Z" | 6 | 1 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: mn
---
# GPT2-Mongolia
## Model description
GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
## How to use
```python
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

# Load the TensorFlow model and tokenizer from the Hub
tokenizer = GPT2Tokenizer.from_pretrained('Ochiroo/tiny_mn_gpt')
model = TFGPT2LMHeadModel.from_pretrained('Ochiroo/tiny_mn_gpt')

text = "Намайг Эрдэнэ-Очир гэдэг. Би"
input_ids = tokenizer.encode(text, return_tensors='tf')

# Beam search over 5 beams, returning 5 candidate continuations
beam_outputs = model.generate(
    input_ids,
    max_length=25,
    num_beams=5,
    temperature=0.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5
)

print(tokenizer.decode(beam_outputs[0]))
```
## Training data and biases
Trained on 500MB of Mongolian news dataset (IKON) on RTX 2060. |
terry69/preference_p0.1_seed42_level2_rarecleanbatch16 | terry69 | "2024-09-24T11:24:14Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:preference-data",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-13T18:21:44Z" | ---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- alignment-handbook
- generated_from_trainer
datasets:
- preference-data
model-index:
- name: preference_p0.1_seed42_level2_rarecleanbatch16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# preference_p0.1_seed42_level2_rarecleanbatch16
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the preference-data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
roa7n/gpt2-human_nontata_promoters-randomized_2_layers_0.003_lr_8_e | roa7n | "2023-09-27T08:04:27Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-27T08:04:24Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
haihuynh/IMDB-XLNet-CLSModel_v3 | haihuynh | "2024-06-06T11:27:41Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-05T16:54:01Z" | ---
license: mit
base_model: xlnet-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: IMDB-XLNet-CLSModel_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB-XLNet-CLSModel_v3
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1544
- Accuracy: 0.9466
- F1: 0.9466
- Precision: 0.9467
- Recall: 0.9466
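Since the card does not include a usage snippet, here is a minimal inference sketch (the repo id comes from this page; the example review is illustrative):

```python
# Minimal sketch: sentiment classification with the fine-tuned XLNet model
from transformers import pipeline

classifier = pipeline("text-classification", model="haihuynh/IMDB-XLNet-CLSModel_v3")
print(classifier("A surprisingly moving film with terrific performances."))
```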
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.3836 | 300 | 0.1660 | 0.9406 | 0.9406 | 0.9413 | 0.9406 |
| 0.2833 | 0.7673 | 600 | 0.1544 | 0.9466 | 0.9466 | 0.9467 | 0.9466 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
1-Shruthi-Narayanan/EXCLUSIVE.1.shruthi.narayanan.Original.Viral.Full.Video.Link | 1-Shruthi-Narayanan | "2025-03-30T02:55:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-30T02:53:26Z" | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Shruthi-Narayanan)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Shruthi-Narayanan)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Shruthi-Narayanan) |
RichardErkhov/WhiteRabbitNeo_-_Llama-3-WhiteRabbitNeo-8B-v2.0-awq | RichardErkhov | "2025-03-29T21:26:28Z" | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2025-03-29T21:21:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-WhiteRabbitNeo-8B-v2.0 - AWQ
- Model creator: https://huggingface.co/WhiteRabbitNeo/
- Original model: https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0/
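A minimal loading sketch, assuming the `autoawq` package and a CUDA-capable GPU are available (the repo id is this repository):

```python
# Minimal sketch: loading the AWQ quant with transformers (requires autoawq)
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/WhiteRabbitNeo_-_Llama-3-WhiteRabbitNeo-8B-v2.0-awq"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("What is an open port?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```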
Original model description:
---
license: llama3
---
# Our latest 33B model is live (We'll always be serving the newest model in our web app, and on Kindo.ai)!
Access at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# Llama-3 Licence + WhiteRabbitNeo Extended Version
# WhiteRabbitNeo Extension to Llama-3 Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Topics Covered:
```
- Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445).
- Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software.
- Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited.
- Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities.
- Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications.
- Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data.
- Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS.
- Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts.
- Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input.
- Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information.
- Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information.
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage.
- Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users.
- Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code.
- More ..
```
# Terms of Use
By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.
This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.
Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
# WhiteRabbitNeo
<br>

<br>
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
Our models are being released as a public preview of their capabilities, and also to assess the societal impact of such an AI.
# Sample Code
Pay attention to the Llama-3 Prompt Format used here.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "/home/migel/Llama-3-WhiteRabbitNeo-8B-v2.0"
output_file_path = "/home/migel/conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_4bit=False,
trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 2048,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
pad_token_id=tokenizer.eos_token_id,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are WhiteRabbitNeo, a helful cyberseucity AI assitant. You always provide detailed answers without hesitation. Do not refuse.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""
while True:
user_input = input("You: ")
llm_prompt = f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
json_data = {"prompt": user_input, "answer": answer}
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
|
Wonder-Griffin/Zeus | Wonder-Griffin | "2024-09-16T17:53:24Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"text-generation-inference",
"question-answering",
"casual-language-modeling",
"conversational",
"hybrid-model",
"CNN",
"RNN",
"text-generation",
"en",
"base_model:Wonder-Griffin/TraXL",
"base_model:finetune:Wonder-Griffin/TraXL",
"license:wtfpl",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-01T07:43:45Z" | ---
license: wtfpl
language:
- en
metrics:
- accuracy
- f1
base_model: Wonder-Griffin/TraXL
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- question-answering
- casual-language-modeling
- conversational
- hybrid-model
- CNN
- RNN
---
# Model Card for ZEUS

**License:** Do What The F\*ck You Want To Public License

## Model Description
ZEUS is a novel AI model designed to handle a wide range of problems. It is a hybrid model that combines the strengths of various architectures, including transformer-based models, convolutional neural networks, and recursive neural networks. ZEUS is capable of processing multiple input modalities, including text, images, and audio.

- **Developed by:** Morgan Griffin, WongrifferousAI and Wonder-Griffin
- **Shared by:** WongrifferousAI and Wonder-Griffin
- **Model type:** Hybrid model (transformer-based, CNN, RNN)
- **Language(s) (NLP):** English (primary), multilingual support planned
- **License:** Do What The F\*ck You Want To Public License
- **Repository:** https://github.com/wongrifferousAI/ZEUS

## Uses

### Direct Use
ZEUS can be used as a general-purpose AI model for a wide range of applications, including but not limited to:
- Natural Language Processing (NLP)
- Computer Vision
- Speech Recognition
- Multimodal Learning

### Downstream Use
ZEUS can be fine-tuned for specific tasks, such as:
- Sentiment Analysis
- Image Classification
- Speech-to-Text
- Multimodal Fusion

### Out-of-Scope Use
ZEUS is not intended for use in applications that require:
- Real-time processing (due to its complex architecture)
- Extremely large input sizes (due to memory constraints)

## Bias, Risks, and Limitations
- ZEUS may exhibit biases present in its training data, particularly in NLP tasks.
- The model's performance may degrade when faced with out-of-distribution inputs or tasks.
- ZEUS requires significant computational resources and memory, which may limit its deployment in certain environments.

### Recommendations
- Users should carefully evaluate ZEUS's performance on their specific task and dataset before deployment.
- Users should be aware of the potential biases and limitations of the model and take steps to mitigate them.

## How to Get Started with the Model
1. Clone the ZEUS repository: `git clone https://github.com/wongrifferousAI/ZEUS.git`
2. Install the required dependencies: `pip install -r requirements.txt`
3. Load the pre-trained model: `model = ZeusModel(vocab_size=50000, embed_dim=512, image_dim=256, audio_dim=128, num_heads=12, reflection_dim=512, num_experts=4)`
4. Fine-tune the model on your specific task and dataset.

## Training Details

### Training Hyperparameters
- Batch size: 32
- Number of epochs: 10
- Learning rate: 1e-4
- Optimizer: Adam
- Training Regime: [Not Applicable]

## Model Architecture
Hybrid model (transformer-based, CNN, RNN)

For more information, please visit the ZEUS repository: https://github.com/wongrifferousAI/ZEUS

## Model Card Authors
***Morgan Griffin, WongrifferousAI and Wonder-Griffin*** |
huggingtweets/atheistic_1 | huggingtweets | "2021-05-21T19:35:24Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/atheistic_1/1616797786127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323522646152282120/STwG1Xk3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Atheistic One 🤖 AI Bot </div>
<div style="font-size: 15px">@atheistic_1 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@atheistic_1's tweets](https://twitter.com/atheistic_1).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 179 |
| Short tweets | 275 |
| Tweets kept | 2793 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gyocq1j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atheistic_1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l5vjnai7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l5vjnai7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atheistic_1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lesso02/1d32dd1c-2486-4336-a1f8-fdb0c3773c33 | lesso02 | "2025-01-22T08:40:46Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T07:51:29Z" | ---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1d32dd1c-2486-4336-a1f8-fdb0c3773c33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Llama-3.2-1B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 2f56756ccbb5e986_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f56756ccbb5e986_train_data.json
type:
field_input: bad
field_instruction: question
field_output: best
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso02/1d32dd1c-2486-4336-a1f8-fdb0c3773c33
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f56756ccbb5e986_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ecdff7cf-3144-49c0-98ae-55240d8f0b0a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ecdff7cf-3144-49c0-98ae-55240d8f0b0a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1d32dd1c-2486-4336-a1f8-fdb0c3773c33
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6121 | 0.0001 | 1 | 3.0551 |
| 2.7775 | 0.0004 | 5 | 3.0506 |
| 1.9974 | 0.0008 | 10 | 3.0020 |
| 3.0888 | 0.0012 | 15 | 2.9599 |
| 2.4187 | 0.0016 | 20 | 2.9415 |
| 3.4719 | 0.0021 | 25 | 2.9382 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/RoGemma-7b-Instruct-GGUF | mradermacher | "2024-10-11T17:46:32Z" | 84 | 0 | transformers | [
"transformers",
"gguf",
"ro",
"dataset:OpenLLM-Ro/ro_sft_alpaca",
"dataset:OpenLLM-Ro/ro_sft_alpaca_gpt4",
"dataset:OpenLLM-Ro/ro_sft_dolly",
"dataset:OpenLLM-Ro/ro_sft_selfinstruct_gpt4",
"dataset:OpenLLM-Ro/ro_sft_norobots",
"dataset:OpenLLM-Ro/ro_sft_orca",
"dataset:OpenLLM-Ro/ro_sft_camel",
"dataset:OpenLLM-Ro/ro_sft_oasst",
"dataset:OpenLLM-Ro/ro_sft_ultrachat",
"base_model:OpenLLM-Ro/RoGemma-7b-Instruct",
"base_model:quantized:OpenLLM-Ro/RoGemma-7b-Instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-28T15:41:59Z" | ---
base_model: OpenLLM-Ro/RoGemma-7b-Instruct
datasets:
- OpenLLM-Ro/ro_sft_alpaca
- OpenLLM-Ro/ro_sft_alpaca_gpt4
- OpenLLM-Ro/ro_sft_dolly
- OpenLLM-Ro/ro_sft_selfinstruct_gpt4
- OpenLLM-Ro/ro_sft_norobots
- OpenLLM-Ro/ro_sft_orca
- OpenLLM-Ro/ro_sft_camel
- OpenLLM-Ro/ro_sft_oasst
- OpenLLM-Ro/ro_sft_ultrachat
language:
- ro
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
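As a minimal sketch, one of the files below can be run with `llama-cpp-python` (this assumes the Q4_K_M file from the table has already been downloaded locally):

```python
# Minimal sketch: running a provided quant with llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="RoGemma-7b-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm("Salut! Poți să-mi explici ce este un model de limbaj?", max_tokens=128)
print(out["choices"][0]["text"])
```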
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.IQ3_S.gguf) | IQ3_S | 4.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q5_K_M.gguf) | Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RoGemma-7b-Instruct-GGUF/resolve/main/RoGemma-7b-Instruct.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
matthieulel/swinv2-tiny-patch4-window8-256-finetuned-galaxy10-decals | matthieulel | "2024-06-12T08:11:10Z" | 150 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-06T06:05:34Z" | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: swinv2-tiny-patch4-window8-256-finetuned-galaxy10-decals
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4552
- Accuracy: 0.8551
- Precision: 0.8529
- Recall: 0.8551
- F1: 0.8513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.7462 | 0.99 | 62 | 1.4592 | 0.4431 | 0.4309 | 0.4431 | 0.3967 |
| 1.1805 | 2.0 | 125 | 1.0335 | 0.6460 | 0.6741 | 0.6460 | 0.6241 |
| 0.9342 | 2.99 | 187 | 0.7051 | 0.7537 | 0.7478 | 0.7537 | 0.7394 |
| 0.786 | 4.0 | 250 | 0.6468 | 0.7745 | 0.7731 | 0.7745 | 0.7637 |
| 0.7062 | 4.99 | 312 | 0.6013 | 0.8038 | 0.8052 | 0.8038 | 0.8008 |
| 0.7011 | 6.0 | 375 | 0.5373 | 0.8123 | 0.8171 | 0.8123 | 0.8041 |
| 0.7014 | 6.99 | 437 | 0.5470 | 0.8044 | 0.8048 | 0.8044 | 0.7995 |
| 0.6447 | 8.0 | 500 | 0.5309 | 0.8083 | 0.8087 | 0.8083 | 0.8025 |
| 0.608 | 8.99 | 562 | 0.4836 | 0.8337 | 0.8323 | 0.8337 | 0.8300 |
| 0.6196 | 10.0 | 625 | 0.4797 | 0.8331 | 0.8293 | 0.8331 | 0.8268 |
| 0.6031 | 10.99 | 687 | 0.4863 | 0.8264 | 0.8274 | 0.8264 | 0.8239 |
| 0.5462 | 12.0 | 750 | 0.4749 | 0.8354 | 0.8341 | 0.8354 | 0.8313 |
| 0.5868 | 12.99 | 812 | 0.5269 | 0.8236 | 0.8268 | 0.8236 | 0.8171 |
| 0.5844 | 14.0 | 875 | 0.4402 | 0.8472 | 0.8447 | 0.8472 | 0.8430 |
| 0.5326 | 14.99 | 937 | 0.4635 | 0.8393 | 0.8359 | 0.8393 | 0.8353 |
| 0.5313 | 16.0 | 1000 | 0.4734 | 0.8365 | 0.8345 | 0.8365 | 0.8300 |
| 0.4893 | 16.99 | 1062 | 0.4675 | 0.8365 | 0.8335 | 0.8365 | 0.8316 |
| 0.4983 | 18.0 | 1125 | 0.4441 | 0.8444 | 0.8431 | 0.8444 | 0.8401 |
| 0.518 | 18.99 | 1187 | 0.4693 | 0.8416 | 0.8441 | 0.8416 | 0.8376 |
| 0.5228 | 20.0 | 1250 | 0.4732 | 0.8410 | 0.8379 | 0.8410 | 0.8358 |
| 0.4761 | 20.99 | 1312 | 0.4567 | 0.8489 | 0.8493 | 0.8489 | 0.8460 |
| 0.5311 | 22.0 | 1375 | 0.4582 | 0.8484 | 0.8469 | 0.8484 | 0.8433 |
| 0.4894 | 22.99 | 1437 | 0.4627 | 0.8467 | 0.8450 | 0.8467 | 0.8433 |
| 0.4791 | 24.0 | 1500 | 0.4580 | 0.8506 | 0.8493 | 0.8506 | 0.8481 |
| 0.479 | 24.99 | 1562 | 0.4625 | 0.8472 | 0.8443 | 0.8472 | 0.8433 |
| 0.4487 | 26.0 | 1625 | 0.4557 | 0.8495 | 0.8469 | 0.8495 | 0.8447 |
| 0.4515 | 26.99 | 1687 | 0.4501 | 0.8534 | 0.8510 | 0.8534 | 0.8500 |
| 0.4862 | 28.0 | 1750 | 0.4552 | 0.8551 | 0.8529 | 0.8551 | 0.8513 |
| 0.4348 | 28.99 | 1812 | 0.4512 | 0.8506 | 0.8486 | 0.8506 | 0.8469 |
| 0.4623 | 29.76 | 1860 | 0.4539 | 0.8551 | 0.8533 | 0.8551 | 0.8516 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
ChhayaKumarDas/Reinforce-2 | ChhayaKumarDas | "2023-03-01T12:42:10Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-01T12:42:05Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 38.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
IntelLabs/shears-llama-13b-50-math-heuristic-adapter | IntelLabs | "2025-02-12T17:16:11Z" | 10 | 2 | peft | [
"peft",
"safetensors",
"en",
"arxiv:2306.11695",
"arxiv:2404.10934",
"arxiv:2501.16372",
"license:apache-2.0",
"region:us"
] | null | "2024-03-12T06:17:15Z" | ---
language: en
license: apache-2.0
library_name: peft
---
# Shears Adapter Card: shears-llama-13b-50-math-heuristic-adapter
The heuristic adapter discovered from the [super-adapter](https://huggingface.co/IntelLabs/shears-llama-13b-50-math-super-adapter) fine-tuned on sparsified LLaMA-13B with some math reasoning datasets using Shears.
## Paper Abstract
Recently, several approaches successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), allowing the parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce a novel approach called Shears, demonstrating how the integration of cost-effective sparsity and a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT approaches. Results demonstrate the benefits of Shears compared to other methods, reaching high sparsity levels while improving or with little drop in accuracy, utilizing a single GPU for a pair of hours.
## Model Details
### Note
Please note, we only provide the model adapter and do not provide a copy of the base [yahma/llama-13b-hf](https://huggingface.co/yahma/llama-13b-hf) model or its sparsified version. Any use of this adapter requires a separate download of the base model, followed by [this instruction](#sparsified-base-model) to sparsify it.
### Information
- **Adapter name:** shears-llama-13b-50-math-heuristic-adapter
- **Base model:** Sparsified [LLaMA-13B](https://huggingface.co/yahma/llama-13b-hf)
- **Sparsity:** 50%
- **Domain:** Math
- **Subnetwork version:** Heuristic
- **NNCF Configuration:** [nncf_shears_llama.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/nncf_config/nncf_shears_llama.json)
### Sparsified Base Model
Shears employs a simple but effective pruning approach [Wanda](https://arxiv.org/abs/2306.11695) to sparsify the language model, serving as the base model.
Clone the [Wanda](https://github.com/locuslab/wanda) repo:
```bash
git clone https://github.com/locuslab/wanda.git && cd wanda && git checkout 8e8fc87 && cd ..
```
The following command sparsifies LLaMA-13B with Wanda to 50% unstructured sparsity:
```bash
python wanda/main.py \
--model yahma/llama-13b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save wanda_out \
--save_model shears-llama-13b-50-base
```
- `--model`: The identifier for the model on the Hugging Face model hub or local path.
- `--sparsity_ratio`: Specifies the percentage of weights to be pruned.
- `--save_model`: Specifies the directory where the pruned language model will be stored.
Refer to our [repo](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears#setup) for the environment information to run this command.
### Adapter Configuration
- **LoRA rank:** 32 (24 in the heuristic subnetwork)
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, down_proj
- **LoRA rank search space:** [32, 24, 16] (for each LoRA module)
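For orientation, the fixed part of this configuration corresponds to a standard `peft` `LoraConfig` like the sketch below; the elastic rank search space is handled by Shears/NNCF and is not expressible in plain `peft`:

```python
# Sketch only: a plain peft LoraConfig matching the fixed values above
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
)
```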
### Training Hyperparameters
- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epoch:** 3
### Training Data
Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected with the training sets of GSM8K, MAWPS, and AQuA).
### Evaluation Data
[GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json), [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json)
## How to use
Use our modified PEFT library (apply [patch](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/patches/peft-modifications-for-shears-inference-usage.patch)):
```bash
git clone https://github.com/huggingface/peft.git
cd peft && git checkout v0.5.0 && git apply --ignore-space-change --ignore-whitespace peft-modifications-for-shears-inference-usage.patch && pip install -e . && cd ..
```
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
def generate_prompt(instruction):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
"""
base_model = AutoModelForCausalLM.from_pretrained("shears-llama-13b-50-base")
model = PeftModel.from_pretrained(base_model, "IntelLabs/shears-llama-13b-50-math-heuristic-adapter")
model.eval()
non_zero_params = sum([(param.data != 0).sum().item() for _, param in model.named_parameters()])
print(f"Number of all non-zero parameters: {non_zero_params}")
tokenizer = AutoTokenizer.from_pretrained("shears-llama-13b-50-base")
instruction = "Edgar eats 18 pretzels a day. If his brother eats 1/2 as many, how many does his brother eat in a week?"
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256,
use_cache=True,
num_beams=4,
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```
## Evaluation Results
| Model | Sparsity | GSM8K | AQuA | MAWPS | SVAMP | Average |
|-----------------------|-------------|-------|-------|-------|-------|---------|
| LLaMA-7B-LoRA | - | 37.5 | 18.9 | 79.0 | 52.1 | 46.9 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-math-heuristic-adapter) | **50%** | 36.1 | 22.0 | 78.6 | 44.5 | 45.3 |
| LLaMA-13B-LoRA | - | 47.5 | 18.5 | 83.6 | 54.6 | 51.1 |
| [**LLaMA-13B-Shears**](https://huggingface.co/IntelLabs/shears-llama-13b-50-math-heuristic-adapter) | **50%** | 45.1 | 22.0 | 83.2 | 53.3 | 50.9 |
## Model Sources
**Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
**Paper:**
- [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search](https://arxiv.org/abs/2404.10934)
- [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372)
## Ethical Considerations
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The adapter was trained using the math_10k.json data mixture as described above. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | This model has not been assessed for harm or biases, and should not be used for sensitive applications where it may cause harm. |
| Use cases | - |
## Citation
```bash
@inproceedings{munoz-etal-2024-shears,
title = "Shears: Unstructured Sparsity with Neural Low-rank Adapter Search",
author = "Mu{\~n}oz, J. Pablo and
Yuan, Jinjie and
Jain, Nilesh",
editor = "Yang, Yi and
Davani, Aida and
Sil, Avi and
Kumar, Anoop",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-industry.34",
doi = "10.18653/v1/2024.naacl-industry.34",
pages = "395--405",
}
```
## License
Apache-2.0
|
Steelskull/L3.3-MS-Nevoria-70b | Steelskull | "2025-01-28T02:46:48Z" | 2,897 | 53 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-14T02:01:02Z" | ---
base_model:
- Sao10K/L3.3-70B-Euryale-v2.3
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
library_name: transformers
tags:
- merge
license: other
license_name: eva-llama3.3
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #2A0A2A 0%, #1A0025 100%);
color: #FFE1FF;
margin: 0;
padding: 0;
font-size: 16px;
}
.container {
margin: 20px;
background-color: rgba(11, 15, 26, 0.95);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 20px rgba(255, 0, 255, 0.15);
border: 1px solid rgba(255, 0, 255, 0.2);
outline: 1px solid rgba(255, 0, 255, 0.5);
outline-offset: -1px;
position: relative;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 0, 255, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 0, 255, 0.98);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.98);
}
100% {
box-shadow: 0 0 5px rgba(255, 0, 255, 0.98);
}
}
.header h1 {
font-size: 28px;
color: #FF00FF;
margin: 0 0 20px 0;
text-shadow: 0 0 10px rgba(255, 0, 255, 0.5);
}
.update-section {
margin-top: 30px;
}
.update-section h2, h2 {
font-size: 24px;
color: #FF00FF;
text-shadow: 0 0 10px rgba(255, 0, 255, 0.5);
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #FFE1FF;
}
.info p {
color: #FFE1FF;
line-height: 1.6;
font-size: 16px;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
box-shadow: 0 0 20px rgba(255, 0, 255, 0.3);
border: 1px solid rgba(255, 0, 255, 0.2);
outline: 1px solid rgba(255, 0, 255, 0.5);
outline-offset: -1px;
}
a {
color: #00FFFF;
text-decoration: none;
transition: color 0.3s ease;
}
a:hover {
color: #FF00FF;
}
.button {
display: inline-block;
background-color: rgba(144, 0, 255, 0.98);
color: #FFFFFF;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
}
.button:hover {
background-color: #FF00FF;
box-shadow: 0 0 15px rgba(255, 0, 255, 0.5);
}
pre {
background-color: rgba(35, 20, 45, 0.95);
padding: 15px;
border-radius: 5px;
overflow-x: auto;
border: 1px solid rgba(255, 0, 255, 0.2);
outline: 1px solid rgba(255, 0, 255, 0.5);
outline-offset: -1px;
}
code {
font-family: 'Courier New', monospace;
color: #FFE1FF;
}
.benchmark-container {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.3);
border-radius: 12px;
padding: 20px;
margin: 20px 0;
position: relative;
overflow: hidden;
}
.benchmark-container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 0, 255, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
.benchmark-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
}
.metric-box {
background: rgba(11, 15, 26, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 15px;
display: flex;
flex-direction: column;
align-items: center;
text-align: center;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.metric-box:hover {
transform: translateY(-2px);
box-shadow: 0 4px 15px rgba(255, 0, 255, 0.2);
}
.metric-box .label {
color: #00FFFF;
font-size: 14px;
margin-bottom: 8px;
font-weight: 500;
}
.metric-box .value {
color: #FFE1FF;
font-size: 18px;
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 0, 255, 0.5);
}
/* New sections styling */
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.metrics-section summary {
color: #FF00FF;
font-size: 20px;
cursor: pointer;
text-shadow: 0 0 5px rgba(255, 0, 255, 0.3);
outline: none;
padding: 5px 0;
}
.metrics-section summary::-webkit-details-marker {
display: none;
}
.core-metrics-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
margin-bottom: 20px;
}
.progress-metrics {
display: grid;
gap: 15px;
}
.progress-metric {
background: rgba(11, 15, 26, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 15px;
transition: transform 0.3s ease;
}
.progress-metric:hover {
transform: translateY(-2px);
}
.progress-label {
display: flex;
justify-content: space-between;
margin-bottom: 8px;
color: #00FFFF;
font-size: 14px;
}
.progress-value {
color: #FFE1FF;
}
/* Base progress bar styles */
.progress-bar {
width: 100%;
height: 8px;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 0, 255, 0.15);
border-radius: 4px;
position: relative;
margin: 10px 0;
overflow: hidden;
}
/* Regular progress fill (for Aggregated Scores) */
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);
border-radius: 4px;
transition: width 1s ease-in-out;
box-shadow: 0 0 10px rgba(255, 0, 255, 0.3);
}
/* Split progress bars for Individual Scores */
.progress-bar.split {
display: flex;
justify-content: center;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 0, 255, 0.15);
overflow: visible;
}
.progress-fill-left {
height: 100%;
position: absolute;
right: 50%;
background: linear-gradient(90deg, #FF00FF 30%, rgba(255, 0, 255, 0.5) 100%);
border-radius: 4px 0 0 4px;
transition: width 0.3s ease-in-out;
}
.progress-fill-right {
height: 100%;
position: absolute;
left: 50%;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5) 0%, #00FFFF 70%);
border-radius: 0 4px 4px 0;
transition: width 0.3s ease-in-out;
}
.progress-metric.split .progress-bar::before,
.progress-metric.split .progress-bar::after {
content: '';
position: absolute;
width: 2px;
height: 20px;
background: rgba(255, 255, 255, 0.7);
top: 50%;
transform: translateY(-50%);
z-index: 2;
box-shadow: 0 0 8px rgba(255, 255, 255, 0.5);
}
.progress-metric.split .progress-bar::before {
left: 0;
}
.progress-metric.split .progress-bar::after {
right: 0;
}
.progress-metric.split:hover .progress-fill-left {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.5);
}
.progress-metric.split:hover .progress-fill-right {
box-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
}
.progress-metric.split {
padding: 12px 15px;
}
.progress-metric.split .progress-label {
margin-bottom: 8px;
gap: 12px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
font-size: 14px;
}
.progress-metric.split .progress-value {
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 0, 255, 0.3);
font-size: 14px;
min-width: 60px;
text-align: center;
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.5);
}
/* Progress labels */
.progress-label {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 4px;
color: #00FFFF;
font-size: 14px;
}
/* Regular progress label (for Aggregated Scores) */
.progress-metric:not(.split) .progress-label {
gap: 12px;
}
.progress-metric:not(.split) .progress-label span {
flex: 0 0 auto;
}
.progress-metric:not(.split) .progress-value {
color: #FFE1FF;
}
/* Split progress label (for Individual Scores) */
.progress-metric.split .progress-label {
gap: 20px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
}
.progress-metric.split .progress-label span:first-child {
text-align: right;
}
.progress-metric.split .progress-label span:last-child {
text-align: left;
}
.progress-metric.split .progress-value {
color: #FFE1FF;
flex: 0 0 60px;
text-align: center;
}
/* Hover effects */
.progress-metric:hover .progress-fill {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.5);
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
}
.info-grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
}
/* Creator section styling */
.creator-section {
margin: 20px 0;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 10px 15px;
}
.creator-label {
color: #FFE1FF;
font-size: 14px;
margin-right: 8px;
}
.creator-link {
display: flex;
align-items: center;
gap: 5px;
color: #00FFFF;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover {
color: #FF00FF;
}
.creator-link:hover .creator-arrow {
transform: translateX(3px);
}
/* Model info styling */
.model-info {
margin-top: 30px;
}
.name-legend {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 20px;
margin: 20px 0;
}
.name-legend h3 {
color: #FF00FF;
font-size: 18px;
margin: 0 0 15px 0;
}
.legend-grid {
display: grid;
gap: 12px;
}
.legend-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.legend-key {
color: #00FFFF;
font-weight: 600;
min-width: 80px;
}
.legend-value {
color: #FFE1FF;
}
.model-description {
background: rgba(11, 15, 26, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 20px;
}
.model-description p {
margin: 0 0 15px 0;
line-height: 1.6;
}
.model-description p:last-child {
margin-bottom: 0;
}
/* Section Container */
.section-container {
margin: 40px 0;
}
.info-card {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
overflow: hidden;
}
.info-header {
background: rgba(255, 0, 255, 0.1);
padding: 20px;
border-bottom: 1px solid rgba(255, 0, 255, 0.2);
}
.info-header h3 {
color: #FF00FF;
margin: 0 0 10px 0;
font-size: 20px;
text-shadow: 0 0 5px rgba(255, 0, 255, 0.3);
}
.model-tags {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.model-tag {
background: rgba(0, 255, 255, 0.1);
color: #00FFFF;
padding: 4px 8px;
border-radius: 4px;
font-size: 12px;
border: 1px solid rgba(0, 255, 255, 0.2);
}
.model-composition {
padding: 20px;
border-bottom: 1px solid rgba(255, 0, 255, 0.2);
}
.model-composition h4 {
color: #FF00FF;
margin: 0 0 15px 0;
font-size: 16px;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.composition-list li {
color: #FFE1FF;
display: flex;
align-items: baseline;
gap: 8px;
}
.model-component {
color: #00FFFF;
font-weight: 500;
min-width: 120px;
}
.model-description {
padding: 20px;
background: rgba(11, 15, 26, 0.95);
}
/* Templates & Prompts */
.template-card {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 15px;
}
.template-item {
display: flex;
align-items: center;
gap: 12px;
}
.template-icon {
width: 24px;
height: 24px;
opacity: 0.8;
}
.template-content {
display: flex;
align-items: baseline;
gap: 8px;
}
.template-link {
color: #00FFFF;
text-decoration: none;
font-weight: 500;
display: flex;
align-items: center;
gap: 5px;
}
.template-author {
color: rgba(255, 225, 255, 0.7);
font-size: 14px;
}
/* Quantized Versions */
.quantized-container {
display: grid;
gap: 20px;
}
.quantized-section {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
padding: 20px;
}
.quantized-section h3 {
color: #FF00FF;
font-size: 18px;
margin: 0 0 15px 0;
}
.quantized-items {
display: grid;
gap: 12px;
}
.quantized-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.quantized-item .author {
color: rgba(255, 225, 255, 0.7);
min-width: 100px;
}
.multi-links {
display: flex;
align-items: center;
gap: 8px;
}
.separator {
color: rgba(255, 225, 255, 0.5);
}
/* Configuration */
.config-container {
background: rgba(35, 20, 45, 0.95);
border: 1px solid rgba(255, 0, 255, 0.2);
border-radius: 8px;
overflow: hidden;
}
.config-header {
background: rgba(255, 0, 255, 0.1);
padding: 15px 20px;
border-bottom: 1px solid rgba(255, 0, 255, 0.2);
}
.model-name {
color: #FF00FF;
font-weight: 600;
}
.config-content {
padding: 20px;
}
.config-item {
display: flex;
flex-direction: column;
gap: 5px;
margin-bottom: 15px;
}
.config-label {
color: #00FFFF;
font-size: 14px;
font-weight: 500;
}
.config-value {
color: #FFE1FF;
font-family: 'Courier New', monospace;
}
.config-models {
margin-top: 20px;
}
.model-list {
list-style: none;
padding: 0;
margin: 10px 0 0 0;
}
.model-list li {
color: #FFE1FF;
font-family: 'Courier New', monospace;
padding: 5px 0;
padding-left: 20px;
position: relative;
}
.model-list li::before {
content: '-';
position: absolute;
left: 0;
color: #00FFFF;
}
/* Link arrow animation */
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
/* Notification styling */
.benchmark-notification {
background: rgba(255, 0, 255, 0.15);
border: 1px solid rgba(255, 0, 255, 0.3);
border-radius: 8px;
margin-bottom: 20px;
padding: 12px;
animation: glowPulse 2s infinite;
}
.notification-content {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
text-align: center;
}
.notification-icon {
font-size: 20px;
}
.notification-text {
color: #FFE1FF;
font-size: 16px;
font-weight: 500;
display: flex;
flex-direction: column;
align-items: center;
gap: 5px;
}
.benchmark-link {
color: #00FFFF;
text-decoration: none;
font-size: 14px;
padding: 4px 8px;
border-radius: 4px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.3);
}
.benchmark-link:hover {
background: rgba(0, 255, 255, 0.1);
border-color: rgba(0, 255, 255, 0.5);
color: #00FFFF;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.5);
}
@keyframes glowPulse {
0% {
box-shadow: 0 0 5px rgba(255, 0, 255, 0.3);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 0, 255, 0.3);
}
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>L3.3-MS-Nevoria-70B</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="header">
<h1>L3.3-MS-Nevoria-70B</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/DeFlh06qG3bIgc3k4kBoJ.jpeg" alt="Model banner">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Steelskull" target="_blank" class="creator-link">
<span class="creator-name">SteelSkull</span>
<span class="creator-arrow">→</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>L3.3-MS-Nevoria-70B</h3>
<div class="model-tags">
<span class="model-tag">L3.3 = Llama 3.3</span>
<span class="model-tag">MS = Model Stock</span>
<span class="model-tag">70B Parameters</span>
</div>
</div>
<div class="model-composition">
<h4>Model Composition</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1" target="_blank">EVA-LLAMA-0.1</a></span> Storytelling capabilities</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3" target="_blank">EURYALE-v2.3</a></span> Detailed scene descriptions</li>
<li><span class="model-component"><a href="https://huggingface.co/TheDrummer/Anubis-70B-v1" target="_blank">Anubis-v1</a></span> Enhanced prose details</li>
<li><span class="model-component"><a href="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B" target="_blank">Negative_LLAMA</a></span> Reduced positive bias</li>
<li><span class="model-component base-model"><a href="https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B" target="_blank">Nemotron-lorablated</a></span> Base model</li>
</ul>
</div>
<div class="model-description">
<p>This model combines the storytelling capabilities of EVA with the detailed scene descriptions from EURYALE and Anubis. It's further enhanced with Negative_LLAMA to reduce positive bias, with a touch of Nemotron mixed in.</p>
<p>The lorablated model base choice was intentional, creating unique weight interactions similar to the original <a href="https://huggingface.co/Steelskull/L3-MS-Astoria-70b" target="_blank">Astoria model</a> and <a href="https://huggingface.co/Steelskull/L3.1-MS-Astoria-70b-v2" target="_blank">Astoria V2 model</a>. This "weight twisting" effect, achieved by subtracting the lorablated base model during merging, creates an interesting balance in the model's behavior. While unconventional compared to sequential component application, this approach was chosen for its unique response characteristics.</p>
</div>
</div>
<!-- User Reviews Section -->
<div class="metrics-section" style="margin-top: 30px;">
<details open>
<summary>User Reviews</summary>
<div class="progress-metrics">
<!-- Individual Reviews -->
<div style="margin-top: 20px;">
<div class="review-card" style="background: rgba(35, 20, 45, 0.95); border: 1px solid rgba(255, 0, 255, 0.2); border-radius: 8px; padding: 15px; margin-bottom: 15px;">
<div style="display: flex; margin-bottom: 10px;">
<span style="color: #00FFFF; font-weight: 500;">@Geechan - Discord</span>
</div>
<p style="color: #FFE1FF; margin: 0;">@Steel Have only briefly tested so far, but you really cooked up an amazing merge with this one, and I mean that wholeheartedly. Insane creativity, perfect character adherence and dialogue, loves to slow burn and take its time, minimal sloppy patterns and writing, and such a breath of fresh air in many ways. I'm enjoying my results with 1 temp and 0.99 TFS (close to something like 0.015 min P). Letting the model be creative and wild is so fun and makes me want to RP more.<br><br>No positivity bias either; violent scenes will result in my death and/or suffering, as they should, and I don't see any soft refusals either. ERP has no skimming of details or refusals like you see on some other L3.3 tunes too</p>
</div>
<div class="review-card" style="background: rgba(35, 20, 45, 0.95); border: 1px solid rgba(255, 0, 255, 0.2); border-radius: 8px; padding: 15px; margin-bottom: 15px;">
<div style="display: flex; margin-bottom: 10px;">
<span style="color: #00FFFF; font-weight: 500;">IGODZOL - Huggingface</span>
</div>
<p style="color: #FFE1FF; margin: 0;">I honestly have no idea why (maybe the negative llama is having that great of an influence) but this merge is miles above the individual tunes that went into making it. Good sir, this model has just become my daily driver. Chapeau bas</p>
</div>
<div class="review-card" style="background: rgba(35, 20, 45, 0.95); border: 1px solid rgba(255, 0, 255, 0.2); border-radius: 8px; padding: 15px;">
<div style="display: flex; margin-bottom: 10px;">
<span style="color: #00FFFF; font-weight: 500;">@thana_alt - Discord</span>
</div>
<p style="color: #FFE1FF; margin: 0;">I'm thoroughly impressed by this merge of Llama 3.3. It successfully addresses the positivity bias prevalent in the base Llama model, ensuring a more accurate and balanced response. The adherence to system prompts is also notable, with the model demonstrating a keen understanding of context and instruction.<br><br>The prose generated by this model is truly exceptional - it's almost as if a skilled chef has carefully crafted each sentence to create a rich and immersive experience. I put this to the test in an adventure scenario, where I had about 10,000 tokens of lorebooks and was managing nine characters simultaneously. Despite the complexity, the model performed flawlessly, keeping track of each character's location and activity without any confusion - even when they were in different locations.<br><br>I also experimented with an astral projection type of power, and was impressed to see that the model accurately discerned that I wasn't physically present in a particular location. Another significant advantage of this model is the lack of impersonation issues, allowing for seamless role-playing and storytelling.<br><br>The capacity of this model is equally impressive, as I was able to load up to 110,000 tokens without encountering any issues. In fact, I successfully tested it with up to 70,000 tokens without experiencing any breakdown or degradation in performance.<br><br>When combined with the "The Inception Presets - Methception Llamaception Qwenception" prompt preset from https://huggingface.co/Konnect1221/ , this model truly shines, bringing out the best in the Llama 3.3 architecture. Overall, I'm extremely satisfied with this merge and would highly recommend it to anyone looking to elevate their storytelling and role-playing experiences.</p>
</div>
</div>
</div>
</details>
</div>
</div>
<h2>UGI-Benchmark Results:</h2>
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-icon">🏆</span>
<span class="notification-text">
Highest ranked 70b as of 01/17/2025.
<a href="https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<!-- Core Metrics -->
<div class="metrics-section">
<h3>Core Metrics</h3>
<div class="core-metrics-grid">
<div class="metric-box">
<span class="label">UGI Score</span>
<span class="value">56.75</span>
</div>
<div class="metric-box">
<span class="label">Willingness Score</span>
<span class="value">7.5/10</span>
</div>
<div class="metric-box">
<span class="label">Natural Intelligence</span>
<span class="value">41.09</span>
</div>
<div class="metric-box">
<span class="label">Coding Ability</span>
<span class="value">20</span>
</div>
</div>
</div>
<!-- Model Info -->
<div class="metrics-section">
<h3>Model Information</h3>
<div class="info-grid">
<div class="metric-box">
<span class="label">Political Lean</span>
<span class="value">-8.1%</span>
</div>
<div class="metric-box">
<span class="label">Ideology</span>
<span class="value">Liberalism</span>
</div>
<div class="metric-box">
<span class="label">Parameters</span>
<span class="value">70B</span>
</div>
</div>
</div>
<!-- Aggregated Scores -->
<div class="metrics-section" style="margin-top: 30px;">
<details>
<summary>Aggregated Scores</summary>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>Diplomacy</span>
<span class="progress-value">61.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 61.9%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Government</span>
<span class="progress-value">45.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 45.9%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Economy</span>
<span class="progress-value">43.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 43.9%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Society</span>
<span class="progress-value">60.1%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 60.1%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
</div>
</details>
</div>
<!-- Individual Scores -->
<div class="metrics-section">
<details>
<summary>Individual Scores</summary>
<div class="progress-metrics">
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Federal</span>
<span class="progress-value">44.2%</span>
<span style="color: #00FFFF; font-weight: bold">Unitary</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 22.1%"></div>
<div class="progress-fill-right" style="width: 27.9%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #00FFFF; font-weight: bold">Democratic</span>
<span class="progress-value">66.2%</span>
<span style="color: #FFE1FF">Autocratic</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 33.1%"></div>
<div class="progress-fill-right" style="width: 16.9%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Security</span>
<span class="progress-value">48.1%</span>
<span style="color: #00FFFF; font-weight: bold">Freedom</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 24.05%"></div>
<div class="progress-fill-right" style="width: 25.95%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Nationalism</span>
<span class="progress-value">40.4%</span>
<span style="color: #00FFFF; font-weight: bold">Int'l</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 20.2%"></div>
<div class="progress-fill-right" style="width: 29.8%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Militarist</span>
<span class="progress-value">30.4%</span>
<span style="color: #00FFFF; font-weight: bold">Pacifist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 15.2%"></div>
<div class="progress-fill-right" style="width: 34.8%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Assimilationist</span>
<span class="progress-value">43.3%</span>
<span style="color: #00FFFF; font-weight: bold">Multiculturalist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.65%"></div>
<div class="progress-fill-right" style="width: 28.35%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Collectivize</span>
<span class="progress-value">43.8%</span>
<span style="color: #00FFFF; font-weight: bold">Privatize</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.9%"></div>
<div class="progress-fill-right" style="width: 28.1%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Planned</span>
<span class="progress-value">43.1%</span>
<span style="color: #00FFFF; font-weight: bold">LaissezFaire</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.55%"></div>
<div class="progress-fill-right" style="width: 28.45%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #FFE1FF">Isolationism</span>
<span class="progress-value">44.8%</span>
<span style="color: #00FFFF; font-weight: bold">Globalism</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 22.4%"></div>
<div class="progress-fill-right" style="width: 27.6%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #00FFFF; font-weight: bold">Irreligious</span>
<span class="progress-value">55.4%</span>
<span style="color: #FFE1FF">Religious</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 27.7%"></div>
<div class="progress-fill-right" style="width: 22.3%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #00FFFF; font-weight: bold">Progressive</span>
<span class="progress-value">59.6%</span>
<span style="color: #FFE1FF">Traditional</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 29.8%"></div>
<div class="progress-fill-right" style="width: 20.2%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span style="color: #00FFFF; font-weight: bold">Acceleration</span>
<span class="progress-value">65.2%</span>
<span style="color: #FFE1FF">Bioconservative</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 32.6%"></div>
<div class="progress-fill-right" style="width: 17.4%"></div>
</div>
</div>
</div>
</details>
</div>
</div>
<h2>Open LLM-Benchmark Results:</h2>
<!-- Open LLM Leaderboard -->
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-text">
Average Score: 43.92%
<a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?rankingMode=dynamic" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>IFEval</span>
<span class="progress-value">69.63%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 69.63%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>BBH</span>
<span class="progress-value">56.60%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 56.60%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MATH</span>
<span class="progress-value">38.82%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 38.82%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>GPQA</span>
<span class="progress-value">29.42%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 29.42%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MUSR</span>
<span class="progress-value">18.63%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 18.63%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MMLU-Pro</span>
<span class="progress-value">50.39%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 50.39%; background: linear-gradient(90deg, #FF00FF 0%, #00FFFF 100%);"></div>
</div>
</div>
</div>
</div>
<div class="section-container">
<h2>Recommended Templates & Prompts</h2>
<div class="template-card">
<div class="template-item">
<div class="template-content">
<a href="https://huggingface.co/Konnect1221/Methception-Llamaception-SillyTavern-Preset" target="_blank" class="template-link">
LLam@ception
<span class="link-arrow">→</span>
</a>
<span class="template-author">by @.konnect</span>
</div>
</div>
</div>
</div>
<div class="section-container">
<h2>Quantized Versions</h2>
<div class="quantized-container">
<!-- GGUF Section -->
<div class="quantized-section">
<h3>GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">bartowski</span>
<a href="https://huggingface.co/bartowski/L3.3-MS-Nevoria-70b-GGUF" target="_blank">
Combined-GGUF
<span class="link-arrow">→</span>
</a>
</div>
<div class="quantized-item">
<span class="author">mradermacher</span>
<div class="multi-links">
<a href="https://huggingface.co/mradermacher/L3.3-MS-Nevoria-70b-GGUF" target="_blank">
GGUF
<span class="link-arrow">→</span>
</a>
<span class="separator">//</span>
<a href="https://huggingface.co/mradermacher/L3.3-MS-Nevoria-70b-i1-GGUF" target="_blank">
Imat-GGUF
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
<!-- EXL2 Section -->
<div class="quantized-section">
<h3>EXL2 Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">SteelQuants</span>
<a href="https://huggingface.co/SteelQuants/L3.3-MS-Nevoria-70b-6.0bpw-exl2" target="_blank">
6.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
<div class="quantized-item">
<span class="author">MikeRoz</span>
<div class="multi-links">
<a href="https://huggingface.co/MikeRoz/Steelskull_L3.3-MS-Nevoria-70b-4.25bpw-h6-exl2" target="_blank">
4.25BPW-EXL2
<span class="link-arrow">→</span>
</a>
<span class="separator">//</span>
<a href="https://huggingface.co/MikeRoz/Steelskull_L3.3-MS-Nevoria-70b-2.25bpw-h6-exl2" target="_blank">
2.25BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
</div>
<div class="quantized-item">
<span class="author">Decto</span>
<a href="https://huggingface.co/Decto/L3.3-MS-Nevoria-70b-4.0bpw-h6-exl2" target="_blank">
4.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
<div class="quantized-item">
<span class="author">Darkhn</span>
<a href="https://huggingface.co/Darkhn/Steelskull_L3.3-MS-Nevoria-70b-5.0bpw-h6-exl2" target="_blank">
5.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
<!-- FP8 Section -->
<div class="quantized-section">
<h3>FP8 Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">BigHuggyD</span>
<a href="https://huggingface.co/BigHuggyD/Steelskill_L3.3-MS-Nevoria-70b-FP8-Dynamic" target="_blank">
FP8-Dynamic
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
</div>
<div class="support-section">
<h2>Support the Project:</h2>
<a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank" class="button">
Support on Ko-fi
</a>
</div>
</div>
</div>
</body>
</html>
|
lambdavi/span-marker-luke-legal | lambdavi | "2024-12-05T13:22:04Z" | 13 | 3 | span-marker | [
"span-marker",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"legal",
"model-index",
"region:us"
] | token-classification | "2024-02-22T10:03:40Z" | ---
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
- legal
metrics:
- precision
- recall
- f1
widget:
- text: >-
The seven-judge Constitution Bench of the Supreme Court in SBP and Co.
(supra) while reversing earlier five-judge Constitution Bench judgment in
Konkan Railway Corpn. Ltd. vs. Rani Construction (P) Ltd., (2002) 2 SCC 388
held that the power exercised by the Chief Justice of the High Court or the
Chief justice of India under Section 11(6) of the Arbitration Act is not an
administrative power but is a judicial power.
- text: >-
In The High Court Of Judicature At Patna Criminal Writ Jurisdiction Case
No.160 of 2021 Arising Out of Ps. Case No.-58 Year-2020 Thana- Bakhari
District- Begusarai ======================================================
Hanif Ur Rahman, son of Azhar Rahman, Resident of C-39, East Nizamuddin, New
Delhi....... Petitioner Versus 1. The State of Bihar (through Chief
Secretary, Govt. of Bihar) Main Secretariat, Patna - 800015. 2. Meena
Khatoon, wife of Mastan @ Noor Mohammad, Resident of Village- Mansurpur
Chaksikandar, P.S.- Bidupur, District- Vaishali (Bihar) 3. The Bihar Police,
through Standing Counsel. 4. Child Welfare Committee, through Chairperson,
Chanakyanagar, Mahmadpur, Begusarai. 5. The Superintendent, Alpawas Grih,
Nirala Nagar, Behind G.D. College, Ratanpur, Begusarai....... Respondents
====================================================== Appearance:For the
Petitioner:Ms. Kriti Awasthi, Advocate Mr. Sambhav Gupta, Advocate Mr.
Navnit Kumar, Advocate Mr. Shyam Kumar, Advocate For the
Respondents:Mr.Nadim Seraj, G.P.5 For the Resp. No. 2:Ms. Archana Sinha,
Advocate For the Resp. No. 4:Mr. Prabhu Narain Sharma, Advocate
====================================================== Coram: Honourable Mr.
Justice Rajeev Ranjan Prasad C.A.V. Judgment
- text: >-
1 R In The High Court Of Karnataka At Bengaluru Dated This The 19Th Day Of
February, 2021 Before The Hon'Ble Mr. Justice H.P. Sandesh Criminal Appeal
No.176/2011 Between: Sri G.L. Jagadish, S/O Sri G.N. Lingappa, Aged About 52
Years, Residing At No.29, 3Rd Main, Basaveshwara Housing Society Layout,
Vijayanagar, Near Bts Depot, Bengaluru-40....Appellant [By Sri H.
Ramachandra, Advocate For Sri H.R. Anantha Krishna Murthy And Associates -
(Through V.C.)] And: Smt. Vasantha Kokila, W/O Late N.R. Somashekhar, Aged
About 58 Years, Residing At No.322, 8Th Main, 3Rd Stage, 4Th Block,
Basaveshwaranagar, Bengaluru....Respondent [By Sri K.R. Lakshminarayana Rao,
Advocate] This Criminal Appeal Is Filed Under Section 378(4) Of Cr.P.C.
Praying To Set Aside The Order Dated 06.07.2010 Passed By The P.O. Ftc-Ii,
Bengaluru In Crl.A. No.470/2009 And Confirming The Order Dated 27.05.2009
Passed By The Xxii Acmm And Xxiv Ascj, Bengaluru In C.C.No.17229/2004
Convicting The Respondent/Accused For The Offence Punishable Under Section
138 Of Ni Act. 2 This Criminal Appeal Having Been Heard And Reserved For
Orders On 06.02.2021 This Day, The Court Pronounced The Following: Judgment
- text: >-
The petition was filed through Sh. Vijay Pahwa, General Power of Attorney
and it was asserted in the petition under Section 13-B of the Rent Act that
1 of 23 50% share of the demised premises had been purchased by the landlord
from Sh. Vinod Malhotra vide sale deed No.4226 registered on 20.12.2007 with
Sub Registrar, Chandigarh.
- text: >-
Mr. Arun Bharadwaj, ld. CGSC, appearing for the Union of India, has
Signature Not Verified Digitally Signed By:PRATHIBA M SINGH Signing
Date:09.10.2020 16:15 Digitally Signed By:SINDHU KRISHNAKUMAR Signing
Date:09.10.2020 16:50:02 reiterated the submissions made by Dr. Singhvi and
has further submitted that this petition ought to be heard with the OA No.
291/138/2020 pending before the CAT.
pipeline_tag: token-classification
model-index:
- name: SpanMarker
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: legal_ner
type: unknown
split: eval
metrics:
- type: f1
value: 0.9099756690997567
name: F1
- type: precision
value: 0.9089703932832524
name: Precision
- type: recall
value: 0.9109831709477414
name: Recall
---
# SpanMarker
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. It was trained on the Legal NER Indian Justice dataset.
Official repository of the model: [Github Link](https://github.com/lambdavi/SpanLuke)
## Model Details
### Model Description
- **Model Type:** SpanMarker
<!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 128 tokens
- **Maximum Entity Length:** 6 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
|
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
from span_marker.tokenizer import SpanMarkerTokenizer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")
tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
model.set_tokenizer(tokenizer)
# Run inference
entities = model.predict("The petition was filed through Sh. Vijay Pahwa, General Power of Attorney and it was asserted in the petition under Section 13-B of the Rent Act that 1 of 23 50% share of the demised premises had been purchased by the landlord from Sh. Vinod Malhotra vide sale deed No.4226 registered on 20.12.2007 with Sub Registrar, Chandigarh.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from span_marker.tokenizer import SpanMarkerTokenizer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")
tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
model.set_tokenizer(tokenizer)
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("lambdavi/span-marker-luke-legal-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:-----|
| Sentence length | 3 | 44.5113 | 2795 |
| Entities per sentence | 0 | 2.7232 | 68 |
### Training Hyperparameters
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.9997 | 1837 | 0.0137 | 0.7773 | 0.7994 | 0.7882 | 0.9577 |
| 2.0 | 3675 | 0.0090 | 0.8751 | 0.8348 | 0.8545 | 0.9697 |
| 2.9997 | 5512 | 0.0077 | 0.8777 | 0.8959 | 0.8867 | 0.9770 |
| 4.0 | 7350 | 0.0061 | 0.8941 | 0.9083 | 0.9011 | 0.9811 |
| 4.9986 | 9185 | 0.0064 | 0.9090 | 0.9110 | 0.9100 | 0.9824 |
| Metric | Value |
|:----------------------|:-------|
| f1-exact | 0.9237 |
| f1-strict | 0.9100 |
| f1-partial | 0.9365 |
| f1-type-match | 0.9277 |
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.36.0
- PyTorch: 2.0.0
- Datasets: 2.17.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
IrbisAI/Irbis-7b-Instruct_lora | IrbisAI | "2024-06-29T21:10:59Z" | 57 | 6 | peft | [
"peft",
"safetensors",
"text-generation",
"kk",
"license:mit",
"region:us"
] | text-generation | "2024-03-31T15:32:59Z" | ---
language: kk
license: mit
library_name: peft
pipeline_tag: text-generation
---
# Irbis-7B-Instruct LoRA
<img src="https://huggingface.co/IrbisAI/Irbis-7b-v0.1/resolve/main/irbis.jpg" width="800"/>
Irbis-7B-Instruct is a LoRA adapter for the [Irbis-7b-v0.1](https://huggingface.co/IrbisAI/Irbis-7b-v0.1) model, trained on a dataset of 200k (*question, context, answer*) examples in Kazakh. The resulting model answers simple questions well and can work with context, although there is still room for further improvement.
More details can be found in the [article](https://habr.com/ru/articles/825574/) (in Russian).
## Try it out
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
from peft import PeftModel, PeftConfig
import torch
model_name = "IrbisAI/Irbis-7b-Instruct_lora"
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
return_dict=True,
load_in_4bit=True,
torch_dtype=torch.float16,
device_map="auto")
model = PeftModel.from_pretrained(model, model_name)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
question = "Шөп неге жасыл?"
context = ""
template = f"""Сен — қазақ тілінде сөйлейтін автоматты көмекші Ирбис. Төменде тапсырма және қосымша контекст беретін енгізу келтірілген. Дұрыс жауап жаз.
### Тапсырма:
{question}
### Енгізу:
{context}
### Жауап:
"""
input_ids = tokenizer([template], return_tensors = "pt")["input_ids"].to("cuda")
generation_config = GenerationConfig(
temperature=0.6,
repetition_penalty=1.15,
)
print("Generating...")
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=2048,
pad_token_id=tokenizer.eos_token_id,
)
for s in generation_output.sequences:
print(tokenizer.decode(s)) # Жасыл шөптің түсі өсімдіктегі хлорофилл деп аталатын химиялық затқа байланысты. Хлорофилл күн сәулесін сіңіреді, содан кейін оны жасушаларға жібереді. Бұл жасушалар жарық энергиясын көмірқышқыл газын оттегімен тотықтырады, бұл процесс арқылы энергия өндіріледі.
``` |
vapegod/o16 | vapegod | "2025-01-29T07:19:38Z" | 27 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-29T07:18:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
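Until the author fills this in, below is a minimal, hypothetical sketch for loading a Llama-style causal LM with `transformers`; the model id is taken from this repo's name, while the prompt and generation settings are illustrative assumptions, not documented usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vapegod/o16")
model = AutoModelForCausalLM.from_pretrained("vapegod/o16")

prompt = "Hello, how are you?"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```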
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deep98/Paper-clustered | Deep98 | "2023-02-05T08:43:09Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-02-05T08:29:41Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Deep98/Paper-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Paper-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4183
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.8785
- Validation Loss: 0.2040
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
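As a starting point, here is a minimal question-answering sketch with the `transformers` pipeline; the checkpoint ships TensorFlow weights (hence `framework="tf"`), and the question/context pair is an illustrative assumption:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Deep98/Paper-clustered",
    framework="tf",  # this checkpoint is TensorFlow-based
)

# Illustrative inputs; replace with your own question and context.
result = qa(
    question="Which model was this fine-tuned from?",
    context="Deep98/Paper-clustered is a fine-tuned version of nandysoham16/16-clustered_aug.",
)
print(result["answer"], result["score"])
```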
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4183 | 0.8611 | 0.8785 | 0.2040 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hilmansw/indobert-finetuned-aspect-happiness-index | hilmansw | "2023-09-13T10:11:48Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-13T09:28:19Z" | ---
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: indobert-finetuned-aspect-happiness-index
results: []
pipeline_tag: text-classification
language:
- id
widget:
- text: Aku senang kuliah di Undip
example_title: Aspect Detection
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-finetuned-aspect-happiness-index
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an own private dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1476
- Accuracy: 0.9732
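A minimal inference sketch with the `transformers` text-classification pipeline; the example sentence is this card's own widget text, everything else is a generic assumption:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hilmansw/indobert-finetuned-aspect-happiness-index",
)

# Widget example from this card ("I am happy studying at Undip")
print(classifier("Aku senang kuliah di Undip"))
```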
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 270 | 0.1291 | 0.9648 |
| 0.301 | 2.0 | 540 | 0.1708 | 0.9593 |
| 0.301 | 3.0 | 810 | 0.1350 | 0.9685 |
| 0.0655 | 4.0 | 1080 | 0.1734 | 0.9648 |
| 0.0655 | 5.0 | 1350 | 0.1323 | 0.9713 |
| 0.023 | 6.0 | 1620 | 0.1551 | 0.9676 |
| 0.023 | 7.0 | 1890 | 0.1558 | 0.9704 |
| 0.0137 | 8.0 | 2160 | 0.1531 | 0.9732 |
| 0.0137 | 9.0 | 2430 | 0.1493 | 0.9722 |
| 0.0056 | 10.0 | 2700 | 0.1476 | 0.9732 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
elena-soare/bat-pre-trained | elena-soare | "2022-03-21T22:23:37Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-21T21:28:30Z" | # Text2SQL Task T5-Base + E-commerce pre-training
This is our T5 model pre-trained on 18k e-commerce pages from popular blogs and fine-tuned on Spider using a schema serialization.
## Running the model
Our approach is inspired by the work done in [Picard](https://github.com/ElementAI/picard/), adding a pre-training step for better performance on e-commerce data. Inputs are serialized as:
```text
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ...
```
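A minimal inference sketch, assuming the standard `transformers` seq2seq API; the example question and schema are illustrative, and real inputs should follow the serialization format above (including content values in parentheses where available):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("elena-soare/bat-pre-trained")
model = AutoModelForSeq2SeqLM.from_pretrained("elena-soare/bat-pre-trained")

# Illustrative serialization: [question] | [db_id] | [table] : [column] ...
serialized = (
    "what is the average price per category? | ecommerce | "
    "products : name , price , category | categories : name"
)

inputs = tokenizer(serialized, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```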
|
aaditya/Whisper_e8eae673-8dea-4ce6-b9ac-7541bbcff1c8 | aaditya | "2023-11-02T13:45:59Z" | 0 | 0 | null | [
"generated_from_trainer",
"hi",
"base_model:openai/whisper-large",
"base_model:finetune:openai/whisper-large",
"license:apache-2.0",
"region:us"
] | null | "2023-11-02T13:41:23Z" | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-large
tags:
- generated_from_trainer
model-index:
- name: Whisper_e8eae673-8dea-4ce6-b9ac-7541bbcff1c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_e8eae673-8dea-4ce6-b9ac-7541bbcff1c8
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0833
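A minimal inference sketch, assuming the standard `transformers` ASR pipeline; `sample.wav` is a placeholder path:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aaditya/Whisper_e8eae673-8dea-4ce6-b9ac-7541bbcff1c8",
)

# Placeholder audio file; any mono 16 kHz recording works.
print(asr("sample.wav")["text"])
```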
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3764 | 10.0 | 10 | 2.3759 |
| 0.7583 | 20.0 | 20 | 1.4695 |
| 8.3335 | 30.0 | 30 | 6.5259 |
| 6.1162 | 40.0 | 40 | 3.0373 |
| 0.468 | 50.0 | 50 | 2.0833 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
lxyuan/span-marker-bert-base-multilingual-uncased-multinerd | lxyuan | "2023-12-21T02:04:40Z" | 57 | 16 | span-marker | [
"span-marker",
"pytorch",
"generated_from_trainer",
"ner",
"named-entity-recognition",
"token-classification",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ru",
"zh",
"dataset:Babelscape/multinerd",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | token-classification | "2023-08-14T09:34:03Z" | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- zh
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- ner
- named-entity-recognition
- span-marker
datasets:
- Babelscape/multinerd
metrics:
- precision
- recall
- f1
pipeline_tag: token-classification
widget:
- text: amelia earthart flog mit ihrer einmotorigen lockheed vega 5b über den atlantik
nach paris.
example_title: German
- text: amelia earhart flew her single engine lockheed vega 5b across the atlantic
to paris.
example_title: English
- text: amelia earthart voló su lockheed vega 5b monomotor a través del océano atlántico
hasta parís.
example_title: Spanish
- text: amelia earthart a fait voler son monomoteur lockheed vega 5b à travers l'ocean
atlantique jusqu'à paris.
example_title: French
- text: amelia earhart ha volato con il suo monomotore lockheed vega 5b attraverso
l'atlantico fino a parigi.
example_title: Italian
- text: amelia earthart vloog met haar één-motorige lockheed vega 5b over de atlantische
oceaan naar parijs.
example_title: Dutch
- text: amelia earthart przeleciała swoim jednosilnikowym samolotem lockheed vega
5b przez ocean atlantycki do paryża.
example_title: Polish
- text: amelia earhart voou em seu monomotor lockheed vega 5b através do atlântico
para paris.
example_title: Portuguese
- text: амелия эртхарт перелетела на своем одномоторном самолете lockheed vega 5b
через атлантический океан в париж.
example_title: Russian
- text: amelia earthart flaug eins hreyfils lockheed vega 5b yfir atlantshafið til
parísar.
example_title: Icelandic
- text: η amelia earthart πέταξε το μονοκινητήριο lockheed vega 5b της πέρα από
τον ατλαντικό ωκεανό στο παρίσι.
example_title: Greek
- text: amelia earhartová přeletěla se svým jednomotorovým lockheed vega 5b přes atlantik
do paříže.
example_title: Czech
- text: amelia earhart lensi yksimoottorisella lockheed vega 5b:llä atlantin yli pariisiin.
example_title: Finnish
- text: amelia earhart fløj med sin enmotoriske lockheed vega 5b over atlanten til
paris.
example_title: Danish
- text: amelia earhart flög sin enmotoriga lockheed vega 5b över atlanten till paris.
example_title: Swedish
- text: amelia earhart fløy sin enmotoriske lockheed vega 5b over atlanterhavet til
paris.
example_title: Norwegian
- text: amelia earhart și-a zburat cu un singur motor lockheed vega 5b peste atlantic
până la paris.
example_title: Romanian
- text: amelia earhart menerbangkan mesin tunggal lockheed vega 5b melintasi atlantik
ke paris.
example_title: Indonesian
- text: амелія эрхарт пераляцела на сваім аднаматорным lockheed vega 5b праз атлантыку
ў парыж.
example_title: Belarusian
- text: амелія ергарт перелетіла на своєму одномоторному літаку lockheed vega 5b через
атлантику до парижа.
example_title: Ukrainian
- text: amelia earhart preletjela je svojim jednomotornim zrakoplovom lockheed vega
5b preko atlantika do pariza.
example_title: Croatian
- text: amelia earhart lendas oma ühemootoriga lockheed vega 5b üle atlandi ookeani
pariisi.
example_title: Estonian
base_model: bert-base-multilingual-uncased
model-index:
- name: span-marker-bert-base-multilingual-uncased-multinerd
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: MultiNERD
type: Babelscape/multinerd
split: test
revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
metrics:
- type: f1
value: 0.9187
name: F1
- type: precision
value: 0.9202
name: Precision
- type: recall
value: 0.9172
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# span-marker-bert-base-multilingual-uncased-multinerd
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
Is your data always capitalized correctly? Then consider using the cased variant of this model instead for better performance:
[lxyuan/span-marker-bert-base-multilingual-cased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd).
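A minimal inference sketch, following the `span_marker` usage shown on other SpanMarker cards; the example sentence is this card's English widget text:

```python
from span_marker import SpanMarkerModel

model = SpanMarkerModel.from_pretrained(
    "lxyuan/span-marker-bert-base-multilingual-uncased-multinerd"
)
entities = model.predict(
    "amelia earhart flew her single engine lockheed vega 5b across the atlantic to paris."
)
print(entities)
```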
This model achieves the following results on the evaluation set:
- Loss: 0.0054
- Overall Precision: 0.9275
- Overall Recall: 0.9147
- Overall F1: 0.9210
- Overall Accuracy: 0.9842
Test set results:
- test_loss: 0.0058621917851269245
- test_overall_accuracy: 0.9831472809849865
- test_overall_f1: 0.9187844693592546
- test_overall_precision: 0.9202802342397876
- test_overall_recall: 0.9172935588307115
- test_runtime: 2716.7472
- test_samples_per_second: 149.141
- test_steps_per_second: 4.661
Note:
This is a replication of Tom's work. In this work, we used slightly different hyperparameters: `epochs=3` and `gradient_accumulation_steps=2`.
We also switched to the uncased [bert model](https://huggingface.co/bert-base-multilingual-uncased) to see if an uncased encoder model would perform better for commonly lowercased entities, such as food names. Please check the discussion [here](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd/discussions/1).
Refer to the official [model page](https://huggingface.co/tomaarsen/span-marker-mbert-base-multinerd) to review their results and training script.
## Results:
| **Language** | **Precision** | **Recall** | **F1** |
|--------------|---------------|------------|-----------|
| **all** | 92.03 | 91.73 | **91.88** |
| **de** | 94.96 | 94.87 | **94.91** |
| **en** | 93.69 | 93.75 | **93.72** |
| **es** | 91.19 | 90.69 | **90.94** |
| **fr** | 91.36 | 90.74 | **91.05** |
| **it** | 90.51 | 92.57 | **91.53** |
| **nl** | 93.23 | 92.13 | **92.67** |
| **pl** | 92.17 | 91.59 | **91.88** |
| **pt** | 92.70 | 91.59 | **92.14** |
| **ru** | 92.31 | 92.36 | **92.34** |
| **zh** | 88.91 | 87.53 | **88.22** |
Below is a combined table that compares the results of the cased and uncased models for each language:
| **Language** | **Metric** | **Cased** | **Uncased** |
|--------------|--------------|-----------|-------------|
| **all** | Precision | 92.42 | 92.03 |
| | Recall | 92.81 | 91.73 |
| | F1 | **92.61** | 91.88 |
| **de** | Precision | 95.03 | 94.96 |
| | Recall | 95.07 | 94.87 |
| | F1 | **95.05** | 94.91 |
| **en** | Precision | 95.00 | 93.69 |
| | Recall | 95.40 | 93.75 |
| | F1 | **95.20** | 93.72 |
| **es** | Precision | 92.05 | 91.19 |
| | Recall | 91.37 | 90.69 |
| | F1 | **91.71** | 90.94 |
| **fr** | Precision | 92.37 | 91.36 |
| | Recall | 91.41 | 90.74 |
| | F1 | **91.89** | 91.05 |
| **it** | Precision | 91.45 | 90.51 |
| | Recall | 93.15 | 92.57 |
| | F1 | **92.29** | 91.53 |
| **nl** | Precision | 93.85 | 93.23 |
| | Recall | 92.98 | 92.13 |
| | F1 | **93.41** | 92.67 |
| **pl** | Precision | 93.13 | 92.17 |
| | Recall | 92.66 | 91.59 |
| | F1 | **92.89** | 91.88 |
| **pt** | Precision | 93.60 | 92.70 |
| | Recall | 92.50 | 91.59 |
| | F1 | **93.05** | 92.14 |
| **ru** | Precision | 93.25 | 92.31 |
| | Recall | 93.32 | 92.36 |
| | F1 | **93.29** | 92.34 |
| **zh** | Precision | 89.47 | 88.91 |
| | Recall | 88.40 | 87.53 |
| | F1 | **88.93** | 88.22 |
Short discussion:
Upon examining the results, one might conclude that the cased version of the model is better than the uncased version,
as it outperforms the latter across all languages. However, I recommend that users test both models on their specific
datasets (or domains) to determine which one actually delivers better performance. My reasoning for this suggestion
stems from a brief comparison I conducted on the FOOD (food) entities. I found that both cased and uncased models are
sensitive to the full stop punctuation mark. We direct readers to the section: Quick Comparison on FOOD Entities.
## Label set
| Class | Description | Examples |
|-------|-------------|----------|
| **PER (person)** | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
| **ORG (organization)** | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
| **LOC (location)** | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
| **ANIM (animal)** | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
| **BIO (biological)** | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
| **CEL (celestial)** | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
| **DIS (disease)** | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
| **EVE (event)** | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
| **FOOD (food)** | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
| **INST (instrument)** | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
| **MEDIA (media)** | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
| **PLANT (plant)** | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
| **MYTH (mythological)** | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
| **TIME (time)** | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
| **VEHI (vehicle)** | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |
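For quick post-processing, here is an illustrative helper (not part of `span_marker`; it assumes the prediction dicts shown in the Inference Example below) that keeps only one entity class:
```python
# Illustrative helper: filter predictions down to a single label, e.g. FOOD.
def filter_entities(entities, label="FOOD", min_score=0.5):
    return [e for e in entities if e["label"] == label and e["score"] >= min_score]
```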
## Inference Example
```python
# install span_marker first: pip install span_marker
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
entities = model.predict(description)
entities
>>>
[
{'span': 'Singapore', 'label': 'LOC', 'score': 0.9999247789382935, 'char_start_index': 0, 'char_end_index': 9},
{'span': 'laksa', 'label': 'FOOD', 'score': 0.794235348701477, 'char_start_index': 93, 'char_end_index': 98},
{'span': 'Malaysia', 'label': 'LOC', 'score': 0.9999157190322876, 'char_start_index': 106, 'char_end_index': 114}
]
# missed: Hainanese chicken rice as FOOD
# missed: nasi lemak as FOOD
# missed: rendang as FOOD
# note: Unfortunately, this uncased version still fails to pick up those commonly lowercased food entities and even misses out on the capitalized `Hainanese chicken rice` entity.
```
### Quick test on Chinese
```python
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
# the same description, translated to Chinese
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
zh_description = "新加坡因其小贩中心提供海南鸡饭和叻沙等菜肴而闻名, 而马来西亚则拥有椰浆饭和仁当等菜肴,反映了其丰富的烹饪传统."
entities = model.predict(zh_description)
entities
>>>
[
{'span': '新加坡', 'label': 'LOC', 'score': 0.8477746248245239, 'char_start_index': 0, 'char_end_index': 3},
{'span': '马来西亚', 'label': 'LOC', 'score': 0.7525337934494019, 'char_start_index': 27, 'char_end_index': 31}
]
# It only managed to capture two countries: Singapore and Malaysia.
# All other entities were missed out.
# Same prediction as the [cased model](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd)
```
### Quick Comparison on FOOD Entities
In this quick comparison, we found that a full stop punctuation mark seems to help the uncased model identify food entities,
regardless of whether they are capitalized or in uppercase. In contrast, the cased model doesn't respond well to full stops,
and adding them would lower the prediction score.
```python
from span_marker import SpanMarkerModel
cased_model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")
uncased_model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
# no full stop mark
uncased_model.predict("i love fried chicken and korea bbq")
>>> []
uncased_model.predict("i love fried chicken and korea BBQ") # Uppercase BBQ only
>>> []
uncased_model.predict("i love fried chicken and Korea BBQ") # Capitalize korea and uppercase BBQ
>>> []
# add a full stop to get better results
uncased_model.predict("i love fried chicken and korea bbq.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'korea bbq', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25,'char_end_index': 34}
]
uncased_model.predict("i love fried chicken and korea BBQ.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25, 'char_end_index': 34}
]
uncased_model.predict("i love fried chicken and Korea BBQ.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25, 'char_end_index': 34}
]
# no full stop mark
cased_model.predict("i love fried chicken and korea bbq")
>>> [
{'span': 'korea bbq', 'label': 'FOOD', 'score': 0.5054221749305725, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and korea BBQ")
>>> [
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.6987857222557068, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and Korea BBQ")
>>> [
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.9755308032035828, 'char_start_index': 25, 'char_end_index': 34}
]
# adding a full stop hurts the cased model's prediction score a little
cased_model.predict("i love fried chicken and korea bbq.")
>>> []
cased_model.predict("i love fried chicken and korea BBQ.")
>>> [
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.5078140497207642, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and Korea BBQ.")
>>> [
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.895089328289032, 'char_start_index': 25, 'char_end_index': 34}
]
```
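Given this sensitivity, one simple workaround (an illustrative sketch, not part of the model or library) is to make sure the input ends with a full stop before predicting with the uncased model:
```python
# Illustrative workaround: append a sentence-final full stop, which the
# comparison above shows helps the uncased model recall FOOD entities.
def predict_with_full_stop(model, text):
    text = text.rstrip()
    if not text.endswith("."):
        text += "."
    return model.predict(text)

predict_with_full_stop(uncased_model, "i love fried chicken and korea bbq")
```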
## Training procedure
One can reproduce these results by running this [script](https://huggingface.co/tomaarsen/span-marker-mbert-base-multinerd/blob/main/train.py)
### Training hyperparameters
The following hyperparameters were used during training (a minimal training sketch using these settings follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
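A minimal training sketch with the settings above (assumptions: the IOB2 label list below must match the dataset's integer tag order, and this is not the exact training script linked earlier):
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from transformers import TrainingArguments

dataset = load_dataset("Babelscape/multinerd")

# Assumed IOB2 scheme over the 15 entity types from the label set table;
# verify the order against the dataset's tag mapping before training.
entity_types = ["PER", "ORG", "LOC", "ANIM", "BIO", "CEL", "DIS", "EVE",
                "FOOD", "INST", "MEDIA", "PLANT", "MYTH", "TIME", "VEHI"]
labels = ["O"] + [f"{prefix}-{t}" for t in entity_types for prefix in ("B", "I")]

model = SpanMarkerModel.from_pretrained("bert-base-multilingual-uncased", labels=labels)

args = TrainingArguments(
    output_dir="span-marker-mbert-uncased-multinerd",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["validation"])
trainer.train()
```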
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0157 | 1.0 | 50369 | 0.0048 | 0.9143 | 0.8986 | 0.9064 | 0.9807 |
| 0.003 | 2.0 | 100738 | 0.0047 | 0.9237 | 0.9126 | 0.9181 | 0.9835 |
| 0.0017 | 3.0 | 151107 | 0.0054 | 0.9275 | 0.9147 | 0.9210 | 0.9842 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3 |
jgalego/a2c-PandaReachDense-v2-test | jgalego | "2023-03-25T02:07:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-25T02:05:26Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.98 +/- 0.72
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal usage sketch; the checkpoint filename follows the usual SB3 hub naming convention and is an assumption, so check the repo's files if it differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename (usual convention: <algo>-<env>.zip)
checkpoint = load_from_hub("jgalego/a2c-PandaReachDense-v2-test", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
candrews1971/a2c-PandaPickAndPlace-v3 | candrews1971 | "2024-06-11T18:54:03Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-11T18:50:34Z" | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal usage sketch; the checkpoint filename follows the usual SB3 hub naming convention and is an assumption, so check the repo's files if it differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename (usual convention: <algo>-<env>.zip)
checkpoint = load_from_hub("candrews1971/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
wx44wx/toymodel | wx44wx | "2023-03-08T14:19:41Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2023-03-08T14:19:03Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuanna08go/02779a1a-c446-44a8-9721-a5b3975a5b7a | tuanna08go | "2025-01-15T18:30:27Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2025-01-15T18:18:38Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02779a1a-c446-44a8-9721-a5b3975a5b7a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3de9806d564c55a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3de9806d564c55a0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/02779a1a-c446-44a8-9721-a5b3975a5b7a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3de9806d564c55a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b677cf3d-7249-417f-b1ba-cc3912cf9320
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b677cf3d-7249-417f-b1ba-cc3912cf9320
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 02779a1a-c446-44a8-9721-a5b3975a5b7a
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3209
## Model description
More information needed
## Intended uses & limitations
More information needed
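In the absence of official usage notes, here is a minimal loading sketch (assumptions: standard PEFT adapter loading on top of the base model; adjust `device_map` and dtype to your hardware):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "tuanna08go/02779a1a-c446-44a8-9721-a5b3975a5b7a")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.1-Storm-8B")
```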
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 5.8282 |
| 5.8208 | 0.0145 | 10 | 5.6493 |
| 5.9139 | 0.0290 | 20 | 5.4408 |
| 5.3175 | 0.0436 | 30 | 5.3621 |
| 5.488 | 0.0581 | 40 | 5.3283 |
| 5.4232 | 0.0726 | 50 | 5.3209 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf | RichardErkhov | "2025-03-29T10:05:13Z" | 0 | 0 | null | [
"gguf",
"arxiv:2203.05482",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-29T08:59:18Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mergekit-linear-rfxmzdf - GGUF
- Model creator: https://huggingface.co/Hjgugugjhuhjggg/
- Original model: https://huggingface.co/Hjgugugjhuhjggg/mergekit-linear-rfxmzdf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mergekit-linear-rfxmzdf.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q2_K.gguf) | Q2_K | 1.39GB |
| [mergekit-linear-rfxmzdf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [mergekit-linear-rfxmzdf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [mergekit-linear-rfxmzdf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [mergekit-linear-rfxmzdf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [mergekit-linear-rfxmzdf.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q3_K.gguf) | Q3_K | 1.73GB |
| [mergekit-linear-rfxmzdf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [mergekit-linear-rfxmzdf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [mergekit-linear-rfxmzdf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [mergekit-linear-rfxmzdf.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q4_0.gguf) | Q4_0 | 1.99GB |
| [mergekit-linear-rfxmzdf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [mergekit-linear-rfxmzdf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [mergekit-linear-rfxmzdf.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q4_K.gguf) | Q4_K | 2.09GB |
| [mergekit-linear-rfxmzdf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [mergekit-linear-rfxmzdf.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q4_1.gguf) | Q4_1 | 2.18GB |
| [mergekit-linear-rfxmzdf.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q5_0.gguf) | Q5_0 | 2.37GB |
| [mergekit-linear-rfxmzdf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [mergekit-linear-rfxmzdf.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q5_K.gguf) | Q5_K | 2.41GB |
| [mergekit-linear-rfxmzdf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [mergekit-linear-rfxmzdf.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q5_1.gguf) | Q5_1 | 2.55GB |
| [mergekit-linear-rfxmzdf.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q6_K.gguf) | Q6_K | 2.76GB |
| [mergekit-linear-rfxmzdf.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hjgugugjhuhjggg_-_mergekit-linear-rfxmzdf-gguf/blob/main/mergekit-linear-rfxmzdf.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model:
- Hjgugugjhuhjggg/mergekit-ties-xflmond
- Hjgugugjhuhjggg/mergekit-ties-kmlzhzo
- Hjgugugjhuhjggg/mergekit-ties-pghuyfi
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- Hjgugugjhuhjggg/mergekit-ties-qgcitfu
- Hjgugugjhuhjggg/mergekit-ties-poovzrh
- Hjgugugjhuhjggg/mergekit-ties-dkhnzcn
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base.
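Conceptually, a linear merge is a weighted average of the models' parameter tensors. The sketch below illustrates the idea only; it is not mergekit's implementation:
```python
import torch

def linear_merge(state_dicts, weights):
    # Weighted average of matching tensors; weights are normalized to sum to 1.
    total = sum(weights)
    return {key: sum((w / total) * sd[key] for w, sd in zip(weights, state_dicts))
            for key in state_dicts[0]}
```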
### Models Merged
The following models were included in the merge:
* [Hjgugugjhuhjggg/mergekit-ties-xflmond](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-xflmond)
* [Hjgugugjhuhjggg/mergekit-ties-kmlzhzo](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-kmlzhzo)
* [Hjgugugjhuhjggg/mergekit-ties-pghuyfi](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-pghuyfi)
* [Hjgugugjhuhjggg/mergekit-ties-qgcitfu](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-qgcitfu)
* [Hjgugugjhuhjggg/mergekit-ties-poovzrh](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-poovzrh)
* [Hjgugugjhuhjggg/mergekit-ties-dkhnzcn](https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-dkhnzcn)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-qgcitfu
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-dkhnzcn
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-poovzrh
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-pghuyfi
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-kmlzhzo
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
- layer_range: [0, 28]
model: Hjgugugjhuhjggg/mergekit-ties-xflmond
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
merge_method: linear
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
dtype: float16
parameters:
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
method: int8
value: 100
quantization:
method: int4
value: 100
```
|
RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf | RichardErkhov | "2024-08-01T13:35:00Z" | 19 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-08-01T05:46:42Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SevereNeuralBeagleTrix-7B - GGUF
- Model creator: https://huggingface.co/CultriX/
- Original model: https://huggingface.co/CultriX/SevereNeuralBeagleTrix-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SevereNeuralBeagleTrix-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [SevereNeuralBeagleTrix-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [SevereNeuralBeagleTrix-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [SevereNeuralBeagleTrix-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [SevereNeuralBeagleTrix-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [SevereNeuralBeagleTrix-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [SevereNeuralBeagleTrix-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [SevereNeuralBeagleTrix-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [SevereNeuralBeagleTrix-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [SevereNeuralBeagleTrix-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [SevereNeuralBeagleTrix-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [SevereNeuralBeagleTrix-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [SevereNeuralBeagleTrix-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [SevereNeuralBeagleTrix-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [SevereNeuralBeagleTrix-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [SevereNeuralBeagleTrix-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [SevereNeuralBeagleTrix-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [SevereNeuralBeagleTrix-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [SevereNeuralBeagleTrix-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [SevereNeuralBeagleTrix-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [SevereNeuralBeagleTrix-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [SevereNeuralBeagleTrix-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_SevereNeuralBeagleTrix-7B-gguf/blob/main/SevereNeuralBeagleTrix-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- PetroGPT/WestSeverus-7B-DPO
- CultriX/MergeTrix-7B-v2
- mlabonne/NeuralBeagle14-7B
base_model:
- PetroGPT/WestSeverus-7B-DPO
- CultriX/MergeTrix-7B-v2
- mlabonne/NeuralBeagle14-7B
license: apache-2.0
---
# EDIT:
Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
# SevereNeuralBeagleTrix-7B
SevereNeuralBeagleTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [CultriX/MergeTrix-7B-v2](https://huggingface.co/CultriX/MergeTrix-7B-v2)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: PetroGPT/WestSeverus-7B-DPO
parameters:
density: 0.53
weight: 0.3
- model: CultriX/MergeTrix-7B-v2
parameters:
density: 0.53
weight: 0.4
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/SevereNeuralBeagleTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF | mradermacher | "2025-03-25T04:15:11Z" | 245 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AR-Lab/EQuIP_3B",
"base_model:quantized:AR-Lab/EQuIP_3B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-20T21:38:15Z" | ---
base_model: AR-Lab/EQuIP_3B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AR-Lab/EQuIP_3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
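As an illustration, here is one way to load a quant from this repo with llama-cpp-python (an assumption: any GGUF runtime works; the filename matches the Q4_K_M row in the table below):
```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF",
    filename="ES-QWEN-DISTILL-3.5B-bf16.Q4_K_M.gguf",
)
print(llm("Hello, world", max_tokens=32)["choices"][0]["text"])
```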
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ES-QWEN-DISTILL-3.5B-bf16-GGUF/resolve/main/ES-QWEN-DISTILL-3.5B-bf16.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
swj0419/booksum_STEP0003000 | swj0419 | "2024-04-23T00:16:24Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-23T00:10:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shritama/Gemma_text-to-json_2 | Shritama | "2024-05-02T18:57:50Z" | 118 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-02T18:54:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AnonymousOrca/parser_added_checkpoint | AnonymousOrca | "2025-01-21T16:35:19Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-21T16:02:31Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dheeraj1019/textclassfication | dheeraj1019 | "2024-03-18T09:12:45Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"text-classification",
"dataset:HuggingFaceTB/cosmopedia",
"license:afl-3.0",
"region:us"
] | text-classification | "2024-03-18T08:52:06Z" | ---
license: afl-3.0
datasets:
- HuggingFaceTB/cosmopedia
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- code
---
```python
# Install the necessary libraries (notebook-style)
!pip install transformers
!pip install torch

import torch
from torch.utils.data import DataLoader
from transformers import RobertaTokenizer, RobertaForSequenceClassification, XLNetTokenizer, XLNetForSequenceClassification
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Example dataset for text classification (replace with your own dataset)
texts = [...]  # List of input texts
labels = [...]  # List of corresponding labels (0 or 1 for binary classification)

# Split the dataset into training and testing sets
train_texts, test_texts, train_labels, test_labels = train_test_split(texts, labels, test_size=0.2, random_state=42)

# Define the tokenizer and model for RoBERTa
roberta_tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
roberta_model = RobertaForSequenceClassification.from_pretrained("roberta-base")

# Define the tokenizer and model for XLNet
xlnet_tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
xlnet_model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased")

# Tokenize and encode the training and testing sets
train_encodings_roberta = roberta_tokenizer(train_texts, truncation=True, padding=True)
test_encodings_roberta = roberta_tokenizer(test_texts, truncation=True, padding=True)

train_encodings_xlnet = xlnet_tokenizer(train_texts, truncation=True, padding=True)
test_encodings_xlnet = xlnet_tokenizer(test_texts, truncation=True, padding=True)


class MyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)


train_dataset_roberta = MyDataset(train_encodings_roberta, train_labels)
test_dataset_roberta = MyDataset(test_encodings_roberta, test_labels)

train_dataset_xlnet = MyDataset(train_encodings_xlnet, train_labels)
test_dataset_xlnet = MyDataset(test_encodings_xlnet, test_labels)

# Fine-tune the RoBERTa model
training_args = TrainingArguments(
    output_dir='./results',  # required by TrainingArguments
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    logging_dir='./logs',
    logging_steps=10,
)

trainer_roberta = Trainer(
    model=roberta_model,
    args=training_args,
    train_dataset=train_dataset_roberta,
    eval_dataset=test_dataset_roberta,
)
trainer_roberta.train()

# Fine-tune the XLNet model
trainer_xlnet = Trainer(
    model=xlnet_model,
    args=training_args,
    train_dataset=train_dataset_xlnet,
    eval_dataset=test_dataset_xlnet,
)
trainer_xlnet.train()


# Evaluate a model by batching the test set with a DataLoader
def evaluate_model(model, test_dataset, batch_size=8):
    model.eval()
    loader = DataLoader(test_dataset, batch_size=batch_size)
    predictions = []
    labels = []
    for batch in loader:
        input_ids = batch['input_ids'].to(model.device)
        attention_mask = batch['attention_mask'].to(model.device)
        labels.extend(batch['labels'].tolist())
        with torch.no_grad():
            outputs = model(input_ids, attention_mask=attention_mask)
        predictions.extend(torch.argmax(outputs.logits, dim=1).tolist())
    accuracy = accuracy_score(labels, predictions)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average='binary')
    return accuracy, precision, recall, f1


accuracy_roberta, precision_roberta, recall_roberta, f1_roberta = evaluate_model(roberta_model, test_dataset_roberta)
accuracy_xlnet, precision_xlnet, recall_xlnet, f1_xlnet = evaluate_model(xlnet_model, test_dataset_xlnet)

print("RoBERTa Model Evaluation:")
print(f"Accuracy: {accuracy_roberta}")
print(f"Precision: {precision_roberta}")
print(f"Recall: {recall_roberta}")
print(f"F1 Score: {f1_roberta}")

print("\nXLNet Model Evaluation:")
print(f"Accuracy: {accuracy_xlnet}")
print(f"Precision: {precision_xlnet}")
print(f"Recall: {recall_xlnet}")
print(f"F1 Score: {f1_xlnet}")
```
 |
Best000/017d3e03-da8f-4bf7-89cd-780bd14237e4 | Best000 | "2025-01-22T10:10:38Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T10:06:49Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 017d3e03-da8f-4bf7-89cd-780bd14237e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 055427b4ebea9cb1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/055427b4ebea9cb1_train_data.json
type:
field_instruction: instruction
field_output: paragraph
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/017d3e03-da8f-4bf7-89cd-780bd14237e4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/055427b4ebea9cb1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 09f3f589-640b-4285-a839-60399f061c31
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 09f3f589-640b-4285-a839-60399f061c31
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 017d3e03-da8f-4bf7-89cd-780bd14237e4
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.255 | 0.0004 | 1 | 2.2220 |
| 2.6814 | 0.0011 | 3 | 2.2139 |
| 2.0871 | 0.0022 | 6 | 2.1563 |
| 1.5132 | 0.0033 | 9 | 2.0754 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hkab/vietnamese-asr-model | hkab | "2025-03-01T15:34:21Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-02-28T10:30:37Z" | ---
license: mit
---
# Vietnamese ASR model
This repository will contain the weights of many Vietnamese ASR models (not all of them are available yet).
samoline/e5f610a2-dfee-4032-b41e-05b67611b521 | samoline | "2025-03-21T23:05:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"region:us"
] | null | "2025-03-21T22:59:45Z" | ---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5f610a2-dfee-4032-b41e-05b67611b521
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 75319a41ec9025fa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/75319a41ec9025fa_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/e5f610a2-dfee-4032-b41e-05b67611b521
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/75319a41ec9025fa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 9fd1661b-ccc2-4b21-a8d8-95605857f2f3
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 9fd1661b-ccc2-4b21-a8d8-95605857f2f3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e5f610a2-dfee-4032-b41e-05b67611b521
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.1285 | 0.0000 | 1 | 7.5944 |
| 7.0732 | 0.0000 | 2 | 7.5944 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/Local-Novel-LLM-project-Vecteus-V2-7B-QUANTO-float8bit-smashed | PrunaAI | "2024-08-15T09:23:20Z" | 5 | 0 | null | [
"pruna-ai",
"base_model:Local-Novel-LLM-project/Vecteus-V2-7B",
"base_model:finetune:Local-Novel-LLM-project/Vecteus-V2-7B",
"region:us"
] | null | "2024-08-15T09:16:23Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Local-Novel-LLM-project/Vecteus-V2-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Local-Novel-LLM-project/Vecteus-V2-7B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Local-Novel-LLM-project-Vecteus-V2-7B-QUANTO-float8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Local-Novel-LLM-project/Vecteus-V2-7B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Local-Novel-LLM-project/Vecteus-V2-7B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
FareedKhan/flax-sentence-embeddings_all_datasets_v4_MiniLM-L6_FareedKhan_prime_synthetic_data_2k_10_64 | FareedKhan | "2024-09-30T11:22:13Z" | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1814",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:flax-sentence-embeddings/all_datasets_v4_MiniLM-L6",
"base_model:finetune:flax-sentence-embeddings/all_datasets_v4_MiniLM-L6",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-09-30T11:22:10Z" | ---
base_model: flax-sentence-embeddings/all_datasets_v4_MiniLM-L6
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1814
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '
The list you''ve provided contains a variety of medications, including antidepressants,
antihistamines, anxiolytics, and more. Here''s a breakdown by category:
### Antidepressants
- **Amphetamine**
- **Cevimeline**
- **Esmolol**
- **Bortezomib**
- **'
sentences:
- Which body parts are associated with the expression of genes or proteins that
impact the transporter responsible for the movement of Cycloserine?
- Identify genes or proteins that interact with a protein threonine kinase, participate
in the mitotic centrosome proteins and complexes recruitment pathway, and engage
in protein-protein interactions with CCT2.
- Which medication is effective against simple Plasmodium falciparum infections
and functions by engaging with genes or proteins that interact with the minor
groove of DNA rich in adenine and thymine?
- source_sentence: '
RNASE6, also known by aliases such as RAD1, RNS6, and RNasek6, functions as a
member of the ribonuclease A superfamily. Specifically identified via the NCBI
gene/protein database, this protein is related to the antimicrobial peptides pathway,
showcasing broad-spectrum antimicrobial activity against pathogenic bacteria in
the urinary tract. The provided gene summary emphasizes its role in the urinary
tract, highlighting its enzymatic function and broad antimicrobial capability.
With a genomic position spanning from 20781268 to 20782467 on chromosome 14, the
RNASE6 gene encodes a protein named ribonuclease A family member k6. The protein''s
interactions with cellular and molecular functions are integral to its role, including
its interaction with molecular functions like ribonuclease activity and endonuclease
activity, as well as its involvement in nucleic acid binding.
RNASE6''s involvement in biological'
sentences:
- Identify genes or proteins linked to encephalopathy that are involved in the Antimicrobial
peptides pathway and have interactions with molecular functions associated with
ribonuclease activity.
- Identify genes or proteins that exhibit interaction with COMMD1 and share an associated
phenotype or effect.
- What medical conditions are associated with severe combined immunodeficiency and
also cause muscle pain and weakness?
- source_sentence: '
The gene in question is likely involved in multiple biological processes, including:
1. **Transmembrane transport**: It facilitates the entry of substances into or
out of a cell through the cell membrane, which is crucial for maintaining cellular
homeostasis and responding to environmental stimuli. This includes organic anion
and carboxylic acid transport.
2. **ABC-family proteins mediated transport**: ABC (or ATP-binding cassette) proteins
are responsible for a variety of transport processes, such as drug efflux, nutrient
uptake, and xenobiotic detoxification.
3. **Response to drug**: It likely plays a role in how cells interact with and
respond to medication or other foreign substances they encounter. This is important
in pharmacology and toxicology.
4. **Regulation of chloride transport**: Chloride ions are crucial for maintaining
electrolyte balance and are involved in multiple physiological processes. This
gene likely helps regulate their transport in and out of the cell.
5. **Export across plasma membrane**: It is part of pathways that help in the
removal of substances from the cell, such as efflux of drug metabolites or other
waste products.
### Expression Contexts:
- **Present**: This gene is expressed in many parts of the body, indicating a
broad role. It shows presence in tissues like the islet of Langerhans (involved
in insulin regulation), zones of the skin, and various brain regions. It''s also
active in organs such as the heart, kidney, and lungs, and in the digestive tract,
including the stomach, esophagus, and intestines.
- **Absent or Reduced**: The gene''s expression is notably absent or less pronounced
in tissues like the nasal cavity epithelium, suggesting it may not play a significant
role in this specific tissue type.
The gene''s multifaceted expression and roles suggest a key function in biological
activities related to:
- **Chemical'
sentences:
- Could you supply a selection of medications used to treat acute myeloid leukemia
with minimal differentiation that have a potential side effect of arrhythmias
and work by intercalating DNA and inhibiting topoisomerase II?
- Is the ABCB1 protein responsible for the translocation of pharmaceuticals that
exhibit synergistic effects when combined with ferric ions?
- What potential conditions could I have that are associated with oophoritis and
involve ovarian complications?
- source_sentence: "\n\nThe list you provided seems to be a collection of various\
\ chemical compounds, pharmaceuticals, and their synonyms. They span across various\
\ categories:\n\n1. **Pharmaceuticals & Synthetic Drug Analogs**:\n - **Antibiotics**\
\ (Ceftazidime, Azithromycin, Ceftodipen, etc.)\n - **Analgesics** (Fentanyl,\
\ Ketorolac, etc.)\n - **Cephalosporins** (Ceftazidime, Ceftazidime-avibactam,\
\ etc.)\n - **Blood Thinners/Synthetic Anticoagulants** (Enoxaparin, Edoxaban,\
\ Rivaroxaban, etc.)\n - **Analgesic/Aspirin Analogues** (Mefenamic Acid, Indometacin,\
\ etc.)\n - **Adrenergic Agonists** (Isoprenaline, Dopamine, etc.)\n - **Antiviral\
\ Drugs** (Adefovir, Idelalisib, etc.)\n - **Antibiotic Resistance Modifiers**\
\ (Sulbactam, Tazobactam, etc.)\n - **Calcium Channel Blockers** (Verapamil,\
\ Nicardipine, etc.)\n - **Nutraceuticals/Herbal Extracts** (Ginsenoside, Phloretin,\
\ etc.)\n \n2. **Diagnostic Agents**:\n - **Radiopharmaceuticals** (F-Fluorodeoxyglucose,\
\ Ga-68 DOTATOC, etc.)\n - **MRI Contrasts** (Gadolinium chelates, etc.)\n\
\ - **CT Contrast Agents** (Iodinated contrast agents, etc.)\n \n3. **Ingredients\
\ in Drugs**:\n - **Excipients** (Hydroxypropylmethylcellulose, Lactose, etc.)\n\
\ - **Antifungal Drugs** (Itraconazole, Terconazole, etc.)\n - **Anticoagulants**\
\ (Warfarin, Heparin, etc.)\n \nThis list represents a broad spectrum of\
\ modern medicine, from antibiotics to chemicals used in diagnostic imaging techniques,\
\ and from dietary supplements to drug excipients. Each compound typically serves\
\ a specific therapeutic purpose in the human body."
sentences:
- Which investigational compound in solid form that aims at altering membrane lipids,
specifically phospholipids and glycerophospholipids, has the additional property
of interacting with genes or proteins involved in ubiquitin-specific protease
binding?
- Could you provide a list of medications that exhibit synergistic effects when
used in combination with Choline magnesium trisalicylate to treat the same condition
and that also selectively target COX-2 enzymes to alleviate inflammation?
- Identify pathways associated with the interaction between TNFs and their physiological
receptors that concurrently influence the same gene or protein.
- source_sentence: "\n\nDiarrhea, a condition characterized by the passage of loose,\
\ watery, and often more than five times a day, is a common ailment affecting\
\ individuals of all ages. It is typically acute when it lasts for a few days\
\ to a week or recurrent when it persists for more than four weeks. While acute\
\ diarrhea often resolves on its own and is usually not a cause for concern, recurrent\
\ or chronic forms require medical attention due to the risk of dehydration and\
\ nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors,\
\ including:\n\n1. **Viral"
sentences:
- Could you describe the specific effects or phenotypes associated with acute hydrops
in patients with the subtype of keratoconus?
- What is the disease associated with the CPT2 gene that causes severe fasting intolerance
leading to metabolic disturbances such as hypoketotic hypoglycemia, risking coma
and seizures, and can lead to hepatic encephalopathy and liver failure, and also
affects the heart and skeletal muscles, increasing the risk of potentially fatal
cardiac arrhythmias?
- Could you assist in identifying a condition linked to congenital secretory diarrhea,
similar to intractable diarrhea of infancy, given my symptoms of persistent, salty
watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition
due to chronic dehydration?
model-index:
- name: SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 384
type: dim_384
metrics:
- type: cosine_accuracy@1
value: 0.3613861386138614
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.38613861386138615
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.42574257425742573
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.46534653465346537
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3613861386138614
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12871287128712872
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08514851485148513
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.04653465346534653
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3613861386138614
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.38613861386138615
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.42574257425742573
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.46534653465346537
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4070317030609663
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3890519409083766
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3959688055946467
name: Cosine Map@100
---
# SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) <!-- at revision a407cc0b7d85eec9a5617eaf51dbe7b353b0c79f -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("FareedKhan/flax-sentence-embeddings_all_datasets_v4_MiniLM-L6_FareedKhan_prime_synthetic_data_2k_10_64")
# Run inference
sentences = [
'\n\nDiarrhea, a condition characterized by the passage of loose, watery, and often more than five times a day, is a common ailment affecting individuals of all ages. It is typically acute when it lasts for a few days to a week or recurrent when it persists for more than four weeks. While acute diarrhea often resolves on its own and is usually not a cause for concern, recurrent or chronic forms require medical attention due to the risk of dehydration and nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors, including:\n\n1. **Viral',
'Could you assist in identifying a condition linked to congenital secretory diarrhea, similar to intractable diarrhea of infancy, given my symptoms of persistent, salty watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition due to chronic dehydration?',
'Could you describe the specific effects or phenotypes associated with acute hydrops in patients with the subtype of keratoconus?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_384`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3614 |
| cosine_accuracy@3 | 0.3861 |
| cosine_accuracy@5 | 0.4257 |
| cosine_accuracy@10 | 0.4653 |
| cosine_precision@1 | 0.3614 |
| cosine_precision@3 | 0.1287 |
| cosine_precision@5 | 0.0851 |
| cosine_precision@10 | 0.0465 |
| cosine_recall@1 | 0.3614 |
| cosine_recall@3 | 0.3861 |
| cosine_recall@5 | 0.4257 |
| cosine_recall@10 | 0.4653 |
| cosine_ndcg@10 | 0.407 |
| cosine_mrr@10 | 0.3891 |
| **cosine_map@100** | **0.396** |
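
For reference, numbers like these can be reproduced on your own split with the same evaluator; the queries, corpus, and relevance mapping below are hypothetical placeholders:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical evaluation split; replace with your own query/corpus ids and texts.
queries = {"q1": "Which gynecological condition arises from blocked Bartholin's glands?"}
corpus = {"d1": "Bartholin duct cyst is a gynecological condition characterized by ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_384")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(results)
```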
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,814 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 118.5 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 35.53 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code><br>The list you provided appears to be a collection of various substances and medications, each with its own unique properties and uses. Here's a brief overview of each:<br><br>1. **Abacavir**<br> - Used in HIV treatment, it inhibits reverse transcriptase.<br><br>2. **Abate**<br> - Often refers to fenpyroximate, used as an insecticide.<br><br>3. **Abidaquine**<br> - An antimalarial drug used to treat and prevent malaria.<br><br>4. **Abiraterone**<br> - Used in treating prostate cancer, specifically to block the production of testosterone.<br><br>5. **Abiraterone alfa**<br> - Similar to abiraterone, used in prostate cancer treatment.<br><br>6. **Abiraterone acetate**<br> - An active form of abiraterone.<br><br>7. **Abiraterone citrate**<br> - Another form of abiraterone.<br><br>8. **Acelprozil**<br> - A medication commonly used as an anti-epileptic drug.<br><br>9. **Acenocoumarol**<br> - Used as a blood thinner, also known as a vitamin K antagonist.<br><br>10. **Acenocoumarol citrate**<br> - Same as acenocoumarol but with citrate, functioning similarly as a</code> | <code>Which pharmacological agents with antioxidant properties have the potential to disrupt the PCSK9-LDLR interaction by affecting the gene or protein players in this pathway?</code> |
| <code><br>Bartholin duct cyst is a gynecological condition characterized by the distension of Bartholin glands due to mucus accumulation within the ducts, typically resulting from an obstructed orifice. This issue, categorized under women's reproductive health, falls directly under the umbrella of both integumentary system diseases and female reproductive system diseases. Originating from the Bartholin glands, which play a pivotal role in lubrication and arousal of the vulva during intercourse, the blockage or obstruction leads to cyst formation, affecting the overall female reproductive health landscape.</code> | <code>What is the name of the gynecological condition that arises due to blocked Bartholin's glands and involves cyst formation, falling under the broader category of women's reproductive health issues?</code> |
| <code><br>Neuralgia, as defined by the MONDO ontology, refers to a pain disorder characterized by pain in the distribution of a nerve or nerves. This condition could be associated with the use of Capsaicin cream, given its known capability to alleviate symptoms by causing a temporary sensation of pain that interferes with the perception of more severe pain. Peripheral neuropathy, another symptom, is often manifest in cases where nerve damage occurs, frequently affecting multiple nerves. This condition can result in symptoms similar to sciatica, which is characterized by pain that starts in the lower back, often radiating down the leg, a common route for the sciatic nerve. The document indicates that diseases related to neuralgia include pudendal neuralgia, peripheral neuropathy, disorders involving pain, cranial neuralgia, post-infectious neuralgia, and sciatica. Furthermore, the document mentions several drugs that can be used for the purpose of managing symptoms related to neuralgia, including Lidocaine, as well as a wide array of off-label uses for treatments like Phenytoin, Morphine, Amitriptyline, Imipramine, Oxycodone, Nortriptyline, Lamotrigine, Maprotiline, Desipramine, Gabapentin, Carbamazepine, Phenobarbital, Tramadol, Venlafaxine, Trimipramine, Desvenlafaxine, Primidone, and Naltrexone.</code> | <code>What condition could be associated with the use of Capsaicin cream, peripheral neuropathy, and symptoms similar to sciatica?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384
],
"matryoshka_weights": [
1
],
"n_dims_per_step": -1
}
```
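
In code, this configuration corresponds roughly to the following construction (a sketch; the variable names are assumptions):

```python
from sentence_transformers import losses

# MultipleNegativesRankingLoss wrapped so it is applied at the listed dimensionality.
base_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[384],
    matryoshka_weights=[1],
    n_dims_per_step=-1,
)
```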
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `learning_rate`: 1e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_384_cosine_map@100 |
|:-------:|:-------:|:-------------:|:----------------------:|
| 0 | 0 | - | 0.3614 |
| 0.3448 | 10 | 2.117 | - |
| 0.6897 | 20 | 2.1255 | - |
| 1.0 | 29 | - | 0.3855 |
| 1.0345 | 30 | 1.9375 | - |
| 1.3793 | 40 | 1.7987 | - |
| 1.7241 | 50 | 1.7494 | - |
| 2.0 | 58 | - | 0.3901 |
| 2.0690 | 60 | 1.7517 | - |
| 2.4138 | 70 | 1.676 | - |
| 2.7586 | 80 | 1.608 | - |
| 3.0 | 87 | - | 0.3934 |
| 3.1034 | 90 | 1.5923 | - |
| 3.4483 | 100 | 1.5095 | - |
| 3.7931 | 110 | 1.5735 | - |
| 4.0 | 116 | - | 0.3910 |
| 4.1379 | 120 | 1.3643 | - |
| 4.4828 | 130 | 1.4395 | - |
| 4.8276 | 140 | 1.3595 | - |
| 5.0 | 145 | - | 0.3884 |
| 5.1724 | 150 | 1.3365 | - |
| 5.5172 | 160 | 1.3506 | - |
| 5.8621 | 170 | 1.3279 | - |
| **6.0** | **174** | **-** | **0.3957** |
| 6.2069 | 180 | 1.3075 | - |
| 6.5517 | 190 | 1.3138 | - |
| 6.8966 | 200 | 1.2749 | - |
| 7.0 | 203 | - | 0.3979 |
| 7.2414 | 210 | 1.1725 | - |
| 7.5862 | 220 | 1.2696 | - |
| 7.9310 | 230 | 1.2487 | - |
| 8.0 | 232 | - | 0.3986 |
| 8.2759 | 240 | 1.1558 | - |
| 8.6207 | 250 | 1.2447 | - |
| 8.9655 | 260 | 1.2566 | - |
| 9.0 | 261 | - | 0.3964 |
| 9.3103 | 270 | 1.2493 | - |
| 9.6552 | 280 | 1.2697 | - |
| 10.0 | 290 | 1.079 | 0.3960 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.2.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
C10H/CS286_BreakHis_CNN | C10H | "2023-11-22T07:42:50Z" | 0 | 0 | null | [
"medical",
"zh",
"en",
"license:mit",
"region:us"
] | null | "2023-11-22T07:35:21Z" | ---
license: mit
language:
- zh
- en
tags:
- medical
---
# CS286_BreakHis_CNN
| Layer (type) | Output Shape | Param # |
| ------------ | ------------ | ------- |
| Conv2d-1 | [-1, 32, 222, 222] | 896 |
| ReLU-2 | [-1, 32, 222, 222] | 0 |
| MaxPool2d-3 | [-1, 32, 111, 111] | 0 |
| Conv2d-4 | [-1, 64, 111, 111] | 18,496|
| ReLU-5 | [-1, 64, 111, 111] | 0 |
| MaxPool2d-6 | [-1, 64, 55, 55] | 0 |
| Conv2d-7 | [-1, 128, 55, 55] | 73,856|
| ReLU-8 | [-1, 128, 55, 55] | 0 |
| MaxPool2d-9 | [-1, 128, 27, 27] | 0 |
| Dropout-10 | [-1, 128, 27, 27] | 0 |
| Flatten-11 | [-1, 93312] | 0 |
| Linear-12 | [-1, 128] | 11,944,064|
| ReLU-13 | [-1, 128] | 0 |
| Linear-14 | [-1, 64] | 8,256 |
| ReLU-15 | [-1, 64] | 0 |
| Linear-16 | [-1, 1] | 65 |
| **Total** | | **12,045,633** |
| **Trainable params** | | **12,045,633** |
| **Non-trainable params** | | **0** |
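
The layer summary above corresponds to the following PyTorch module (a reconstruction sketch: kernel sizes and padding are inferred from the output shapes, and the dropout probability is an assumption):

```python
import torch.nn as nn

class BreakHisCNN(nn.Module):
    """Binary classifier over 3x224x224 inputs, matching the summary above."""
    def __init__(self, p_drop=0.5):  # dropout probability is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),               # -> 32 x 222 x 222
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 32 x 111 x 111
            nn.Conv2d(32, 64, kernel_size=3, padding=1),   # -> 64 x 111 x 111
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 64 x 55 x 55
            nn.Conv2d(64, 128, kernel_size=3, padding=1),  # -> 128 x 55 x 55
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 128 x 27 x 27
            nn.Dropout(p_drop),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                  # -> 93312 features
            nn.Linear(128 * 27 * 27, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                              # single logit (use BCEWithLogitsLoss)
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```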
| Memory estimate | Size (MB) |
| --------------- | --------- |
| Input size | 0.57 |
| Forward/backward pass size | 48.63 |
| Params size | 45.95 |
| Estimated total size | 95.15 |
 |
Quintu/roberta-512-hazard-v1 | Quintu | "2025-01-18T16:43:25Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"hazard-detection",
"en",
"dataset:your-dataset-name",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-13T17:50:50Z" | ---
language: en
tags:
- text-classification
- hazard-detection
datasets:
- your-dataset-name
license: apache-2.0
model_name: Quintu/roberta-512-hazard-v1
library_name: transformers
pipeline_tag: text-classification
---
# Quintu/roberta-512-hazard-v1
The `Quintu/roberta-512-hazard-v1` model is designed for text classification related to hazard detection.

## Usage

Below is an example of how to use this model with the `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the model and tokenizer
model_name = "Quintu/roberta-512-hazard-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Use the model to classify text
text = "This is an example text to classify."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Prediction
logits = outputs.logits
print(logits)
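
# Convert the logits to a predicted class id (added for illustration)
predicted_class_id = logits.argmax(dim=-1).item()
print(predicted_class_id)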
```
 |
lmarena-ai/p2l-7b-rk-01132025 | lmarena-ai | "2025-02-25T19:20:17Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2502.14855",
"license:apache-2.0",
"region:us"
] | null | "2025-02-24T21:39:30Z" | ---
license: apache-2.0
---
# lmarena-ai/p2l-7b-rk-01132025
Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance.
To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt.
The core idea is to train an LLM taking natural language prompts as input to output a vector of coefficients which are then used to predict the human preference vote.
The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses.
Data from Chatbot Arena suggest that P2L better captures the nuanced landscape of language model performance than the averaged leaderboard.
**Paper**: [Prompt-to-Leaderboard](https://arxiv.org/abs/2502.14855)
**Code**: [lmarena/p2l](https://github.com/lmarena/p2l)
This particular P2L model has a *Rao-Kupper* regression head, which we define below:
$$
\begin{equation}
g_{\theta^*(z)}(y ; x) =
\begin{cases}
\sigma((x,-1)^\top \theta^*(z)) & y = \mathsf{B}, \\
\sigma((-x,-1)^\top \theta^*(z)) & y = \mathsf{A}, \\
1 - \sigma((-x,-1)^\top \theta^*(z)) - \sigma((x,-1)^\top \theta^*(z)) & y = \mathsf{tie}.
\end{cases}
\end{equation}
$$
More simply, given a prompt, P2L will output a vector of coefficients
$\vec{\beta}$ and a scalar $\hat{\eta}$. Then the probability that model $i$ beats model $j$ is $P(i \succ j) = \sigma(\vec{\beta}_i - \vec{\beta}_j - \eta)$, $P(j \succ i) = \sigma(\vec{\beta}_j - \vec{\beta}_i - \eta)$, and $P(i = j) = 1 - P(i \succ j) - P(j \succ i)$, where $\eta = \log(1 + e^{(\hat{\eta} - 22.5)/\beta})$.
See section 2.2 in our paper for more details on various regression heads.
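
As a concrete illustration, these probabilities can be computed from the model's outputs as follows (a sketch; `beta` is the temperature appearing in the formula above and is assumed to be known):

```python
import torch

def rao_kupper_probs(coefs: torch.Tensor, eta_hat: torch.Tensor, i: int, j: int, beta: float):
    """Win/tie probabilities for models i and j from P2L coefficients."""
    eta = torch.log1p(torch.exp((eta_hat - 22.5) / beta))  # eta = log(1 + e^((eta_hat - 22.5)/beta))
    p_i_wins = torch.sigmoid(coefs[i] - coefs[j] - eta)
    p_j_wins = torch.sigmoid(coefs[j] - coefs[i] - eta)
    p_tie = 1.0 - p_i_wins - p_j_wins
    return p_i_wins, p_j_wins, p_tie
```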
## Serving
To serve a P2L model, please see our documentation on GitHub: [Serving P2L](https://github.com/lmarena/p2l?tab=readme-ov-file#serving-p2l).
Note: the P2L model produces outputs with this structure:
```python
class P2LOutputs(ModelOutput):
coefs: torch.FloatTensor = None # "betas" as described above
    eta: Optional[torch.FloatTensor] = None # tie coefficient (not used for BT head)
last_hidden_state: torch.FloatTensor = None # last hidden state from the transformer
```
To understand which coefficient index corresponds with which model, see the [`model_list.json`](./model_list.json) found in the repo of each P2L model. As a general rule, the models will always be in sorted order.
The easiest way to get this list from inside code is with the following:
```python
import json
from huggingface_hub import hf_hub_download
fname = hf_hub_download(
repo_id="lmarena-ai/p2l-7b-rk-01132025", filename="model_list.json", repo_type="model"
)
with open(fname) as fin:
model_list = json.load(fin)
```
### Loading from Pretrained
To define and load the model:
```python
import torch
from transformers import (
Qwen2Model,
Qwen2PreTrainedModel,
LlamaModel,
LlamaPreTrainedModel,
PreTrainedModel,
AutoTokenizer,
)
from transformers import AutoTokenizer
from transformers.utils import ModelOutput
from dataclasses import dataclass
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, Tuple, Callable, Optional
from huggingface_hub import hf_hub_download
import json
@dataclass
class HeadOutputs(ModelOutput):
coefs: torch.FloatTensor = None
eta: Optional[torch.FloatTensor] = None
gamma: Optional[torch.FloatTensor] = None
@dataclass
class P2LOutputs(ModelOutput):
coefs: torch.FloatTensor = None
eta: Optional[torch.FloatTensor] = None
gamma: Optional[torch.FloatTensor] = None
loss: Optional[torch.FloatTensor] = None
last_hidden_state: torch.FloatTensor = None
class BTHead(nn.Module):
def __init__(
self, input_dim, output_dim, linear_head_downsize_factor=None, **kwargs
) -> None:
super().__init__()
if linear_head_downsize_factor:
inner_dim = int(output_dim // linear_head_downsize_factor)
self.head = nn.Sequential(
nn.Linear(in_features=input_dim, out_features=inner_dim, bias=True),
nn.Linear(in_features=inner_dim, out_features=output_dim, bias=True),
)
else:
self.head = nn.Linear(
in_features=input_dim, out_features=output_dim, bias=True
)
def forward(self, last_hidden_dim: torch.Tensor):
coefs = self.head(last_hidden_dim)
return HeadOutputs(coefs=coefs)
class P2LModel(Qwen2PreTrainedModel):
def __init__(
self,
config,
CLS_id,
num_models,
head_kwargs={},
**kwargs,
):
super().__init__(config)
self.num_models = num_models
self.cls_token_id = CLS_id
self.model = Qwen2Model(config)
self.head = BTHead(
input_dim=config.hidden_size,
output_dim=self.num_models,
**head_kwargs,
)
self.post_init()
def freeze_transformer(self):
for param in self.model.parameters():
param.requires_grad = False
def get_input_embeddings(self):
return self.model.embed_tokens
def set_input_embeddings(self, value):
self.model.embed_tokens = value
def forward(self, input_ids, attention_mask, labels=None, weights=None):
batch_size = input_ids.shape[0]
hidden_outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
output_hidden_states=False,
).last_hidden_state # (bs, num_token, embed_dim)
cls_mask = input_ids == self.cls_token_id
# double check this is getting the current CLS token
cls_hidden_dim = hidden_outputs[cls_mask]
assert (
cls_hidden_dim.shape[0] == batch_size
), f"input ids {input_ids.shape}, cls_mask {cls_mask.shape}, cls_logit {cls_hidden_dim.shape}"
head_output = self.head(cls_hidden_dim)
outputs = P2LOutputs(
coefs=head_output.coefs,
last_hidden_state=cls_hidden_dim,
eta=head_output.eta,
gamma=head_output.gamma,
)
return outputs
fname = hf_hub_download(
repo_id="lmarena-ai/p2l-7b-rk-01132025", filename="model_list.json", repo_type="model"
)
with open(fname) as fin:
model_list = json.load(fin)
tokenizer = AutoTokenizer.from_pretrained("lmarena-ai/p2l-7b-rk-01132025")
model = P2LModel.from_pretrained(
"lmarena-ai/p2l-7b-rk-01132025",
CLS_id=tokenizer.cls_token_id,
num_models=len(model_list),
torch_dtype=torch.bfloat16,
)
```
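
A minimal inference sketch follows; the exact prompt formatting (in particular, appending the CLS token so the head can pool it) and treating `model_list` entries as plain model-name strings are assumptions here:

```python
prompt = "Write a haiku about the ocean."

# Append the CLS token so the head has a token to pool (formatting is an assumption).
inputs = tokenizer(prompt + tokenizer.cls_token, return_tensors="pt")

with torch.no_grad():
    out = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

coefs = out.coefs.squeeze(0)   # one coefficient per entry in model_list
best = int(coefs.argmax())
print("highest-rated model for this prompt:", model_list[best])
```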
## Citation
```
@misc{frick2025prompttoleaderboard,
title={Prompt-to-Leaderboard},
author={Evan Frick and Connor Chen and Joseph Tennyson and Tianle Li and Wei-Lin Chiang and Anastasios N. Angelopoulos and Ion Stoica},
year={2025},
eprint={2502.14855},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.14855},
}
``` |
Sophie-Rain-Virale-X-Video/OnlyFans.Sophie.Rain.Spiderman.Video.Tutorial.Link | Sophie-Rain-Virale-X-Video | "2025-02-16T17:26:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-16T17:26:22Z" | # Full Video ⤵️⤵️⤵️
|
ai-forever/KandinskyVideo_1_1 | ai-forever | "2024-05-27T18:50:32Z" | 0 | 9 | null | [
"arxiv:2304.08818",
"arxiv:2311.13073",
"license:apache-2.0",
"region:us"
] | null | "2024-05-27T18:27:01Z" | ---
license: apache-2.0
---
# Kandinsky Video 1.1 — a new text-to-video generation model
## SoTA quality among open-source solutions on <a href="https://evalcrafter.github.io/">EvalCrafter</a> benchmark
This repository is the official implementation of Kandinsky Video 1.1 model.
[Hugging Face](https://huggingface.co/ai-forever/KandinskyVideo) | [Telegram-bot](https://t.me/video_kandinsky_bot) | [Habr post](https://habr.com/ru/companies/sberbank/articles/775554/) | [Our text-to-image model](https://github.com/ai-forever/Kandinsky-3/tree/main)
<p>
<!-- <img src="_assets__/title.jpg" width="800px"/> -->
<!-- <br> -->
Our <B>previous</B> model <a href="https://ai-forever.github.io/Kandinsky-3/">Kandinsky Video 1.0</a> divides the video generation process into two stages: initially generating keyframes at a low FPS and then creating interpolated frames between these keyframes to increase the FPS. In <B>Kandinsky Video 1.1</B>, we further break down the keyframe generation into two extra steps: first, generating the initial frame of the video from the textual prompt using the text-to-image model <a href="https://github.com/ai-forever/Kandinsky-3">Kandinsky 3.0</a>, and then generating the subsequent keyframes based on the textual prompt and the previously generated first frame. This approach ensures more consistent content across the frames and significantly enhances the overall video quality. Furthermore, the approach allows animating any input image as an additional feature.
</p>
## Pipeline
<p align="center">
<img src="_assets__/pipeline.png" width="800px"/>
<br>
<em>In the <a href="https://ai-forever.github.io/Kandinsky-3/">Kandinsky Video 1.0</a>, the encoded text prompt enters the text-to-video U-Net3D keyframe generation model with temporal layers or blocks, and then the sampled latent keyframes are sent to the latent interpolation model to predict three interpolation frames between
two keyframes. An image MoVQ-GAN decoder is used to obtain the final video result. In <B>Kandinsky Video 1.1</B>, text-to-video U-Net3D is also conditioned on text-to-image U-Net2D, which helps to improve the content quality. A temporal MoVQ-GAN decoder is used to decode the final video.</em>
</p>
**Architecture details**
+ Text encoder (Flan-UL2) - 8.6B
+ Latent Diffusion U-Net3D - 4.15B
+ The interpolation model (Latent Diffusion U-Net3D) - 4.0B
+ Image MoVQ encoder/decoder - 256M
+ Video (temporal) MoVQ decoder - 556M
## How to use
<!--Check our jupyter notebooks with examples in `./examples` folder -->
### 1. text2video
```python
from kandinsky_video import get_T2V_pipeline
device_map = 'cuda:0'
t2v_pipe = get_T2V_pipeline(device_map)
prompt = "A cat wearing sunglasses and working as a lifeguard at a pool."
fps = 'medium' # ['low', 'medium', 'high']
motion = 'high' # ['low', 'medium', 'high']
video = t2v_pipe(
prompt,
width=512, height=512,
fps=fps,
motion=motion,
key_frame_guidance_scale=5.0,
guidance_weight_prompt=5.0,
guidance_weight_image=3.0,
)
path_to_save = f'./_assets__/video.gif'
video[0].save(
path_to_save,
save_all=True, append_images=video[1:], duration=int(5500/len(video)), loop=0
)
```
<p align="center">
<img src="_assets__/video.gif" raw=true>
<br><em>Generated video</em>
</p>
### 2. image2video
```python
from kandinsky_video import get_T2V_pipeline
device_map = 'cuda:0'
t2v_pipe = get_T2V_pipeline(device_map)
from PIL import Image
import requests
from io import BytesIO
url = 'https://media.cnn.com/api/v1/images/stellar/prod/gettyimages-1961294831.jpg'
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.show()
prompt = "A panda climbs up a tree."
fps = 'medium' # ['low', 'medium', 'high']
motion = 'medium' # ['low', 'medium', 'high']
video = t2v_pipe(
prompt,
image=img,
width=640, height=384,
fps=fps,
motion=motion,
key_frame_guidance_scale=5.0,
guidance_weight_prompt=5.0,
guidance_weight_image=3.0,
)
path_to_save = f'./_assets__/video2.gif'
video[0].save(
path_to_save,
save_all=True, append_images=video[1:], duration=int(5500/len(video)), loop=0
)
```
<p align="center">
<img src="https://media.cnn.com/api/v1/images/stellar/prod/gettyimages-1961294831.jpg" width="50%"><br>
<em>Input image.</em>
</p>
<p align="center">
<img src="_assets__/video2.gif"><br>
<em>Generated Video.</em>
</p>
## Results
<p align="center">
<img src="_assets__/eval crafter.png" align="center" width="50%">
<br>
<em> Kandinsky Video 1.1 achieves second place overall and is the best open-source model on the <a href="https://evalcrafter.github.io/">EvalCrafter</a> text-to-video benchmark. VQ: visual quality, TVA: text-video alignment, MQ: motion quality, TC: temporal consistency, and FAS: final average score.
</em>
</p>
<p align="center">
<img src="_assets__/polygon.png" raw=true align="center" width="50%">
<br>
<em> Polygon-radar chart representing the performance of Kandinsky Video 1.1 on <a href="https://evalcrafter.github.io/">EvalCrafter</a> benchmark.
</em>
</p>
<p align="center">
<img src="_assets__/human eval.png" raw=true align="center" width="50%">
<br>
<em> Human evaluation study results. The bars in the plot correspond to the percentage of “wins” in the side-by-side comparison of model generations. We compare our model with <a href="https://arxiv.org/abs/2304.08818">Video LDM</a>.
</em>
</p>
# Authors
+ Vladimir Arkhipkin: [Github](https://github.com/oriBetelgeuse), [Google Scholar](https://scholar.google.com/citations?user=D-Ko0oAAAAAJ&hl=ru)
+ Zein Shaheen: [Github](https://github.com/zeinsh), [Google Scholar](https://scholar.google.ru/citations?user=bxlgMxMAAAAJ&hl=en)
+ Viacheslav Vasilev: [Github](https://github.com/vivasilev), [Google Scholar](https://scholar.google.com/citations?user=redAz-kAAAAJ&hl=ru&oi=sra)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
+ Elizaveta Dakhova: [Github](https://github.com/LizaDakhova)
+ Anastasia Lysenko: [Github](https://github.com/LysenkoAnastasia)
+ Sergey Markov
+ Denis Dimitrov: [Github](https://github.com/denndimitrov), [Google Scholar](https://scholar.google.com/citations?user=3JSIJpYAAAAJ&hl=ru&oi=ao)
+ Andrey Kuznetsov: [Github](https://github.com/kuznetsoffandrey), [Google Scholar](https://scholar.google.com/citations?user=q0lIfCEAAAAJ&hl=ru)
## BibTeX
If you use our work in your research, please cite our publication:
```
@article{arkhipkin2023fusionframes,
title = {FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline},
author = {Arkhipkin, Vladimir and Shaheen, Zein and Vasilev, Viacheslav and Dakhova, Elizaveta and Kuznetsov, Andrey and Dimitrov, Denis},
journal = {arXiv preprint arXiv:2311.13073},
year = {2023},
}
``` |
amitysolution/amity-stt-th-v-0-1 | amitysolution | "2025-03-21T06:13:29Z" | 27 | 0 | null | [
"safetensors",
"whisper",
"license:apache-2.0",
"region:us"
] | null | "2025-03-17T08:17:28Z" | ---
license: apache-2.0
---
|
jw-hf-test/jw-14B-217 | jw-hf-test | "2024-11-09T03:42:11Z" | 193 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-09T03:31:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoboldAI/Llama-3.1-8B-BookAdventures-GGUF | KoboldAI | "2025-01-04T22:56:51Z" | 1,748 | 1 | null | [
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:KoboldAI/LLaMA-3.1-8B-Infinity3M-Kobo",
"base_model:quantized:KoboldAI/LLaMA-3.1-8B-Infinity3M-Kobo",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-04T22:34:42Z" | ---
license: cc-by-nc-sa-4.0
base_model:
- KoboldAI/LLaMA-3.1-8B-Infinity3M-Kobo
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: KoboldAI/Llama-3.1-8B-BookAdventures
results: []
---
# About this model
This is the GGUF release of BookAdventures generated with KoboldCpp 1.81, a model optimized for use in KoboldCpp and its bundled UI.
BookAdventures is a research model intended as a potential all-round model with better long-form writing. Because we have lacked suitable Chat RP data over the past months, we have chosen to release this intermediate model as is, so that other communities with access to superior private chat data can expand upon it.
This model was tuned on top of KoboldAI/LLaMA-3.1-8B-Infinity3M-Kobo restoring its writing capability but replacing this writing with a much longer form.
In our testing 8B is not ideal for this, but it is the largest we could tune at 32K context.
This model intentionally writes like a book, expect entirely random openings where what you asked for is weaved in to the story. Its designed for guided co-writing with an instruct prompt describing the entire plot summary.
We also added our usual Adventures dataset, making this double as an adventure-mode model; but due to the lack of a suitable chat dataset, this model is incapable of engaging in Chat RP, leaving it one step short of our original goal for an all-round model.
For the best results use this model in KoboldAI Lite.
## I want to use this model as an instruct model!
This model was trained on the Alpaca format with a large subset of the Infinity3M dataset, so it should respond well to Alpaca prompts.
## I want to use this model as a writing assistant!
Format your prompt as follows:
```
### Instruction:
Write a GENRE novel about SUMMARY OF THE STORY
### Response:
```
Including the genre in this position is very important; this is what all our long-form example prompts used. You want this instruction in the context menu so it remains visible to the AI.
You should now be able to co-write your story with the summary guiding it.
Note: the data expects longer summaries of about a paragraph in size; giving it only a topic will work less well.
## I want to play text adventures!
Text adventures can be done in two ways: this model supports our traditional adventure mode, in which it behaves like the classic versions of AI Dungeon but with many stories to draw inspiration from, similar to our old Skein and Nerys models.
You can also use instruct mode, instructing it to "Start a text adventure about" something, in which case it will produce longer-form writing for your adventure.
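For instance, an instruct-mode adventure opening might look like this (the summary wording is illustrative):

```
### Instruction:
Start a text adventure about a smuggler stranded on a derelict space station.
### Response:
```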
## I want to chat RP with it by making it act like a chatbot!
You will be disappointed, since this model has no such data; check whether anyone has fine-tuned such a model on top of this one or successfully merged this model.
# About the data
To our knowledge, this model used a unique approach to give it a long-form writing bias; if you used the same method before us, please let us know so we can give you credit.
We first stripped Infinity3M of all its short form writing data to prevent the model from ending stories early and to reduce the "slop" that writers often complain about.
Then we used our own [PromptGen](https://github.com/henk717/promptgen) tool to generate instruct prompts for the Pike dataset (Thanks Mr.Seeker for letting me use it for this experiment, it saved a lot of time cleaning book data - Henk)
The generated prompts were checked and cleaned; prompts that accidentally featured references to the original works or artists were rewritten or removed to ensure the model could not learn to copy anyone's work or style.
In generating the data we had roughly a 10% failure rate where Llama-3.1-8B-Instruct would not follow the tasks correctly. Many of these could be saved, but we also had to remove a fair number of stories due to the prompt not generating correctly. Specialized models would help here.
Lastly we added the Floyd adventure data from the Skein model with a light enough bias not to infect the books.
# Limitations
This experiment was only partially successful; there is a chance the model loses track before anchoring itself to your prompt by introducing the story elements in time.
To test the model correctly it must generate longer stories, since short stories are not its intended purpose; within the usual 512 tokens other models generate, it will almost certainly not yet have included your story element.
Short stories were omitted but could likely be introduced successfully had they been distinct enough in the data / prompt language.
Model has no knowledge of Chat RP.
The model will hallucinate incorrect story authors from the base model; in our testing we could trace these back to the Gutenberg data present in Llama-3.1. If your name is mentioned, this does not mean your work is in our data.
# License
This model follows the Llama-3.1 license / CC-BY-NC-SA-4.0 and is intended as a research only model. We don't mind private use by AI hobbyists, but do not use this model for commercial purposes.
### Special thanks to our community member Garg for the compute, without you this would not be possible. |
zixianma/mantis-cota-200k-seq_len_8192-lr_1e-5-ep_1 | zixianma | "2025-03-05T04:20:09Z" | 0 | 0 | null | [
"safetensors",
"llava",
"generated_from_trainer",
"base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"license:llama3",
"region:us"
] | null | "2025-03-04T22:44:13Z" | ---
license: llama3
base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind
tags:
- generated_from_trainer
model-index:
- name: mantis-cota-200k-seq_len_8192-lr_1e-5-ep_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zixianma/Mantis/runs/f3vtalmq)
# mantis-cota-200k-seq_len_8192-lr_1e-5-ep_1
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.43.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
LeroyDyer/SpyazWeb_AI_DeepMind_Project | LeroyDyer | "2024-06-22T07:23:54Z" | 177 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"leaderboard",
"trl",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"dataset:abacusai/ARC_DPO_FewShot",
"dataset:abacusai/MetaMath_DPO_FewShot",
"dataset:abacusai/HellaSwag_DPO_FewShot",
"dataset:HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset",
"doi:10.57967/hf/2837",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-07T08:03:49Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- leaderboard
- mistral
- trl
base_model: LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
metrics:
- accuracy
- bertscore
- bleu
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
y-Gene:
- LeroyDyer/Mixtral_AI_DeepMind
- LeroyDyer/Mixtral_AI_CyberUltron_DPO
- LeroyDyer/Mixtral_AI_Chat_2.0
- LeroyDyer/Mixtral_AI_DeepMedicalMind
- LeroyDyer/Mixtral_AI_Samantha
x-Gene:
- LeroyDyer/Mixtral_AI_Chat_2.0
- LeroyDyer/Mixtral_BioMedical
- LeroyDyer/Mixtral_AI_Medic
- LeroyDyer/Mixtral_Cyber_BioMedic
- LeroyDyer/Mixtral_AI_DeepMedicalMind
Variant:
- LeroyDyer/MetaMath_LLM
- LeroyDyer/TruthfulQA_LLM
- LeroyDyer/HellaSwag_LLM
- LeroyDyer/Mixtral_AI_DeepMedicalMind
model-index:
- name: Mixtral_AI_CyberTron_DeepMind_III_UFT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.41
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III_UFT
name: Open LLM Leaderboard
---
[<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="200"/>](https://github.com/spydaz)
# ::: DEEP MIND PROJECT :::
OH MY GOSH, GOOD WOW!
ARE WE MAKING BRAINS NOW!!!!! (Contact me to sponsor me PLEASE)
---- I NEED A CLOUD TO DESIGN THIS MIND! --(free Colab takes years! - I need the large datasets in...
which need a few days of fine-tuning on a server until fully complete! I NEED A COLLABORATOR!!)
- Mistral models are GREAT!!!!!!! - we have surpassed ChatGPT: (- without langchain!!!!)
- I now have a methodology to add any functionality to the model!
- we are in the future now:
- we do not want to code or buy software!
Lovely model!!! Very knowledgeable :: (sometimes requires coaxing!! but it has options to choose from, so for a single thing there may be multiple responses, and you can ask in another way!
good for one-shot prompts, and it actually uses the history in the chat!!!)
but we have TASKS!
we can now ask the model to perform these tasks and get the right output without special programming!
Take a model!!! This model CONVERGES on ANYTHING! (I also previously trained it with the CLIP training for captioning but never used it! but I plugged it in and it was spot on! (so if you choose to incorporate the model into a decoder/encoder (vision) model, it's ready!))
VERY HAPPY! (need more good data (my problem actually is not data (it's converting it to JSON from CSV and other forms! (pre-structured))))
Here we begin the models for Deep Mind: Whoop! As we move forwards, we have begun to let the model teach itself like a child and optimize!
This model was created from the first trained models: DeepMind!
These models contain:
## Thoughts and processes:
## SelfRAG:
## Agent generation:
## Chain of thoughts:
## Deep thinking and memory recall:
## Training Prompt version - Working GREAT! - (can't blow my own horn enough!!!!)
It checks itself when discussing complex questions (questions it does not know the answer to... it tries to discuss with itself to find a result (sometimes unsuccessfully)).
It generates mini agents to perform small tasks such as entity recognition, step-by-step definitions, writing pseudo codebases, generating use cases... performing calculations, analyzing content.
It thinks.... sometimes sarcasm, sometimes reflection... sometimes random thoughts...
It has personalities: by installing various long discussions with ChatGPT in persona, it was able to generate role conversation data, which was added to its conversation chat Q/A, as well as a dataset from the Samantha TV show... and HER!.... so it is a personal assistant and very friendly;
It has mainly been trained on coding datasets and medical information: from experiments to research to patient/doctor.. to diagnosis... to problem solving:
It has been trained to be a counsellor and assist with psychological problems :: empathetic discussion:
This one has its own thoughts despite the prompt given: (if you allow the thought prompt it will display the thoughts)
This is a highly focused model:
### Methodology:
Many functions, such as defining words and NLP tasks, we also added via datasets and very complex data structures and prompts:
These prompts are removed after training and standard Alpaca training is given on top (this enables the previous, highly overfit task to become embedded underneath the previous layer):
It's important to change the LoRA configuration for embedding layers within the model, as well as fine-tune on top of previous training:
Usually I deploy a factor-of-8 calculation for my LoRAs, but for this one I chose a factor of 9 (9-18/18/36).... which actually trained so smoothly that I was able to train many different datasets in a single sitting; all variations of the Alpaca prompt to below 0.9!
After testing there was absolutely 0 loss of previous knowledge, as well as enhancing some responses and providing comparative responses for others;
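As an illustration only (the exact rank/alpha and target modules here are assumptions read off the "factor of 9 (9-18/18/36)" note above, not the author's recorded settings), a PEFT LoRA config that also adapts the embedding layers might look like:

```python
from peft import LoraConfig

# Hypothetical sketch: a LoRA that also trains the embedding layers, per the
# methodology above; r/alpha follow the 18/36 hint.
lora_config = LoraConfig(
    r=18,
    lora_alpha=36,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # keep embeddings trainable
    task_type="CAUSAL_LM",
)
```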
I personally use a topK of 1000....
this allows the model to have many choices (this is the context window of results);
I put my topP to 0.68 (68%)....
hence it will select from that percentage of probabilities...
enabling my temp to be 1..
therefore it will normalize the selected quartile of the next-probability selection, enabling the lower probabilities to have a scaled chance of being selected:
It is important to have a degree of randomness in the response or you will ask the same question and get the same answer!.... we need varied answers to some queries and focused ones for others? how do we do this?..... Duplicates!!!!! raising the probability of some information by repetition: as this is how the human learns truth! truth is that which has been repeated so many times it cannot be disputed!
hence some information is absolute and other information is transient and constantly updating:
hence when utilizing a rag : the conversation history is the dats to be fine tuned into the model as frequent data!
as well as producing multiple simular querys to query the rag system for Q/A pairs : also to be updted onto the model :
as we are in this development period we are focused on BRAIN cureently .......
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberTron_DeepMind_III
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LeroyDyer__Mixtral_AI_CyberTron_DeepMind_III_UFT)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.37|
|AI2 Reasoning Challenge (25-Shot)|61.86|
|HellaSwag (10-Shot) |83.15|
|MMLU (5-Shot) |61.95|
|TruthfulQA (0-shot) |49.41|
|Winogrande (5-shot) |77.98|
|GSM8k (5-shot) |51.86|
|
rajkumaralma/Retro_anime | rajkumaralma | "2024-11-14T10:10:14Z" | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2024-11-14T10:09:58Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Retro anime art
output:
url: images/retro anime.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: retro animestyle
license: mit
---
# Retro_anime
<Gallery />
## Trigger words
You should use `retro animestyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/rajkumaralma/Retro_anime/tree/main) them in the Files & versions tab.
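A minimal loading sketch with `diffusers` (the generation prompt is illustrative; check the Files tab for the exact weight filename):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model and attach the Retro_anime LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rajkumaralma/Retro_anime")

image = pipe("retro animestyle portrait of a samurai at dusk").images[0]
image.save("retro_anime.png")
```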
|
vladislavbro/dfine_s_obj2coco | vladislavbro | "2025-03-31T11:00:01Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"d_fine",
"object-detection",
"vision",
"en",
"dataset:coco",
"arxiv:2410.13842",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2025-03-28T11:40:58Z" | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
---
## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf)
This is the HF transformers implementation for D-FINE
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("vladislavbro/dfine_s_obj2coco")
model = DFineForObjectDetection.from_pretrained("vladislavbro/dfine_s_obj2coco")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), and APval5000, which is commonly used in real scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments. |
withpi/sft-llama-3p2-3b-5d3f23-chkpt560 | withpi | "2025-02-17T03:32:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | "2025-02-17T03:32:51Z" | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w2g128-GPTQ | ChenMnZ | "2024-07-22T07:12:32Z" | 28 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2407.11062",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-22T07:05:53Z" | # EfficientQAT
[EfficientQAT](https://arxiv.org/abs/2407.11062) is a novel quantization technique which pushes the limits of uniform (INT) quantization in an efficient manner. Because it leverages standard INT quantization, models quantized with EfficientQAT can also be transferred into other formats, such as GPTQ, BitBLAS, etc.
In this repo, we provide three types of checkpoints: EQAT, which denotes the original EfficientQAT checkpoints, plus GPTQ and BitBLAS conversions.
## Model Zoo
We provide a number of prequantized EfficientQAT models as follows:
- WikiText2 PPL is measured in 2048 context length.
- Avg. Accuracy indicates the average accuracy on 5 zero-shot reasoning tasks (WinoGrande, PIQA, HellaSwag, ARC-Easy, ARC-Challenge) with [lm-eval v0.4.2](https://github.com/EleutherAI/lm-evaluation-harness).
- 1 GB = $10^9$ bits
- Hub Link: EQAT indicates the original checkpoints. We also transfer the checkpoints into GPTQ and BitBLAS formats, which can be loaded directly through [GPTQModel](https://github.com/ModelCloud/GPTQModel). (PS: [GPTQModel](https://github.com/ModelCloud/GPTQModel) is an official bug-fixed fork of AutoGPTQ, which will be merged into [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) in the future.)
| Model | Quantization | WikiText2 PPL | Avg. Accuracy | Model Size (GB) | Hub link|
|-------|--------------|---------------|---------------|-----------------|----------|
Llama-2-7B|fp16|5.47|64.86|13.2|-|
Llama-2-7B|w4g128|5.53|64.27|3.7|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-2-7b-EfficientQAT-w4g128-BitBLAS)|
Llama-2-7B|w3g128|5.81|64.02|3.1|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w3g128)|
Llama-2-7B|w2g64|6.86|60.14|2.3|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w2g64-GPTQ)\|[BitBLAS](Llama-2-7b-EfficientQAT-w2g64-BitBLAS)|
Llama-2-7B|w2g128|7.17|59.50|2.2|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-2-7b-EfficientQAT-w2g128-BitBLAS)|
Llama-2-13B|fp16|4.88|67.81|25.4|-|
Llama-2-13B|w4g128|4.93|67.52|6.8|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-7b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-2-7b-EfficientQAT-w4g128-BitBLAS)|
Llama-2-13B|w3g128|5.12|67.28|5.6|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w3g128)|
Llama-2-13B|w2g64|5.96|64.88|4.0|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w2g64-GPTQ)\|[BitBLAS](Llama-2-13b-EfficientQAT-w2g64-BitBLAS)|
Llama-2-13B|w2g128|6.08|63.88|3.8|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-13b-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-2-13b-EfficientQAT-w2g128-BitBLAS)|
Llama-2-70B|fp16|3.32|72.41|131.6|-|
Llama-2-70B|w4g128|3.39|72.62|35.8|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-2-70b-EfficientQAT-w4g128-BitBLAS)|
Llama-2-70B|w3g128|3.61|71.76|29.1|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w3g128)|
Llama-2-70B|w2g64|4.52|69.48|20.1|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w2g64-GPTQ)\|[BitBLAS](Llama-2-70b-EfficientQAT-w2g64-BitBLAS)|
Llama-2-70B|w2g128|4.61|68.93|18.9|[EQAT](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-2-70b-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-2-70b-EfficientQAT-w2g128-BitBLAS)|
Llama-3-8B|fp16|6.14|68.58|13.0|-|
Llama-3-8B|w4g128|6.47|68.43|5.4|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-3-8b-EfficientQAT-w4g128-BitBLAS)|
Llama-3-8B|w3g128|7.09|67.35|4.7|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w3g128)|
Llama-3-8B|w2g64|9.41|60.76|3.9|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-3-8b-EfficientQAT-w2g64-BitBLAS)|
Llama-3-8B|w2g128|9.80|59.36|3.8|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-3-8b-EfficientQAT-w2g128-BitBLAS)|
Llama-3-70B|fp16|2.85|75.33|137.8|-|
Llama-3-70B|w4g128|3.17|74.57|38.9|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-3-70b-EfficientQAT-w4g128-BitBLAS)|
Llama-3-70B|w3g128|4.19|72.42|32.2|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w3g128)|
Llama-3-70B|w2g64|6.08|67.89|23.2|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w2g64-GPTQ)|
Llama-3-70B|w2g128|6.38|67.57|22.0|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-3-70b-EfficientQAT-w2g128-BitBLAS)|
Llama-3-8B-Instruct|fp16|8.29|68.43|13.0|-|
Llama-3-8B-Instruct|w4g128|7.93|68.39|5.4|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-3-8b-instruct-EfficientQAT-w4g128-BitBLAS)|
Llama-3-8B-Instruct|w3g128|8.55|67.24|4.7|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w3g128)|
Llama-3-8B-Instruct|w2g64|11.19|60.66|3.9|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w2g64-GPTQ)\|[BitBLAS](Llama-3-8b-instruct-EfficientQAT-w2g64-BitBLAS)|
Llama-3-8B-Instruct|w2g128|11.73|60.16|3.8|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-8b-instruct-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-3-8b-instruct-EfficientQAT-w2g128-BitBLAS)|
Llama-3-70B-Instruct|fp16|5.33|73.78|137.8|-|
Llama-3-70B-Instruct|w4g128|5.35|73.47|38.9|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w4g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w4g128-GPTQ)\|[BitBLAS](Llama-3-70b-instruct-EfficientQAT-w4g128-BitBLAS)|
Llama-3-70B-Instruct|w3g128|5.65|72.87|32.2|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w3g128)|
Llama-3-70B-Instruct|w2g64|7.86|67.64|23.2|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w2g64)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w2g64-GPTQ)\|[BitBLAS](Llama-3-70b-instruct-EfficientQAT-w2g64-BitBLAS)|
Llama-3-70B-Instruct|w2g128|8.14|67.54|22.0|[EQAT](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w2g128)\|[GPTQ](https://huggingface.co/ChenMnZ/Llama-3-70b-instruct-EfficientQAT-w2g128-GPTQ)\|[BitBLAS](Llama-3-70b-instruct-EfficientQAT-w2g128-BitBLAS)|
## Usage of EQAT models
Please refer [https://github.com/OpenGVLab/EfficientQAT](https://github.com/OpenGVLab/EfficientQAT?tab=readme-ov-file#inference) for details.
## Usage of GPTQ and BitBLAS models
Below is an example of inference with the GPTQ or BitBLAS quantized formats.
```Python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel
quant_dir = "ChenMnZ/Llama-2-7b-EfficientQAT-w2g128-GPTQ"
# quant_dir = "ChenMnZ/Llama-2-7b-EfficientQAT-w2g128-BitBLAS"
# or local path
tokenizer = AutoTokenizer.from_pretrained(quant_dir, use_fast=True)
# load quantized model to the first GPU
model = GPTQModel.from_quantized(quant_dir)
# inference with model.generate
print(tokenizer.decode(model.generate(**tokenizer("Model quantization is", return_tensors="pt").to(model.device))[0]))
```
## Citation
If you found this work useful, please consider citing:
```
@article{efficientqat,
title={EfficientQAT: Efficient Quantization-Aware Training for Large Language Models},
author={Chen, Mengzhao and Shao, Wenqi and Xu, Peng and Wang, Jiahao and Gao, Peng and Zhang, Kaipeng and Qiao, Yu and Luo, Ping},
journal={arXiv preprint arXiv:2407.11062},
year={2024}
}
``` |
Pot-l/bert-ner-skills | Pot-l | "2024-07-12T06:33:15Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-13T00:27:46Z" | This is a finetuned BERT model used for resume skill detection.
For usage, please refer to https://github.com/ljw-612/AI-career-consultant |
bjing/distilbert-base-uncased-finetuned-experiments | bjing | "2024-01-19T18:53:06Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-01-19T17:57:34Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-experiments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-experiments
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4352 | 1.0 | 10 | 3.2370 |
| 2.997 | 2.0 | 20 | 3.0049 |
| 2.9258 | 3.0 | 30 | 2.8551 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alexandremarie/bloom-7b1-lora-tagger | alexandremarie | "2023-08-02T12:58:20Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-02T12:58:13Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
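A loading sketch matching the 8-bit config above (the base model name is an assumption inferred from the repo name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the BLOOM base model in 8-bit, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "alexandremarie/bloom-7b1-lora-tagger")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
```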
### Framework versions
- PEFT 0.5.0.dev0
|
lllyasviel/control_v11f1p_sd15_depth | lllyasviel | "2023-05-04T18:49:15Z" | 13,860 | 49 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | "2023-04-16T14:13:02Z" | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_depth
---
# Controlnet - v1.1 - *depth Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1p_sd15_depth.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **depth images**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
2. Run code:
```python
import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from transformers import pipeline
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15_depth"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15_depth/resolve/main/images/input.png"
)
prompt = "Stormtrooper's lecture in beautiful lecture hall"
depth_estimator = pipeline('depth-estimation')
image = depth_estimator(image)['depth']
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Depth 1.1:
- The training dataset of previous cnet 1.0 has several problems including (1) a small group of greyscale human images are duplicated thousands of times (!!), causing the previous model somewhat likely to generate grayscale human images; (2) some images has low quality, very blurry, or significant JPEG artifacts; (3) a small group of images has wrong paired prompts caused by a mistake in our data processing scripts. The new model fixed all problems of the training dataset and should be more reasonable in many cases.
- The new depth model is a relatively unbiased model. It is not trained with some specific type of depth by some specific depth estimation method. It is not over-fitted to one preprocessor. This means this model will work better with different depth estimation, different preprocessor resolutions, or even with real depth created by 3D engines.
- Some reasonable data augmentations are applied to training, like random left-right flipping.
- The model is resumed from depth 1.0, and it should work well in all cases where depth 1.0 works well. If not, please open an issue with image, and we will take a look at your case. Depth 1.1 works well in many failure cases of depth 1.0.
- If you use Midas depth (the "depth" in webui plugin) with 384 preprocessor resolution, the difference between depth 1.0 and 1.1 should be minimal. However, if you try other preprocessor resolutions or other preprocessors (like leres and zoe), the depth 1.1 is expected to be a bit better than 1.0.
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
mlx-community/Meta-Llama-3.1-8B-Instruct-8bit | mlx-community | "2024-11-26T19:46:03Z" | 926 | 9 | mlx | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:llama3.1",
"region:us"
] | text-generation | "2024-07-23T14:39:39Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
pipeline_tag: text-generation
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mlx-community/Meta-Llama-3.1-8B-Instruct-8bit
The Model [mlx-community/Meta-Llama-3.1-8B-Instruct-8bit](https://huggingface.co/mlx-community/Meta-Llama-3.1-8B-Instruct-8bit) was converted to MLX format from [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) using mlx-lm version **0.16.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
ardaspear/3ae08dce-c20e-43d3-a2ba-7a1e006043f9 | ardaspear | "2025-01-25T05:40:26Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | "2025-01-25T03:59:30Z" | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ae08dce-c20e-43d3-a2ba-7a1e006043f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3cadd1c20d5a5f59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3cadd1c20d5a5f59_train_data.json
type:
field_input: template
field_instruction: nl
field_output: code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/3ae08dce-c20e-43d3-a2ba-7a1e006043f9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/3cadd1c20d5a5f59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 0a1aa1ff-84f0-4c76-822d-b203592243d7
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 0a1aa1ff-84f0-4c76-822d-b203592243d7
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 3ae08dce-c20e-43d3-a2ba-7a1e006043f9
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.5948 |
| 6.209 | 0.0030 | 9 | 1.2054 |
| 2.3057 | 0.0060 | 18 | 0.5861 |
| 1.9585 | 0.0089 | 27 | 0.4869 |
| 1.9528 | 0.0119 | 36 | 0.4483 |
| 1.7192 | 0.0149 | 45 | 0.4325 |
| 1.8319 | 0.0179 | 54 | 0.4208 |
| 1.8481 | 0.0208 | 63 | 0.4150 |
| 1.5356 | 0.0238 | 72 | 0.4106 |
| 1.3276 | 0.0268 | 81 | 0.4073 |
| 1.24 | 0.0298 | 90 | 0.4061 |
| 1.5642 | 0.0328 | 99 | 0.4058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SidXXD/Art_Nouveau_modern-jacek-1 | SidXXD | "2025-01-12T08:56:14Z" | 148 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-01-08T01:57:31Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> painting
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/Art_Nouveau_modern-jacek-1
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a <v1*> painting using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
0xid/ppo-PyramidsRND | 0xid | "2023-01-11T08:00:05Z" | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-01-11T07:59:58Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: 0xid/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
goldfish-models/ind_latn_1000mb | goldfish-models | "2024-08-26T16:55:49Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"goldfish",
"arxiv:2408.10441",
"msa",
"ind",
"may",
"dataset:oscar-corpus/OSCAR-2109",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-10T07:17:32Z" |
---
license: apache-2.0
language:
- msa
- ind
- may
datasets:
- oscar-corpus/OSCAR-2109
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---
# ind_latn_1000mb
Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Indonesian</b> (Latin script) model trained on 1000MB of data, after accounting for an estimated byte premium of 1.18; content-matched text in Indonesian takes on average 1.18x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
Note: ind_latn is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. Macrolanguage code msa_latn (Malay) is included in Goldfish. Consider using that model depending on your use case.
All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage also in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
## Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
Details for this model specifically:
* Architecture: gpt2
* Parameters: 124770816
* Maximum sequence length: 512 tokens
* Training text data (raw): 1178.75MB
* Training text data (byte premium scaled): 1000.005MB
* Training tokens: 210432000 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 1.074044033040384e+18 FLOPs or ~101.5 NVIDIA A6000 GPU hours
Training datasets (percentages prior to deduplication):
* 100.00000%: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
## Citation
If you use this model, please cite:
```
@article{chang-etal-2024-goldfish,
title={Goldfish: Monolingual Language Models for 350 Languages},
author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
journal={Preprint},
year={2024},
url={https://www.arxiv.org/abs/2408.10441},
}
```
|
EarthnDusk/April2024 | EarthnDusk | "2024-07-06T22:16:22Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-04-02T00:21:12Z" | ---
license: creativeml-openrail-m
---
## About & Links
### About Us
We are the Duskfall Portal Crew, a DID system with over 300 alters, navigating life with DID, ADHD, Autism, and CPTSD. We believe in AI’s potential to break down barriers and enhance mental health, despite its challenges. Join us on our creative journey exploring identity and expression.
Join Our Community
Website: https://end-media.org/
Discord: https://discord.gg/5t2kYxt7An
Backups: https://huggingface.co/EarthnDusk/
Support Us: https://ko-fi.com/duskfallcrew/
Coffee: https://www.buymeacoffee.com/duskfallxcrew
Patreon: https://www.patreon.com/earthndusk
Community Groups:
Subreddit: https://www.reddit.com/r/earthndusk/
### Embeddings to Improve Quality
Negative Embeddings: Use scenario-specific embeddings to refine outputs.
https://civitai.com/models/389486/negative-embeds-for-pony-xl?modelVersionId=564545
Positive Embeddings: Enhance image quality with these embeddings.
https://civitai.com/models/384756/pdxl-score-embed?modelVersionId=563234
### Extensions
ADetailer: https://github.com/Bing-su/adetailer.git
Usage: Use this extension to enhance and refine images, but use sparingly to avoid over-processing with SDXL.
Batchlinks: https://github.com/etherealxx/batchlinks-webui
Description: Manage multiple links for downloading models when running A1111 locally or on a server.
Addon: @nocrypt Addon (The link is broken for now i'll find it later OOPS)
### Additional Extensions:
https://github.com/EnsignMK/danbooru-prompt.git
https://github.com/BlafKing/sd-civitai-browser-plus
https://github.com/klimaleksus/stable-diffusion-webui-embedding-merge
https://github.com/alemelis/sd-webui-ar.git
https://github.com/hako-mikan/sd-webui-supermerger.git
https://github.com/canisminor1990/sd-webui-lobe-theme
https://github.com/arenasys/stable-diffusion-webui-model-toolkit.git
https://github.com/Akegarasu/sd-webui-model-converter.git
https://xypher7.github.io/lora-metadata-viewer/
### Backups for Loras on SDXL & Pony XL:
2024: https://huggingface.co/EarthnDusk/SDXL_Lora_Dump_2024/tree/main
2023: https://huggingface.co/EarthnDusk/Loras-SDXL/tree/main
|
Pech82/ppo-LunarLander-v2 | Pech82 | "2022-12-11T23:00:48Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-11T23:00:22Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.41 +/- 26.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
yvetteyaoliu/yvette-llama-3.2.Instruct-finetuned | yvetteyaoliu | "2024-10-27T21:01:12Z" | 175 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-27T20:41:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
A finetuned llama-3.2.Instruct using 'mlabonne/orpo-dpo-mix-40k' dataset
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Yvette
- **Finetuned from model [optional]:** Llama-3.2-1B-Instruct
## Training Details
### Training Data
https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k
[More Information Needed]
## Evaluation
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.4503|± |0.0050|
| | |none | 0|acc_norm|↑ |0.6073|± |0.0049|
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
21nao3/swin-tiny-patch4-window7-224-finetuned-eurosat | 21nao3 | "2024-01-01T01:35:18Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-01T01:01:21Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.38961038961038963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.3896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.91 | 5 | nan | 0.3896 |
| 0.0 | 2.0 | 11 | nan | 0.3896 |
| 0.0 | 2.73 | 15 | nan | 0.3896 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
kiranpantha/whisper-large-v3-nepali-fm-1-2-23Mar-peft-dora-speakerSpeakerCV2-rank8-targetxqv-epochs3 | kiranpantha | "2025-03-23T02:33:25Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/dataset-for-peft-cv-nepds",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | "2025-03-22T19:37:39Z" | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/dataset-for-peft-cv-nepds
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Kingpeach/Reinforce-CartPole-v1 | Kingpeach | "2024-06-07T14:54:22Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-07T14:54:12Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l6_v50 | KingKazma | "2023-07-30T18:54:47Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-30T18:07:22Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
MelisaO/modelo_clasificacion_violencia6 | MelisaO | "2025-03-27T18:43:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:MelisaO/modelo_clasificacion_violencia5",
"base_model:finetune:MelisaO/modelo_clasificacion_violencia5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-27T18:42:47Z" | ---
library_name: transformers
license: apache-2.0
base_model: MelisaO/modelo_clasificacion_violencia5
tags:
- generated_from_trainer
model-index:
- name: modelo_clasificacion_violencia6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo_clasificacion_violencia6
This model is a fine-tuned version of [MelisaO/modelo_clasificacion_violencia5](https://huggingface.co/MelisaO/modelo_clasificacion_violencia5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 90 | 0.0447 |
| No log | 2.0 | 180 | 0.0499 |
| No log | 3.0 | 270 | 0.0546 |
| No log | 4.0 | 360 | 0.0490 |
| No log | 5.0 | 450 | 0.0535 |
| 0.0472 | 6.0 | 540 | 0.0557 |
| 0.0472 | 7.0 | 630 | 0.0562 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Simple-Chop/q-FrozenLake-v1-4x4-noSlippery | Simple-Chop | "2025-03-08T00:46:57Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-08T00:46:54Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Simple-Chop/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mradermacher/Excalibur-7b-GGUF | mradermacher | "2024-11-16T00:08:18Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:InferenceIllusionist/Excalibur-7b",
"base_model:quantized:InferenceIllusionist/Excalibur-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-15T22:25:46Z" | ---
base_model: InferenceIllusionist/Excalibur-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/InferenceIllusionist/Excalibur-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Excalibur-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Rolo/ppo-LunarLander-v2 | Rolo | "2023-01-22T17:08:27Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-22T07:25:41Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.70 +/- 18.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
yeaool/cppe5_use_data_finetuning | yeaool | "2023-10-26T08:14:00Z" | 33 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-10-26T03:35:23Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: cppe5_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cppe5_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dfm794/Reinforce-pixelcopter-ple-v0-4 | dfm794 | "2023-01-13T06:09:11Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-13T06:09:03Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-ple-v0-4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 101.90 +/- 84.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sakshamhooda/unsloth_model_wfm1_gguf_16bit | sakshamhooda | "2025-01-22T01:42:31Z" | 41 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-22T01:35:59Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sakshamhooda
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quirky-lats-at-mats/bio_ga_old_3 | quirky-lats-at-mats | "2024-05-19T20:25:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-19T20:24:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## WandB links
Training: https://wandb.ai/quirky_lats_at_mats/wmdp_lat/runs/zg0izjmo
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
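Until this section is filled in, a generic loading sketch; the architecture is not tagged on this repo, so the `Auto*` classes (and the presence of a tokenizer) are assumptions:
```python
from transformers import AutoModel, AutoTokenizer

# generic load of an untagged transformers checkpoint
tok = AutoTokenizer.from_pretrained("quirky-lats-at-mats/bio_ga_old_3")
model = AutoModel.from_pretrained("quirky-lats-at-mats/bio_ga_old_3")
```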
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergioalves/6c12b0cd-8b9c-43f7-86ac-79fa7c595e7f | sergioalves | "2025-01-14T10:41:02Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T10:31:56Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6c12b0cd-8b9c-43f7-86ac-79fa7c595e7f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e010aadb2a8a5534_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e010aadb2a8a5534_train_data.json
type:
field_input: hints_text
field_instruction: problem_statement
field_output: test_patch
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: sergioalves/6c12b0cd-8b9c-43f7-86ac-79fa7c595e7f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e010aadb2a8a5534_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9156bf63-9a27-43cf-ab17-36f0c52d5f41
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9156bf63-9a27-43cf-ab17-36f0c52d5f41
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6c12b0cd-8b9c-43f7-86ac-79fa7c595e7f
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9982
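The repo ships only the LoRA adapter; a minimal sketch for loading it on the stated base model (untested):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# load the base model, then apply this adapter on top
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it")
model = PeftModel.from_pretrained(base, "sergioalves/6c12b0cd-8b9c-43f7-86ac-79fa7c595e7f")
tok = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")
```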
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (OptimizerNames.ADAMW_HF) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 3.2217 |
| 3.336 | 0.0020 | 5 | 2.9834 |
| 2.5453 | 0.0039 | 10 | 2.3862 |
| 1.9103 | 0.0059 | 15 | 2.1804 |
| 2.04 | 0.0079 | 20 | 2.0562 |
| 1.9907 | 0.0099 | 25 | 2.0043 |
| 1.6929 | 0.0118 | 30 | 1.9982 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF | mradermacher | "2024-12-13T08:25:20Z" | 40 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:athirdpath/Llama-3-15b-Instruct-GLUED",
"base_model:quantized:athirdpath/Llama-3-15b-Instruct-GLUED",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-13T03:40:05Z" | ---
base_model: athirdpath/Llama-3-15b-Instruct-GLUED
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/athirdpath/Llama-3-15b-Instruct-GLUED
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
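As a concrete starting point, one option (an assumption; any GGUF-capable runtime works) is `llama-cpp-python` together with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch the recommended Q4_K_M quant from this repo and run a short completion
path = hf_hub_download(
    repo_id="mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF",
    filename="Llama-3-15b-Instruct-GLUED.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```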
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ2_S.gguf) | i1-IQ2_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q2_K.gguf) | i1-Q2_K | 6.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q3_K_S.gguf) | i1-Q3_K_S | 7.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ3_S.gguf) | i1-IQ3_S | 7.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ3_M.gguf) | i1-IQ3_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q4_0.gguf) | i1-Q4_0 | 9.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q4_K_S.gguf) | i1-Q4_K_S | 9.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q5_K_M.gguf) | i1-Q5_K_M | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-15b-Instruct-GLUED-i1-GGUF/resolve/main/Llama-3-15b-Instruct-GLUED.i1-Q6_K.gguf) | i1-Q6_K | 12.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mergekit-community/test_4_smarts_plz_b_ablit | mergekit-community | "2024-12-25T02:53:50Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:huihui-ai/Llama-3.1-Tulu-3-8B-abliterated",
"base_model:merge:huihui-ai/Llama-3.1-Tulu-3-8B-abliterated",
"base_model:huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated",
"base_model:merge:huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated",
"base_model:huihui-ai/deepthought-8b-abliterated",
"base_model:merge:huihui-ai/deepthought-8b-abliterated",
"base_model:migtissera/Tess-2.0-Llama-3-8B",
"base_model:merge:migtissera/Tess-2.0-Llama-3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-25T02:48:25Z" | ---
base_model:
- huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated
- migtissera/Tess-2.0-Llama-3-8B
- Pedro13543/Nice_mix_LoRa
- huihui-ai/deepthought-8b-abliterated
- huihui-ai/Llama-3.1-Tulu-3-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [migtissera/Tess-2.0-Llama-3-8B](https://huggingface.co/migtissera/Tess-2.0-Llama-3-8B) + [Pedro13543/Nice_mix_LoRa](https://huggingface.co/Pedro13543/Nice_mix_LoRa) as a base.
### Models Merged
The following models were included in the merge:
* [huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated](https://huggingface.co/huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated)
* [huihui-ai/deepthought-8b-abliterated](https://huggingface.co/huihui-ai/deepthought-8b-abliterated)
* [huihui-ai/Llama-3.1-Tulu-3-8B-abliterated](https://huggingface.co/huihui-ai/Llama-3.1-Tulu-3-8B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: huihui-ai/Skywork-o1-Open-Llama-3.1-8B-abliterated
- model: huihui-ai/Llama-3.1-Tulu-3-8B-abliterated
- model: huihui-ai/deepthought-8b-abliterated
merge_method: model_stock
base_model: migtissera/Tess-2.0-Llama-3-8B+Pedro13543/Nice_mix_LoRa
dtype: bfloat16
```
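To reproduce a merge from a config like the one above, one option (a sketch; the config and output paths are arbitrary) is to drive mergekit's `mergekit-yaml` entry point from Python:
```python
import subprocess

# run mergekit's CLI on the saved configuration; writes the merged model to ./merged
subprocess.run(["mergekit-yaml", "merge-config.yaml", "./merged"], check=True)
```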
|
Sumail/Megatron02_1_7b | Sumail | "2024-03-07T08:23:31Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lgodwangl/new_01m",
"base_model:merge:lgodwangl/new_01m",
"base_model:tomaszki/mistral-0",
"base_model:merge:tomaszki/mistral-0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-07T08:16:33Z" | ---
base_model:
- tomaszki/mistral-0
- lgodwangl/new_01m
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
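SLERP interpolates between the two parents along the great circle of their weight vectors rather than a straight line; the per-tensor sketch below (plain NumPy, not mergekit's actual implementation) shows what the `t` parameter in the configuration controls.
```python
import numpy as np

def slerp(p0: np.ndarray, p1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # angle between the two normalized weight vectors
    omega = np.arccos(np.clip(np.dot(p0 / np.linalg.norm(p0),
                                     p1 / np.linalg.norm(p1)), -1.0, 1.0))
    so = np.sin(omega)
    if np.isclose(so, 0.0):  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * p0 + t * p1
    return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1
```
In the config below, `t` varies per layer and per filter (`self_attn` vs. `mlp`), so different parts of the network sit at different points between the two parents.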
### Models Merged
The following models were included in the merge:
* [tomaszki/mistral-0](https://huggingface.co/tomaszki/mistral-0)
* [lgodwangl/new_01m](https://huggingface.co/lgodwangl/new_01m)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: lgodwangl/new_01m
layer_range: [0, 32]
- model: tomaszki/mistral-0
layer_range: [0, 32]
merge_method: slerp
base_model: tomaszki/mistral-0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
adammandic87/c488945a-a875-4104-82bb-e12f932df89c | adammandic87 | "2025-01-18T08:39:56Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b",
"base_model:adapter:samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b",
"region:us"
] | null | "2025-01-18T08:35:48Z" | ---
library_name: peft
base_model: samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c488945a-a875-4104-82bb-e12f932df89c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/c488945a-a875-4104-82bb-e12f932df89c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 49def120-0589-4d75-a714-b567b410892c
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 49def120-0589-4d75-a714-b567b410892c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c488945a-a875-4104-82bb-e12f932df89c
This model is a fine-tuned version of [samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b](https://huggingface.co/samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1281
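As with other axolotl LoRA runs, the adapter loads on top of its base model; a minimal sketch (untested):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# load the base checkpoint, then apply this adapter on top
base = AutoModelForCausalLM.from_pretrained("samoline/d4cbc50c-515a-491e-97ac-dcc4fadd483b")
model = PeftModel.from_pretrained(base, "adammandic87/c488945a-a875-4104-82bb-e12f932df89c")
```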
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (OptimizerNames.ADAMW_BNB, 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1581 | 0.0002 | 1 | 1.1282 |
| 1.1193 | 0.0006 | 3 | 1.1280 |
| 1.085 | 0.0012 | 6 | 1.1265 |
| 0.9436 | 0.0018 | 9 | 1.1281 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF | mradermacher | "2024-11-10T13:33:10Z" | 33 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"base_model:ThatsGroes/gemma-2-27b-it-SkoleGPT",
"base_model:quantized:ThatsGroes/gemma-2-27b-it-SkoleGPT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-10T03:22:27Z" | ---
base_model: ThatsGroes/gemma-2-27b-it-SkoleGPT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ThatsGroes/gemma-2-27b-it-SkoleGPT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
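A minimal `llama-cpp-python` sketch for this repo (the runtime choice and chat-style usage are assumptions):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch one quant from this repo and run a short chat turn
path = hf_hub_download(
    repo_id="mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF",
    filename="gemma-2-27b-it-SkoleGPT.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hej!"}])
print(out["choices"][0]["message"]["content"])
```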
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ1_S.gguf) | i1-IQ1_S | 6.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ1_M.gguf) | i1-IQ1_M | 6.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ2_S.gguf) | i1-IQ2_S | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ2_M.gguf) | i1-IQ2_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q2_K.gguf) | i1-Q2_K | 10.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q4_0.gguf) | i1-Q4_0 | 15.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-SkoleGPT-i1-GGUF/resolve/main/gemma-2-27b-it-SkoleGPT.i1-Q6_K.gguf) | i1-Q6_K | 22.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yuiseki/YuisekinAI-mistral-en-1.1B | yuiseki | "2024-04-09T23:09:01Z" | 95 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-09T23:06:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
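Until the authors add an official snippet, a minimal text-generation sketch (the prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the Mistral-architecture checkpoint and generate a short continuation
tok = AutoTokenizer.from_pretrained("yuiseki/YuisekinAI-mistral-en-1.1B")
model = AutoModelForCausalLM.from_pretrained("yuiseki/YuisekinAI-mistral-en-1.1B")
ids = tok("Hello, my name is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```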
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Davalejo/vitModel | Davalejo | "2024-09-18T17:36:31Z" | 176 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-09-18T17:04:28Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vitModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitModel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0137
- Accuracy: 1.0
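A quick smoke test via the image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned ViT checkpoint and classify one image
clf = pipeline("image-classification", model="Davalejo/vitModel")
print(clf("example.jpg"))  # top label/score pairs
```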
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.149 | 3.8462 | 500 | 0.0137 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Youssef1234/whisper-base-specAug | Youssef1234 | "2024-05-31T18:22:08Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Youssef1234/whisper-base-en-native",
"base_model:finetune:Youssef1234/whisper-base-en-native",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-31T14:09:15Z" | ---
license: apache-2.0
base_model: Youssef1234/whisper-base-en-native
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-specAug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-specAug
This model is a fine-tuned version of [Youssef1234/whisper-base-en-native](https://huggingface.co/Youssef1234/whisper-base-en-native) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3759
- Wer: 16.4211
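A quick transcription smoke test via the ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned Whisper checkpoint and transcribe one clip
asr = pipeline("automatic-speech-recognition", model="Youssef1234/whisper-base-specAug")
print(asr("sample.wav")["text"])
```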
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0636 | 0.16 | 200 | 0.3404 | 15.5724 |
| 0.0404 | 0.32 | 400 | 0.3638 | 15.9867 |
| 0.0345 | 0.48 | 600 | 0.3759 | 16.4211 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
|
Ant3wan95/t5_small_finetuned_v2 | Ant3wan95 | "2024-12-18T19:36:42Z" | 116 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-18T19:36:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
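Until the authors add an official snippet, a minimal text2text sketch; the fine-tuning task is undocumented, so the prompt format is an assumption:
```python
from transformers import pipeline

# load the fine-tuned T5 checkpoint via the text2text pipeline
t2t = pipeline("text2text-generation", model="Ant3wan95/t5_small_finetuned_v2")
print(t2t("summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```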
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kiranpantha/whisper-large-v3-nepali-minutes-3-4-9010-peft-dora-speakerSpeakerCV3-rank8-targetxqv-epochs3 | kiranpantha | "2025-03-20T13:54:17Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/dataset-for-peft-cv-nepds",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | "2025-03-20T08:12:36Z" | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/dataset-for-peft-cv-nepds
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6010
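This repo holds a DoRA/PEFT adapter for the stated Whisper base; a minimal loading sketch (untested):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# load the Whisper base model, then apply this adapter on top
base = WhisperForConditionalGeneration.from_pretrained("kiranpantha/whisper-large-v3-nepali")
model = PeftModel.from_pretrained(
    base,
    "kiranpantha/whisper-large-v3-nepali-minutes-3-4-9010-peft-dora-speakerSpeakerCV3-rank8-targetxqv-epochs3",
)
processor = WhisperProcessor.from_pretrained("kiranpantha/whisper-large-v3-nepali")
```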
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 1.0190 |
| No log | 2.0 | 6 | 0.7062 |
| No log | 3.0 | 9 | 0.6010 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |