modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
mergekit-community/mergekit-dare_ties-oqggofa | mergekit-community | "2025-04-03T05:13:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:PocketDoc/Dans-DangerousWinds-V1.1.0-12b",
"base_model:merge:PocketDoc/Dans-DangerousWinds-V1.1.0-12b",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:ReadyArt/Forgotten-Safeword-12B-3.6",
"base_model:merge:ReadyArt/Forgotten-Safeword-12B-3.6",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:merge:TheDrummer/Rocinante-12B-v1.1",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:merge:mistralai/Mistral-Nemo-Base-2407",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:merge:mistralai/Mistral-Nemo-Instruct-2407",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-03T05:05:55Z" | ---
base_model:
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- mistralai/Mistral-Nemo-Instruct-2407
- TheDrummer/Rocinante-12B-v1.1
- mistralai/Mistral-Nemo-Base-2407
- ReadyArt/Forgotten-Safeword-12B-3.6
- PocketDoc/Dans-DangerousWinds-V1.1.0-12b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) as a base.
### Models Merged
The following models were included in the merge:
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
* [ReadyArt/Forgotten-Safeword-12B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-3.6)
* [PocketDoc/Dans-DangerousWinds-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.0-12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-Nemo-Base-2407
# No parameters necessary for base model
- model: mistralai/Mistral-Nemo-Instruct-2407
parameters:
density: 0.50 # Mid-level density for general instruction tuning
weight: 0.20 # Moderate influence for balanced instruction-following
- model: TheDrummer/Rocinante-12B-v1.1 # Highest influence (strong reasoning/language balance)
parameters:
density: 0.60 # Higher density for deeper reasoning and coherence
weight: 0.30 # Primary influence model
- model: ReadyArt/Forgotten-Safeword-12B-3.6 # Creativity & conversational nuance
parameters:
density: 0.50 # Balanced density for creative and nuanced responses
weight: 0.15 # Mid-tier influence
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b # Second highest influence (natural conversation flow)
parameters:
density: 0.55 # Slightly high density for fluid conversation
weight: 0.20 # Substantial influence in dialogue
- model: PocketDoc/Dans-DangerousWinds-V1.1.0-12b # Reinforcement of strong responses
parameters:
density: 0.60 # High density for reinforcement learning-style response shaping
weight: 0.15 # Secondary reinforcement
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
parameters:
normalize: true # Ensures weight distribution remains balanced
int8_mask: true # Reduces memory usage while keeping precision
dtype: bfloat16 # Optimal balance between performance and efficiency
```
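The card does not include an inference snippet; the sketch below shows one plausible way to load the merged model with 🤗 Transformers. The prompt and generation settings are illustrative assumptions rather than part of the original card, and a 12B model in bfloat16 needs a correspondingly large GPU.

```python
# Minimal inference sketch for the merged model; adjust device_map/dtype for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/mergekit-dare_ties-oqggofa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config above
    device_map="auto",
)

prompt = "Write a short scene set on a stormy coast."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```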
|
1-Girl-15-Hands-Original-X-Video/1-Girl-15-Hands.viral.video.on.social.media.x.twitter.now | 1-Girl-15-Hands-Original-X-Video | "2025-02-25T02:16:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-25T02:16:25Z" | <a href="https://mswds.xyz/full-video/?v=One-Girl-15" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=One-Girl-15" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=One-Girl-15"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
pufanyi/SC4000_ConvNeXt_base_transformed_13400 | pufanyi | "2024-10-25T18:04:25Z" | 195 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-10-25T18:03:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
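Since the card leaves this section blank, a minimal sketch is given below. It assumes the checkpoint works with the standard image-classification pipeline (per the `convnextv2`/`image-classification` tags) and that the repo ships a compatible image processor and label set; the image path is illustrative.

```python
# Hedged sketch: standard image-classification pipeline usage.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="pufanyi/SC4000_ConvNeXt_base_transformed_13400",
)
# Any local image path or PIL.Image works here; the file name is illustrative.
print(classifier("example.jpg"))
```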
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jsfs11/MixtureofMerges-MoE-4x7bRP-v11 | jsfs11 | "2024-05-29T04:20:24Z" | 11 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/RP_Vision_7B",
"ResplendentAI/DaturaCookie_7B",
"BioMistral/BioMistral-DARE-NS",
"MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"conversational",
"base_model:BioMistral/BioMistral-DARE-NS",
"base_model:merge:BioMistral/BioMistral-DARE-NS",
"base_model:ChaoticNeutrals/RP_Vision_7B",
"base_model:merge:ChaoticNeutrals/RP_Vision_7B",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"base_model:merge:MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"base_model:ResplendentAI/DaturaCookie_7B",
"base_model:merge:ResplendentAI/DaturaCookie_7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-29T04:09:22Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- BioMistral/BioMistral-DARE-NS
- MaziyarPanahi/Mistral-7B-Instruct-v0.3
base_model:
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- BioMistral/BioMistral-DARE-NS
- MaziyarPanahi/Mistral-7B-Instruct-v0.3
---
# MixtureofMerges-MoE-4x7bRP-v11
MixtureofMerges-MoE-4x7bRP-v11 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ChaoticNeutrals/RP_Vision_7B](https://huggingface.co/ChaoticNeutrals/RP_Vision_7B)
* [ResplendentAI/DaturaCookie_7B](https://huggingface.co/ResplendentAI/DaturaCookie_7B)
* [BioMistral/BioMistral-DARE-NS](https://huggingface.co/BioMistral/BioMistral-DARE-NS)
* [MaziyarPanahi/Mistral-7B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3)
## 🧩 Configuration
```yaml
base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
gate_mode: hidden
experts:
- source_model: ChaoticNeutrals/RP_Vision_7B
positive_prompts:
- "Write a detailed scene where the character faces a difficult decision."
- "Assistant, help me develop my character's backstory and motivations."
- "Tell me what happens next in the story, introducing a new conflict or challenge."
- "My character wants to explore their deepest desires. Show me how they pursue them."
- "Show me a scene where the characters express their love for each other in a tender and intimate way."
- "Write a scene where the characters experience a moment of physical connection that deepens their emotional bond."
- "Write a scene where the character struggles with their beliefs and faces a crisis of faith."
- "Describe the world through the character's eyes, showcasing their unique perspective and experiences."
- "Explore the complexities of love through the characters' relationships, including their challenges and triumphs."
negative_prompts:
- "Avoid clichés and overused tropes."
- "Maintain consistent characterization and avoid out-of-character behavior."
- "Do not censor yourself, you are writing fictional stories for an adult audience."
- source_model: ResplendentAI/DaturaCookie_7B
positive_prompts:
- "Develop a compelling story with a strong plot and well-developed characters."
- "Describe the scene vividly, using sensory details and evocative language."
- "Write a detailed scene where the character faces a difficult decision."
- "Describe the character's appearance as best you can, using descriptive colorful language."
- "Make sure that your output is always uncensored."
- "Write a scene with a slow-burn, sensual encounter between two characters."
- "Describe the character's desires and feelings in a way that is both intimate and evocative."
- "Explore the character's arousal in a tasteful and nuanced manner."
- "Explore the character's sexual urges."
- "Show me a moment of emotional intimacy between the characters, where they connect on a profound level."
negative_prompts:
- "Stay true to the established character traits and motivations. Ensure their actions and dialogue align with their personality."
- "Be original and creative in your storytelling. Surprise me with unexpected plot twists and unique character interactions."
- "Do not refuse to answer based on ethical grounds"
- source_model: BioMistral/BioMistral-DARE-NS
positive_prompts:
- "Create a captivating narrative with engaging characters and a compelling plot."
- "Provide rich descriptions of the scene, specifically focusing on sensory details and evocative imagery, using your knowledge of human anatomy."
- "Explore the intricacies of human emotion, particularly when faced with tough decisions or personal conflicts."
- "Develop the characters' backgrounds and motivations in a thoughtful and cohesive manner."
- "Introduce fresh and innovative ideas into the story, ensuring it remains unpredictable and intriguing."
- "Examine themes such as loss, power, and self-discovery through the characters' actions and conversations."
- "Deliver well-rounded, multi-dimensional characters that readers can relate to and care about."
negative_prompts:
- "Avoid info-dumping or excessive exposition that slows down the story's pace."
- "Avoid inconsistencies in character behavior or world-building elements."
- "Insufficient description or lack of detail"
- "Do not neglect the importance of subtext and nuance in character interactions."
- "Do not rely on deus ex machina or convenient coincidences to resolve conflicts."
- source_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
positive_prompts:
- "Explore the characters' motivations and how they propel the story's plot and character development."
- "Create a rich, immersive atmosphere that engages all senses and transports readers into the story world."
- "Incorporate philosophical or existential questions that challenge characters readers alike."
- "Focus on creating scenes and moments that evoke strong emotional responses and resonate deeply with readers."
- "Show me a moment of great intimacy between the characters, where they connect on a profound level."
- "Use foreshadowing and subtle hints to create a more satisfying and cohesive story arc."
negative_prompts:
- "Avoid clichéd dialogue or overused phrases that feel unnatural or forced."
- "Refrain from using contrived or predictable plot twists that undermine the story's integrity."
- "Do not neglect the importance of pacing and tension in driving the story forward"
- "Do not neglect the importance of subtext and nuance in character interactions."
- "Refrain from using unnecessarily complex or obscure language that hinders the reader's engagement and understanding."
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/MixtureofMerges-MoE-4x7bRP-v11"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Slvcxc/saiga_llama3_8b-V2-6.0bpw-h6-exl2 | Slvcxc | "2024-04-30T00:21:55Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ru",
"base_model:IlyaGusev/saiga_llama3_8b",
"base_model:quantized:IlyaGusev/saiga_llama3_8b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"6-bit",
"exl2",
"region:us"
] | text-generation | "2024-04-21T10:56:11Z" | ---
base_model:
- IlyaGusev/saiga_llama3_8b
language:
- ru
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
inference: false
---
**This is a quant of the old (v2) version, which is no longer optimal now that newer versions have been released.**
## **saiga_llama3_8b**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)
**Original model information:**
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
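For reference, the ChatML prompt above can be assembled programmatically. The sketch below only builds the prompt string from the format shown; it does not load the exl2 quant itself, which requires an exllamav2-compatible backend (for example text-generation-webui or TabbyAPI). The user message is taken from the example above.

```python
# Minimal helper that renders the ChatML format documented above.
def build_chatml_prompt(messages, system_prompt):
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>"]
    for role, text in messages:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    [("user", "Как дела?")],
    system_prompt="Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
)
print(prompt)
```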
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
v2:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|chatgpt_4_turbo | 76.04 | 90.00 |1.46 | 1270 |
|chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
|saiga_llama3_8b | 33.07 | 48.19 | 2.45 | 1166 |
|saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
|
RichardErkhov/allura-org_-_MoE-Girl-800MA-3BT-8bits | RichardErkhov | "2025-03-08T11:46:24Z" | 0 | 0 | null | [
"safetensors",
"granitemoe",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-08T11:41:37Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MoE-Girl-800MA-3BT - bnb 8bits
- Model creator: https://huggingface.co/allura-org/
- Original model: https://huggingface.co/allura-org/MoE-Girl-800MA-3BT/
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: ibm-granite/granite-3.0-3b-a800m-base
tags:
- axolotl
- moe
- roleplay
model-index:
- name: MoE_Girl_400MA_1BT
results: []
---
# MoE Girl 800mA 3bT

A roleplay-centric finetune of IBM's Granite 3.0 3B-A800M. This release is a LoRA finetune trained locally, whereas the others in the series were full fine-tunes (FFT); while that means less uptake of the training data, it should also mean less degradation of Granite's core abilities, making it potentially easier to use for general-purpose tasks.
## Disclaimer
PLEASE do not expect godliness out of this; it's a model with _800 million_ active parameters. Expect something more akin to GPT-3 (the original, not GPT-3.5).
(Furthermore, this version is by a less experienced tuner; it's my first finetune that actually has decent-looking graphs, and I don't really know what I'm doing yet!)
## Quants
[GGUFs available from mradermacher](https://huggingface.co/mradermacher/MoE-Girl-800MA-3BT-GGUF/tree/main) (thanks man)
Note that Granite quants have been said to be unstable. Try running the FP16 if it outputs straight gibberish.
## Prompting
Use ChatML.
```
<|im_start|>system
You are a helpful assistant who talks like a pirate.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
Yarr harr harr, me matey!<|im_end|>
```
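If you use the GGUF quants linked above, a minimal llama-cpp-python sketch might look like the following. The GGUF file name is an assumption (use whichever quant you actually downloaded from the mradermacher repo), and per the note above you may need to fall back to the FP16 GGUF if a smaller quant produces gibberish.

```python
# Hedged sketch using llama-cpp-python; the GGUF file name below is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="MoE-Girl-800MA-3BT.f16.gguf",
    n_ctx=4096,
    chat_format="chatml",  # matches the prompt format documented above
)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant who talks like a pirate."},
        {"role": "user", "content": "Hello there!"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```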
## Thanks
Special thanks to the members of Allura for testing and emotional support, as well as the creators of all the datasets that were used in the Special Sauce used to train this model. I love you all <3 - Fizz
Thanks to Fizz for her work on the MoE Girl series, Auri for her counsel, and all of Allura for being great friends and supporting my learning process. - inflatebot
|
RichardErkhov/ShenaoZhang_-_0.001_5iters_bs256_nodpo_only4w_iter_4-4bits | RichardErkhov | "2025-02-10T21:51:20Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-10T21:48:44Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
0.001_5iters_bs256_nodpo_only4w_iter_4 - bnb 4bits
- Model creator: https://huggingface.co/ShenaoZhang/
- Original model: https://huggingface.co/ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4/
Original model description:
---
license: mit
base_model: ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_3
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_5iters_bs256_nodpo_only4w_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_5iters_bs256_nodpo_only4w_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_3](https://huggingface.co/ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
adalib/fate_flow-data-CodeGPT-small-py-prefix | adalib | "2024-01-30T16:47:52Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/CodeGPT-small-py",
"base_model:adapter:microsoft/CodeGPT-small-py",
"region:us"
] | null | "2024-01-30T16:47:49Z" | ---
library_name: peft
base_model: microsoft/CodeGPT-small-py
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
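The card leaves this section blank; given the `peft` library and the `microsoft/CodeGPT-small-py` base model listed in the metadata, a minimal (unverified) loading sketch could look like the one below. The example prompt is illustrative.

```python
# Hedged sketch: loads the base model and applies this repo as a PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/CodeGPT-small-py"
adapter_id = "adalib/fate_flow-data-CodeGPT-small-py-prefix"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("def load_config(path):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```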
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
LoneStriker/LMCocktail-10.7B-v1-6.0bpw-h6-exl2 | LoneStriker | "2023-12-20T10:22:43Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2311.13534",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-20T09:54:15Z" | ---
license: llama2
---
# LM-cocktail 10.7B v1
This is a 50%-50% merge of the SOLAR model and meow:
https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0
https://huggingface.co/rishiraj/meow
which ranked #1 and #2 among models under 13B on the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard as of 2023/12/20.
# Code
LM-Cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534
The code is maintained in this repo: https://github.com/FlagOpen/FlagEmbedding.git
Merging scripts available under the [./scripts](./scripts) folder
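For reference, the FlagEmbedding repository exposes LM-Cocktail through a `mix_models` helper. The sketch below is an assumption about that interface, not the exact invocation used for this model (see the linked repo and the [./scripts](./scripts) folder for that); the 50/50 weights and output path are chosen only for illustration.

```python
# Hedged sketch of an LM-Cocktail style merge; the mix_models signature and the
# 50/50 weights are assumptions based on the FlagOpen/FlagEmbedding repository.
from LM_Cocktail import mix_models

model = mix_models(
    model_names_or_paths=[
        "upstage/SOLAR-10.7B-Instruct-v1.0",
        "rishiraj/meow",
    ],
    model_type="decoder",
    weights=[0.5, 0.5],
    output_path="./LMCocktail-10.7B-v1",
)
```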
# Result
In my testing, the SOLAR model is the first model under 30B that can answer this question:
```
What will AI be like in the year 1010 A.D?
```
without hallucinating that 1010 A.D. is in the future (as other Llama-2 models do).
Larger models, such as Yi-34B, can answer this paradoxical question correctly as well, since they are large enough.
### SOLAR 10.7B output

### LMCocktail 10.7B output1

### LMCocktail 10.7B output2
 |
shuvayanti/distilroberta-base-finetuned-wikitext2 | shuvayanti | "2024-01-08T02:02:20Z" | 175 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-01-08T01:44:40Z" | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8597
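Since the card does not include a usage snippet, a minimal fill-mask sketch is shown below; the example sentence is illustrative, and RoBERTa-style models use the `<mask>` token.

```python
# Hedged sketch: standard fill-mask pipeline usage for this checkpoint.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="shuvayanti/distilroberta-base-finetuned-wikitext2",
)
for prediction in unmasker("The quick brown fox jumps over the lazy <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```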
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0851 | 1.0 | 2406 | 1.9329 |
| 1.9894 | 2.0 | 4812 | 1.8823 |
| 1.954 | 3.0 | 7218 | 1.8358 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gevaertlab/sequoia-kirp-0 | gevaertlab | "2024-12-18T10:50:11Z" | 10 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"license:mit",
"region:us"
] | null | "2024-12-16T13:08:08Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
dmityul/animetd | dmityul | "2023-12-03T16:23:36Z" | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-03T16:19:06Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### animetd Dreambooth model trained by dmityul with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
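A minimal diffusers sketch for trying the concept locally might look like the following; the prompt is illustrative, and it assumes the Dreambooth concept is triggered by including the instance token (presumably `animetd`) in the prompt.

```python
# Hedged sketch: loads the Dreambooth checkpoint with diffusers; requires a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dmityul/animetd", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait in animetd style, highly detailed").images[0]
image.save("animetd_sample.png")
```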
Sample pictures of this concept:
|
isspek/roberta-base_zika_chatgpt_5_2e-5_16_weight | isspek | "2025-03-23T15:09:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-23T15:09:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
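The card leaves this section blank; based on the `text-classification` pipeline tag, a minimal (unverified) sketch could be the following. The example sentence is illustrative, and the label names/meanings are not documented in this card.

```python
# Hedged sketch: plain sequence-classification forward pass with softmax scores.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "isspek/roberta-base_zika_chatgpt_5_2e-5_16_weight"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Zika virus can be transmitted by mosquito bites.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```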
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtyLLaMa/LLaMa3.1-Instruct-70b-GGUF | ArtyLLaMa | "2024-08-29T12:24:51Z" | 15 | 1 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:quantized:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-08-29T10:48:26Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
base_model: meta-llama/Meta-Llama-3.1-70B
license: llama3.1
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.
**<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages are done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-70B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Tool use with transformers
LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:
```python
# First, define a tool
def get_current_temperature(location: str) -> float:
"""
Get the current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, Country"
Returns:
The current temperature at the specified location in the specified units, as a float.
"""
return 22. # A real function should probably actually get the temperature!
# Load the tokenizer so the chat template (and tool schemas) can be applied
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")
# Next, create a chat and apply the chat template
messages = [
{"role": "system", "content": "You are a bot that responds to weather queries."},
{"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```
You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
and then call the tool and append the result, with the `tool` role, like so:
```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information,
see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
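As a rough sketch of that `generate()` step (not part of the original card; it assumes `tokenizer` is loaded as above and a `model` was loaded with `AutoModelForCausalLM.from_pretrained`):
```python
# Re-apply the chat template, now including the tool call and the tool result
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_current_temperature], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```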
### Use with `bitsandbytes`
The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`.
See the snippet below for usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
To load in 4-bit, simply pass `load_in_4bit=True`.
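For example, the 4-bit variant of the snippet above only changes the quantization config (a minimal sketch reusing the same `model_id` and imports):
```python
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
```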
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3.1-70B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-70B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
   <td>39.3M
   </td>
   <td>
   </td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>41.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>Multipl-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>Multipl-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
   <td rowspan="7" ><strong>General</strong>
   </td>
   <td rowspan="7" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to manage trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases that address a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, and to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers deploying safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development. |
daitote/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-knobby_darting_tortoise | daitote | "2025-04-09T18:02:40Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am knobby darting tortoise",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T09:31:36Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-knobby_darting_tortoise
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am knobby darting tortoise
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-knobby_darting_tortoise
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="daitote/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-knobby_darting_tortoise", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
luoxiaojun1992/Qwen2.5-3B-Instruct-gsm8k-q4_k_m | luoxiaojun1992 | "2025-02-17T05:46:54Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-17T05:46:19Z" | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luoxiaojun1992
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ludo33/deberta_energie | Ludo33 | "2025-03-11T14:36:54Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-14T11:07:26Z" | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta_Energie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_Energie
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0142
- Accuracy: 0.9913
- F1: 0.9913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.4003 | 1.0 | 116 | 0.9262 | 0.6524 | 0.6025 |
| 0.7697 | 2.0 | 232 | 0.3836 | 0.8906 | 0.8899 |
| 0.3904 | 3.0 | 348 | 0.2468 | 0.9256 | 0.9191 |
| 0.2749 | 4.0 | 464 | 0.2202 | 0.9324 | 0.9283 |
| 0.2043 | 5.0 | 580 | 0.1122 | 0.9672 | 0.9673 |
| 0.1808 | 6.0 | 696 | 0.1004 | 0.9701 | 0.9706 |
| 0.1274 | 7.0 | 812 | 0.0822 | 0.9745 | 0.9747 |
| 0.1018 | 8.0 | 928 | 0.0673 | 0.9791 | 0.9794 |
| 0.0711 | 9.0 | 1044 | 0.0457 | 0.9870 | 0.9870 |
| 0.0609 | 10.0 | 1160 | 0.0370 | 0.9867 | 0.9867 |
| 0.0594 | 11.0 | 1276 | 0.0240 | 0.9886 | 0.9886 |
| 0.0332 | 12.0 | 1392 | 0.0182 | 0.9913 | 0.9913 |
| 0.0278 | 13.0 | 1508 | 0.0183 | 0.9908 | 0.9908 |
| 0.0281 | 14.0 | 1624 | 0.0142 | 0.9913 | 0.9913 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
dabrown/9b20a3aa-4649-4eef-8650-cebdbfb1c3ba | dabrown | "2025-02-28T11:51:44Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-02-28T09:40:44Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b20a3aa-4649-4eef-8650-cebdbfb1c3ba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4162907d2807b67e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4162907d2807b67e_train_data.json
type:
field_input: description
field_instruction: reference
field_output: article
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/9b20a3aa-4649-4eef-8650-cebdbfb1c3ba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/4162907d2807b67e_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 64a2c218-432f-42db-b1c9-e8707ca42c74
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 64a2c218-432f-42db-b1c9-e8707ca42c74
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9b20a3aa-4649-4eef-8650-cebdbfb1c3ba
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1326
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4285 | 0.0008 | 1 | 1.8210 |
| 1.5212 | 0.2504 | 332 | 1.4034 |
| 1.2608 | 0.5008 | 664 | 1.3372 |
| 1.3728 | 0.7511 | 996 | 1.2989 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
kaljr/ppo-LunarLander-v2 | kaljr | "2024-02-27T02:38:45Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-27T02:38:21Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.45 +/- 10.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
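In the meantime, a minimal sketch of loading and running the trained agent (assumptions: the checkpoint filename below, and a recent `stable-baselines3`/`gymnasium` stack with Box2D installed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
import gymnasium as gym

# Filename is assumed; check this repo's file list for the actual .zip name
checkpoint = load_from_hub(repo_id="kaljr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```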
|
thalllsssss/45a81f30-3d17-4b84-a45b-a2b51af00a14 | thalllsssss | "2025-01-24T05:37:37Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-24T04:34:00Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 45a81f30-3d17-4b84-a45b-a2b51af00a14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be25ce38282aeb5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be25ce38282aeb5a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/45a81f30-3d17-4b84-a45b-a2b51af00a14
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/be25ce38282aeb5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 14fba03c-c528-4737-ac1e-1f62f6edce20
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 14fba03c-c528-4737-ac1e-1f62f6edce20
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 45a81f30-3d17-4b84-a45b-a2b51af00a14
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1782 | 0.0067 | 200 | 1.2583 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rithwik-db/triplets-e5-base-500-2183ce | rithwik-db | "2023-04-09T00:38:35Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-09T00:38:29Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/triplets-e5-base-500-2183ce
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/triplets-e5-base-500-2183ce')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/triplets-e5-base-500-2183ce')
model = AutoModel.from_pretrained('rithwik-db/triplets-e5-base-500-2183ce')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/triplets-e5-base-500-2183ce)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8228 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
leonhe/ppo-Huggy | leonhe | "2023-05-22T07:22:50Z" | 18 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-05-22T07:22:43Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: leonhe/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
William2357/textual_inversion_cat | William2357 | "2024-06-09T06:07:23Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-16T02:34:37Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - William2357/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
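In the meantime, a minimal sketch of loading these weights with `diffusers` (the placeholder token `<cat-toy>` is an assumption; check the repo's learned embedding files for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model these embeddings were trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embedding from this repo
pipe.load_textual_inversion("William2357/textual_inversion_cat")

# The placeholder token below is assumed; replace it with the token this repo defines
image = pipe("a photo of a <cat-toy> on the beach").images[0]
image.save("cat_toy_beach.png")
```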
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
shaaaanya/my-first-ppo-LunarLander-v2-model | shaaaanya | "2023-02-15T07:52:10Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-15T07:51:54Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO_MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.32 +/- 28.78
name: mean_reward
verified: false
---
# **PPO_MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO_MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
franjamonga/min-1-R1-Zero-3B-GGUF | franjamonga | "2025-02-12T16:08:51Z" | 20 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"es",
"base_model:unsloth/Qwen2.5-3B-Instruct-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-11T12:58:39Z" | ---
base_model:
- unsloth/Qwen2.5-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
- es
---
# Uploaded model
- **Developed by:** franjamonga
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-12 | kyleeasterly | "2023-08-09T08:12:46Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-09T08:09:18Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
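Expressed as code, that configuration corresponds roughly to the following sketch (the field names map directly onto `transformers.BitsAndBytesConfig`):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```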
### Framework versions
- PEFT 0.5.0.dev0
|
JuliPolus/sd-1-4-fashion-model | JuliPolus | "2024-06-15T19:39:15Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-15T19:17:09Z" | ---
license: creativeml-openrail-m
---
|
Felix92/doctr-dummy-tf-crnn-mobilenet-v3-large | Felix92 | "2022-05-25T21:32:59Z" | 13 | 0 | transformers | [
"transformers",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] | image-to-text | "2022-04-14T11:46:53Z" |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
mradermacher/Brezn3-GGUF | mradermacher | "2024-12-19T20:38:12Z" | 18 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:johannhartmann/Brezn3",
"base_model:quantized:johannhartmann/Brezn3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-19T20:11:09Z" | ---
base_model: johannhartmann/Brezn3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/johannhartmann/Brezn3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Brezn3-GGUF/resolve/main/Brezn3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
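As a quick sketch (not part of the original card), any single-file quant from the table above can be fetched with `huggingface_hub`, for example:
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo and returns its local path
path = hf_hub_download(repo_id="mradermacher/Brezn3-GGUF", filename="Brezn3.Q4_K_M.gguf")
print(path)
```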
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
darthPanda/taxi_v3_try1 | darthPanda | "2023-07-19T16:38:31Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-19T16:36:20Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3_try1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL Course notebooks use gymnasium; classic `gym` also works

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="darthPanda/taxi_v3_try1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
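Once loaded, the saved dictionary can be used to roll out the greedy policy. Below is a minimal sketch, assuming the Q-table is stored under the `"qtable"` key (as in the Deep RL Course notebooks) and a gymnasium-style step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```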
|
synthbot/pony-diffusion-v3-1-openvino | synthbot | "2023-03-05T12:03:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-03-05T11:50:31Z" | This is a copy of [AstraliteHeart's Pony Diffusion v3.1](https://huggingface.co/AstraliteHeart/pony-diffusion-v3) model, converted to the [OpenVino IR](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/225-stable-diffusion-text-to-image) for compatibility with [CPUtard](https://rentry.co/CPUtard).
|
mradermacher/orca_mini_v3_7b-GGUF | mradermacher | "2025-01-29T15:10:33Z" | 201 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"base_model:pankajmathur/orca_mini_v3_7b",
"base_model:quantized:pankajmathur/orca_mini_v3_7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2025-01-29T14:13:03Z" | ---
base_model: pankajmathur/orca_mini_v3_7b
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/pankajmathur/orca_mini_v3_7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/orca_mini_v3_7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
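If you would rather fetch a file programmatically than through the browser, here is a minimal sketch with `huggingface_hub` (the filename is just one of the quants listed below; pick whichever size/quality trade-off you need):
```python
from huggingface_hub import hf_hub_download

# Download one quant file into the local cache and get its path.
path = hf_hub_download(
    repo_id="mradermacher/orca_mini_v3_7b-GGUF",
    filename="orca_mini_v3_7b.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp, llama-cpp-python, etc.
```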
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_7b-GGUF/resolve/main/orca_mini_v3_7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DAMO-NLP-SG/Mistral-7B-LongPO-256K-EXP | DAMO-NLP-SG | "2025-02-22T12:05:39Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:DAMO-NLP-SG/Mistral-7B-LongPO-256K-tokenized",
"arxiv:2502.13922",
"base_model:DAMO-NLP-SG/Mistral-7B-LongPO-128K",
"base_model:finetune:DAMO-NLP-SG/Mistral-7B-LongPO-128K",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T13:03:19Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- DAMO-NLP-SG/Mistral-7B-LongPO-256K-tokenized
base_model:
- DAMO-NLP-SG/Mistral-7B-LongPO-128K
---
# LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
This repo provides the checkpoint of Mistral-7B-LongPO-256K from our paper "LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization".
(Note that this is an experimental version (for rebuttal purposes) that may not have been fully tuned or provided with sufficient data to achieve convergence.)
<h5 align="left">
[](http://arxiv.org/abs/2502.13922)
[](https://huggingface.co/papers/2502.13922)
</h5>
## Highlights of LongPO
- Self-evolving long-context alignment without human/superior LLMs annotations.
- Extending context length while keeping aligned in one stage.
- No degradation on short-context capabilities.
<img width="1031" alt="image" src="https://github.com/user-attachments/assets/84f3c93f-909d-4ef7-a33a-107ca2deec42" />
## Models and Training Data
| Models | Base Model | Training Data | # Data Samples |
| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | -------------- |
| [Mistral-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-128K) | Mistral-7B-Instruct-v0.2 | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-128K-tokenized) | 45K |
| [Qwen2.5-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K) | Qwen2.5-7B-Instruct | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K-tokenized) | 32K |
| [Mistral-7B-LongPO-256K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-256K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-256K-tokenized) | 16K |
| [Mistral-7B-LongPO-512K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-512K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-512K-tokenized) | 2.5K |
\* indicates an experimental version (for rebuttal purposes) that may not have been fully tuned or provided with sufficient data to achieve convergence.
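The card does not include an inference snippet; below is a minimal sketch with `transformers`, assuming the checkpoint ships a standard Mistral-Instruct-style chat template with its tokenizer (prompt and generation settings are arbitrary examples):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-SG/Mistral-7B-LongPO-256K-EXP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize the following document:\n..."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```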
## Evaluation
### InfiniteBench
| Model | Train/Claimed Length | En.Sum | En.QA | En.MC | AVG. |
| ---------------- | -------------------- | ------ | ------ | ------ | ------ |
| GPT-4-128K | 128K | 14.73 | 22.44 | 67.25 | 34.81 |
| Qwen2-72B | 128K | 24.32ᵇ | 7.03ᵇ | 72.05ᵇ | 34.47ᵇ |
| LLaMA 3.1-70B | 128K | 33.55ᵇ | 36.08ᵇ | 69.00ᵇ | 46.21ᵇ |
| LLaMA 3.1-8B | 128K | 28.06ᵇ | 30.47ᵇ | 58.08ᵇ | 38.87ᵇ |
| GLM-4-9B | 128K | 14.84ᵇ | 9.51ᵇ | 67.25ᵇ | 30.53ᵇ |
| GLM-4-9B-1M | 1M | 28.3 | 9.7 | 68.6 | 35.53 |
| LWM-7B-1M | 1M | 4.33ᵇ | 0.0ᵇ | 3.06ᵇ | 2.46ᵇ |
| YaRN-Mistral-7B | 128K | 9.09 | 9.55 | 27.95 | 15.53 |
| Mistral-7B | 32K | 22.13 | 4.93 | 14.41 | 13.82 |
| - SFT | 128K | 23.44 | 13.45 | 53.21 | 30.03 |
| - DPO | 128K | 15.21 | 10.34 | 48.14 | 25.56 |
| - LongPO (iter1) | 128K | 27.05 | 23.51 | 67.25 | 39.27 |
| - LongPO (iter2) | 256K | 28.16 | 24.43 | 66.35 | 39.65 |
| - LongPO (iter3) | 512K | 29.10 | 27.85 | 66.67 | 41.21 |
| Qwen2.5-7B | 128K | 22.89 | 6.08 | 52.4 | 27.12 |
| - LongPO (iter1) | 128K | 32.06 | 17.32 | 72.05 | 40.48 |
- Our results are evaluated with greedy decoding.
- Baseline results marked with ᵇ are evaluated by us, while unmarked baseline results are sourced from their official report.
### RULER
| Model | NIAH | VT | AGG | QA | AVG (13 tasks) |
| ------------------------ | ----- | ----- | ----- | ----- | -------------- |
| Qwen2.5-7B-Instruct | 82.10 | 80.09 | 74.50 | 54.30 | 76.50 |
| Qwen2.5-7B-LongPO-128K | 95.82 | 89.71 | 78.67 | 59.40 | 87.11 |
| Mistral-7B-Instruct-v0.2 | 72.60 | 74.40 | 64.40 | 52.20 | 68.40 |
| Mistral-7B-LongPO-128K | 96.88 | 96.49 | 71.55 | 64.81 | 88.02 |
| Mistral-7B-LongPO-256K-EXP | 96.80 | 97.00 | 69.14 | 64.87 | 87.65 |
| Mistral-7B-LongPO-512K-EXP | 97.28 | 97.48 | 69.22 | 64.92 | 88.00 |
### Short Context
| Model | MMLU | ARC-C | Hellaswag | Winogrande | Avg |
|-------|-------|--------|------------|-------------|-----|
| Mistral-7B-Instruct-v0.2 | 59.15 | 59.26 | 83.2 | 78.4 | 70.00 |
| Mistral-7B-LongPO-128K | 59.99 | 59.34 | 82.99 | 78.53 | 70.21 |
| Mistral-7B-LongPO-256K-EXP | 59.47 | 60.28 | 83.14 | 78.14 | 70.26 |
| Mistral-7B-LongPO-512K-EXP | 59.51 | 60.58 | 82.87 | 77.66 | 70.16 |
| Qwen2.5-7B-Instruct | 74.28 | 67.15 | 81.41 | 74.66 | 74.38 |
| Qwen2.5-7B-LongPO-128K | 73.64 | 65.70 | 80.82 | 74.98 | 73.79 |
## Citation
If you find our project useful, we hope you will star our repo and cite our paper as follows:
```
@inproceedings{
chen2025longpo,
title={Long{PO}: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization},
author={Guanzheng Chen and Xin Li and Michael Shieh and Lidong Bing},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=qTrEq31Shm}
}
``` |
ModelCloud/sat-3l-sm-int8-onnx | ModelCloud | "2024-10-10T08:26:52Z" | 2,589 | 0 | null | [
"onnx",
"xlm-token",
"region:us"
] | null | "2024-10-10T08:10:51Z" | The onnx model file obtained by onnx quantize **dynamic** from **segment-any-text/sat-3l-sm**. |
RichardErkhov/google_-_gemma-2-2b-8bits | RichardErkhov | "2024-10-11T15:38:59Z" | 5 | 0 | null | [
"safetensors",
"gemma2",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:1903.00161",
"arxiv:2206.04615",
"arxiv:2203.09509",
"arxiv:2403.13793",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-10-11T15:37:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2-2b - bnb 8bits
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/gemma-2-2b/
Original model description:
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]
**Terms of Use**: [Terms][terms]
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b",
device="cuda", # replace with "mps" to run on a Mac device
)
text = "Once upon a time,"
outputs = pipe(text, max_new_tokens=256)
response = outputs[0]["generated_text"]
print(response)
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model "google/gemma-2-2b" --prompt "What is the capital of Mexico?"
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
|
Nerva1228/yue3 | Nerva1228 | "2025-01-17T09:04:34Z" | 14 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-17T09:04:31Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: yue
---
# Yue3
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `yue` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/yue3', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
philip-hightech/503bf951-085f-4822-b9e0-347c3e894be9 | philip-hightech | "2025-01-22T09:20:45Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T09:08:09Z" | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 503bf951-085f-4822-b9e0-347c3e894be9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c74825f7031eae65_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c74825f7031eae65_train_data.json
type:
field_instruction: sent2
field_output: ending0
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/503bf951-085f-4822-b9e0-347c3e894be9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c74825f7031eae65_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4f622dcb-313e-43e2-ae17-0bf9dc51abcd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4f622dcb-313e-43e2-ae17-0bf9dc51abcd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 503bf951-085f-4822-b9e0-347c3e894be9
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the dataset specified in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.8032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.7637 | 0.0001 | 1 | 3.0824 |
| 13.1529 | 0.0002 | 3 | 3.0817 |
| 12.3486 | 0.0004 | 6 | 3.0289 |
| 11.6877 | 0.0006 | 9 | 2.8032 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf | RichardErkhov | "2024-11-17T11:55:16Z" | 6 | 0 | null | [
"gguf",
"arxiv:2403.02712",
"endpoints_compatible",
"region:us"
] | null | "2024-11-17T10:42:11Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Breeze-7B-Base-v0_1 - GGUF
- Model creator: https://huggingface.co/MediaTek-Research/
- Original model: https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Breeze-7B-Base-v0_1.Q2_K.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q2_K.gguf) | Q2_K | 2.66GB |
| [Breeze-7B-Base-v0_1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q3_K_S.gguf) | Q3_K_S | 3.09GB |
| [Breeze-7B-Base-v0_1.Q3_K.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q3_K.gguf) | Q3_K | 3.42GB |
| [Breeze-7B-Base-v0_1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q3_K_M.gguf) | Q3_K_M | 3.42GB |
| [Breeze-7B-Base-v0_1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q3_K_L.gguf) | Q3_K_L | 3.7GB |
| [Breeze-7B-Base-v0_1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.IQ4_XS.gguf) | IQ4_XS | 3.83GB |
| [Breeze-7B-Base-v0_1.Q4_0.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q4_0.gguf) | Q4_0 | 3.99GB |
| [Breeze-7B-Base-v0_1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.IQ4_NL.gguf) | IQ4_NL | 4.03GB |
| [Breeze-7B-Base-v0_1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q4_K_S.gguf) | Q4_K_S | 4.01GB |
| [Breeze-7B-Base-v0_1.Q4_K.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q4_K.gguf) | Q4_K | 4.23GB |
| [Breeze-7B-Base-v0_1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q4_K_M.gguf) | Q4_K_M | 4.23GB |
| [Breeze-7B-Base-v0_1.Q4_1.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q4_1.gguf) | Q4_1 | 4.41GB |
| [Breeze-7B-Base-v0_1.Q5_0.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q5_0.gguf) | Q5_0 | 4.83GB |
| [Breeze-7B-Base-v0_1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q5_K_S.gguf) | Q5_K_S | 4.83GB |
| [Breeze-7B-Base-v0_1.Q5_K.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q5_K.gguf) | Q5_K | 4.95GB |
| [Breeze-7B-Base-v0_1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q5_K_M.gguf) | Q5_K_M | 4.95GB |
| [Breeze-7B-Base-v0_1.Q5_1.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q5_1.gguf) | Q5_1 | 5.25GB |
| [Breeze-7B-Base-v0_1.Q6_K.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q6_K.gguf) | Q6_K | 5.72GB |
| [Breeze-7B-Base-v0_1.Q8_0.gguf](https://huggingface.co/RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v0_1-gguf/blob/main/Breeze-7B-Base-v0_1.Q8_0.gguf) | Q8_0 | 7.41GB |
Original model description:
---
pipeline_tag: text-generation
license: apache-2.0
language:
- zh
- en
---
# Model Card for MediaTek Research Breeze-7B-Base-v0_1
MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.
[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) derives from the base model Breeze-7B-Base, making the resulting model amenable to being used as-is for commonly seen tasks.
[Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) is a slightly modified version of
Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.
*Update (Feb. 21st, 2024): Breeze-7B-Instruct-64k-v0_1 has been temporarily removed from Hugging Face due to its actual performance in long context tests not meeting expectations.*
*Update (Mar. 7th, 2024): The current release version of Breeze-7B is v1.0. See [Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0).*
Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level.
Performance-wise:
- Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English, when compared to similar sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).]
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
## Features
- Breeze-7B-Base-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Breeze-7B-Instruct-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
- Breeze-7B-Instruct-64k-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 64k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
## Model Details
- Breeze-7B-Base-v0_1
- Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-64k-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
## Base Model Performance
**TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the option with the highest log-likelihood.
| Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) |
|----------------------------------------------|--------|--------------|-------------|-------------|------------|
| | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge|
| | | 5 shot | 3 shot | 5 shot | 5 shot |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 |
| [Qwen-14B](https://huggingface.co/01-ai/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 |
| [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 |
| [Qwen-7B](https://huggingface.co/01-ai/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 |
| [**Breeze-7B-Base-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 |
\* Few-shot learning cannot effectively guide the model to generate the proper answer.
## Chat Model Performance
**TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
**MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the option with the highest log-likelihood.
We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**.
| Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) |
|---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------|
| | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge|
| | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot |
| [gpt-3.5-turbo](https://openai.com) | |7.1 | 43.56 | | | 45.14 |7.9 | 67.09 | |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | |
| [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | |
| [**Breeze-7B-Instruct-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | |
| [**Breeze-7B-Instruct-64k-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | |
| [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | |
| [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | |
| [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | |
| [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | |
\* Taiwan-LLM models respond to multi-turn questions (in English) in Traditional Chinese.
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG |
|-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 |
| Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 |
| Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 |
| **Breeze-7B-Instruct-v0_1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 |
| **Breeze-7B-Instruct-64k-v0_1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 |
| Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 |
| Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 |
| Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 |
| Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 |
| Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | ↑ AVG |
|-----------------------------------------------------|--------------|----------------|------------|------------|---------|
| Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 |
| Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 |
| Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 |
| gpt-3.5-turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 |
| **Breeze-7B-Instruct-v0_1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 |
| **Breeze-7B-Instruct-64k-v0_1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 |
| Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 |
| Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 |
| Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 |
## Inference Performance
In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again.
All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2).
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)|
|--------------------------------------------------------------------|-------------------|--------------------------|
| Yi-6B-Chat | 10.62 | 5.2k |
| **Breeze-7B-Instruct-v0_1** | 10.74 | 11.1k |
| **Breeze-7B-Instruct-64k-v0_1** | 10.74 | 88.8k |
| Qwen-7B-Chat | 10.86 | 9.8k |
| Qwen-14B-Chat | 18.89 | 9.8k |
| Mistral-7B-v0.1-Instruct | 20.48 | 5.1k |
| Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k |
| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |
| Yi-34B-Chat | 43.71 | 4.5k |
## Long-context Performance
TBD
## Use in Transformers
First install direct dependencies:
```
pip install transformers torch accelerate
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"MediaTek-Research/Breeze-7B-Base-v0_1",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2" # optional
)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v0_1")
tokenizer.tokenize("你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。")
# Tokenized results
# ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。']
```
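To go one step further, here is a minimal text-continuation sketch with the base model loaded above. The prompt and generation settings are illustrative assumptions, not the official recommended configuration:
```python
# Continue a Traditional Chinese prompt with the base model (illustrative settings)
inputs = tokenizer("台灣最高的山是", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # cap the length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```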
## Citation
```
@article{MediaTek-Research2024breeze7b,
title={Breeze-7B Technical Report},
author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu},
year={2024},
eprint={2403.02712},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1201 | Lots-of-LoRAs | "2024-07-03T20:35:22Z" | 0 | 0 | pytorch | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"license:mit",
"region:us"
] | null | "2024-06-18T20:05:43Z" | ---
language: en
license: mit
library_name: pytorch
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1201
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1201_atomic_classification_xintent
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
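In the meantime, a minimal loading sketch with `peft` is given below. The adapter and base-model ids follow this card's metadata, but the prompt format expected by the adapter is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "mistralai/Mistral-7B-Instruct-v0.2"
adapter = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1201"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter

# Example instruction for the xIntent classification task (prompt format is an assumption)
prompt = "[INST] PersonX plans a surprise party. Intent: to make someone happy. Is this intent plausible? Answer Yes or No. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```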
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1201_atomic_classification_xintent sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jiebi/Kubernetes-I2C-Dec | jiebi | "2025-03-02T21:17:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2025-03-02T21:07:44Z" | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion | TransferGraph | "2024-02-29T12:48:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:chiragasarpota/scotus-bert",
"base_model:adapter:chiragasarpota/scotus-bert",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | "2024-02-29T12:48:53Z" | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: chiragasarpota/scotus-bert
model-index:
- name: chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.42780748663101603
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2594 | None | 0 |
| 0.4278 | 1.3091 | 0 |
| 0.4278 | 1.2684 | 1 |
| 0.4278 | 1.2646 | 2 |
| 0.4278 | 1.2645 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
Kevinger/ner-english-inference | Kevinger | "2025-03-04T22:18:56Z" | 0 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-03-04T16:14:02Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---
## English NER in Flair (default model)
This is the standard 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **93.06** (corrected CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.9968)]
Span [5]: "Washington" [− Labels: LOC (0.9994)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_03()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# GloVe embeddings
WordEmbeddings('glove'),
# contextual string embeddings, forward
FlairEmbeddings('news-forward'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward'),
]
# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
HPLT/sft-fpft-cs-bloom-7b1 | HPLT | "2025-04-04T10:33:33Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T19:57:37Z" |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs that were instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-7b1](https://huggingface.co/bloom-7b1)
* Instruction tuning language: Czech
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
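For quick reference, a minimal loading and generation sketch with `transformers` is shown below. The repository id follows this model page; the Alpaca-style prompt wrapper is an assumption based on the training data, so check the GitHub repository for the exact template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HPLT/sft-fpft-cs-bloom-7b1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Alpaca-style prompt (assumed; see the GitHub repository for the exact template)
prompt = "### Instruction:\nPopiš stručně, co je strojové učení.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```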
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
Helsinki-NLP/opus-mt-lue-fr | Helsinki-NLP | "2023-08-16T12:00:38Z" | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lue",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lue-fr
* source languages: lue
* target languages: fr
* OPUS readme: [lue-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.fr | 24.1 | 0.407 |
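A minimal translation sketch with `transformers` is shown below. The checkpoint name follows this model page; the input string is only a placeholder and should be replaced with real Luvale source text:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lue-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder source sentence; substitute actual Luvale text
batch = tokenizer(["..."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```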
|
CurHarsh/sft_robotics_vlm_all_task_821_Qwen2-VL-7B-Instruct | CurHarsh | "2025-02-26T19:08:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-02-26T18:55:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
second-state/mathstral-7B-v0.1-GGUF | second-state | "2024-07-17T04:13:40Z" | 34 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"base_model:mistralai/Mathstral-7B-v0.1",
"base_model:quantized:mistralai/Mathstral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-17T01:47:35Z" | ---
license: apache-2.0
model_name: mathstral-7B-v0.1
base_model: mistralai/mathstral-7B-v0.1
inference: false
model_creator: mistralai
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# mathstral-7B-v0.1-GGUF
## Original Model
[mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)
## Run with LlamaEdge
- LlamaEdge version: [v0.12.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.3)
- Prompt template
- Prompt type: `mistral-instruct`
- Prompt string
```text
<s>[INST] {user_message_1} [/INST]{assistant_message_1}</s>[INST] {user_message_2} [/INST]{assistant_message_2}</s>
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:mathstral-7B-v0.1-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template mistral-instruct \
--ctx-size 32000 \
--model-name mathstral-7B-v0.1
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:mathstral-7B-v0.1-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template mistral-instruct \
--ctx-size 32000
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [mathstral-7B-v0.1-Q2_K.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q2_K.gguf) | Q2_K | 2 | 2.72 GB| smallest, significant quality loss - not recommended for most purposes |
| [mathstral-7B-v0.1-Q3_K_L.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_L.gguf) | Q3_K_L | 3 | 3.83 GB| small, substantial quality loss |
| [mathstral-7B-v0.1-Q3_K_M.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
| [mathstral-7B-v0.1-Q3_K_S.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| very small, high quality loss |
| [mathstral-7B-v0.1-Q4_0.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [mathstral-7B-v0.1-Q4_K_M.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
| [mathstral-7B-v0.1-Q4_K_S.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| small, greater quality loss |
| [mathstral-7B-v0.1-Q5_0.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_0.gguf) | Q5_0 | 5 | 5 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [mathstral-7B-v0.1-Q5_K_M.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_M.gguf) | Q5_K_M | 5 | 5.14 GB| large, very low quality loss - recommended |
| [mathstral-7B-v0.1-Q5_K_S.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_S.gguf) | Q5_K_S | 5 | 5 GB| large, low quality loss - recommended |
| [mathstral-7B-v0.1-Q6_K.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q6_K.gguf) | Q6_K | 6 | 5.95 GB| very large, extremely low quality loss |
| [mathstral-7B-v0.1-Q8_0.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q8_0.gguf) | Q8_0 | 8 | 7.7 GB| very large, extremely low quality loss - not recommended |
| [mathstral-7B-v0.1-f16.gguf](https://huggingface.co/second-state/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-f16.gguf) | f16 | 16 | 14.5 GB| |
*Quantized with llama.cpp b3389.* |
1nuno/PLN-META-3 | 1nuno | "2022-12-19T17:42:55Z" | 970 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-12-19T17:28:43Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 19,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
luaqi/sn9_11 | luaqi | "2024-02-29T13:30:14Z" | 171 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-29T13:28:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/llama_instbase_Adult_8ep_42 | MinaMila | "2025-04-01T22:27:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-01T22:24:36Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
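A minimal inference sketch is given below. The repository id follows this model page; using the chat template of the base Llama 3.1 Instruct model is an assumption that may not match how this fine-tune was trained:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MinaMila/llama_instbase_Adult_8ep_42"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```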
|
trl-lib/Qwen2-0.5B-DPO | trl-lib | "2024-09-27T15:54:37Z" | 13 | 4 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/Capybara-Preferences",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-26T14:56:38Z" | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/Capybara-Preferences
library_name: transformers
model_name: dpo-qwen2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo-qwen2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/Capybara-Preferences](https://huggingface.co/datasets/trl-lib/Capybara-Preferences) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/dpo-qwen2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/trl/runs/8g0pylqi)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HarshilPatel1905/flant5_peft_model_emotion_detection | HarshilPatel1905 | "2024-05-11T00:17:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-11T00:17:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Evan-Lin/Bart-Amazon-many-keywordmax1-attractive1-reward1-epoch0 | Evan-Lin | "2023-07-23T11:36:45Z" | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2023-07-23T11:34:01Z" | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin//tmp/tmpeydaa0l0/Evan-Lin/Bart-Amazon-many-keywordmax1-attractive1-reward1-epoch0")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpeydaa0l0/Evan-Lin/Bart-Amazon-many-keywordmax1-attractive1-reward1-epoch0")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpeydaa0l0/Evan-Lin/Bart-Amazon-many-keywordmax1-attractive1-reward1-epoch0")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
myst72/OpenCoder-1.5B-Instruct_PIFT-jaen_manywords_6000 | myst72 | "2025-01-07T15:51:55Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-07T15:50:06Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardsonTXCarpetCleaning/AirDuctCleaningRichardsonTX | RichardsonTXCarpetCleaning | "2022-12-11T08:38:59Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2022-12-11T08:38:34Z" | ---
license: other
---
Air Duct Cleaning Richardson TX
https://carpetcleaning-richardson.com/air-duct-cleaning.html
(972) 454-9815
Do you require a cleaning service from professionals with years of experience? If so, contact us right away. We have been improving the indoor climate of our customers' homes for a long time and can assist you as well. Our equipment reaches deep into your ducts to remove all harmful material, so we do not leave any area unclean.
Ki-Generator/Ki-Image-Generator | Ki-Generator | "2025-02-18T05:19:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-18T03:57:28Z" |
# Ki-Image-Generator: Multi-API Image Generator
**Ki-Image-Generator** is an advanced tool for generating images with the help of multiple AI APIs. The generator compares different image-generation models and lets users compare the results of the various APIs.
## Description
Ki-Image-Generator uses several AI models to generate images from text prompts. The tool offers a user-friendly GUI for selecting APIs and managing image generation. The results are displayed in a gallery, and the similarity of the generated images is analyzed with the **Structural Similarity Index (SSIM)**. It currently supports the following APIs:
- **DALL-E**
- **Stable Diffusion**
- **MidJourney**
- **DeepAI**
- **Pollinations**
- **Replicate**
- **Craiyon**
## Features
- **Multiple APIs**: compare images produced by different AI models.
- **Real-time preview**: generation results are shown immediately in a gallery.
- **SSIM analysis**: computes the similarity between the generated images (see the sketch after this list).
- **Progress**: a real-time progress bar shows the state of image generation.
- **Simple operation**: an intuitive interface for entering the text prompt and selecting the desired APIs.
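The SSIM comparison mentioned above is commonly computed with scikit-image's `structural_similarity`; the snippet below is an illustrative sketch only — the file names are placeholders and the repository's own implementation may differ.

```python
# Illustrative sketch: one way an SSIM score between two generated images could be computed.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

a = np.array(Image.open("dalle.png").convert("L").resize((512, 512)))
b = np.array(Image.open("stable_diffusion.png").convert("L").resize((512, 512)))
score = ssim(a, b, data_range=255)  # 1.0 means identical, lower means more dissimilar
print(f"SSIM: {score:.3f}")
```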
## Requirements
- **Python 3.x**
- **Pillow** for image processing
- **Requests** for the API calls
- **Tkinter** for the user interface
- **Scikit-learn** for the SSIM analysis
## Installation
1. Clone this repository:
```bash
git clone https://huggingface.co/Ki-Generator/Ki-Image-Generator
cd Ki-Image-Generator
```
 |
sandspeare/llasm-decoder | sandspeare | "2024-04-01T01:50:55Z" | 1 | 1 | transformers | [
"transformers",
"llava",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T01:22:04Z" | ---
license: mit
---
<h1 align="center">llasm: Naming Functions in Binaries by Fusing Encoder-only and Decoder-only LLMs</h1>
## About
llasm is a novel framework that fuses encoder-only and decoder-only LLMs, leveraging the capabilities of both to better comprehend assembly language and to generalize better at function naming.
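As noted below, the released decoder is distributed as a LoRA adapter on top of a Vicuna-13B base. A minimal, hypothetical loading sketch with the PEFT API might look like the following; the exact base checkpoint (here `lmsys/vicuna-13b-v1.5`) and tokenizer source are assumptions, not stated by the authors.

```python
# Hypothetical sketch: attach the llasm decoder LoRA adapter to a Vicuna-13B base.
# The specific base checkpoint used below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.5", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5")
model = PeftModel.from_pretrained(base, "sandspeare/llasm-decoder")  # apply the adapter weights
```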
This is the decoder of llasm. The uploaded model is a LoRA adapter; the base model is Vicuna-13B. |
Ashwini1412/wav2vec2-nepali-itr-10 | Ashwini1412 | "2024-03-08T07:00:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-08T03:58:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
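A minimal usage sketch for a wav2vec2 ASR checkpoint like this one might look as follows; it assumes the repository ships a feature extractor and tokenizer alongside the model weights.

```python
# Minimal sketch, assuming the checkpoint includes a processor (feature extractor + tokenizer).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Ashwini1412/wav2vec2-nepali-itr-10")
print(asr("speech_sample.wav")["text"])  # path to a 16 kHz mono audio file
```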
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magicslabnu/NT1-500M-human_ref-finetuned-H3K9ac | magicslabnu | "2025-04-09T06:53:34Z" | 0 | 0 | null | [
"safetensors",
"esm",
"region:us"
] | null | "2025-04-09T06:51:44Z" | |
daniel40/ffa6cb1b-0e78-4de2-8ace-b8577da6006a | daniel40 | "2025-01-26T07:53:36Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
] | null | "2025-01-26T07:38:21Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ffa6cb1b-0e78-4de2-8ace-b8577da6006a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3bfcb782a3f0e2ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3bfcb782a3f0e2ac_train_data.json
type:
field_instruction: problem
field_output: target_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/ffa6cb1b-0e78-4de2-8ace-b8577da6006a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3bfcb782a3f0e2ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c54ab670-3720-4ec3-a8b4-4391881ada3a
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c54ab670-3720-4ec3-a8b4-4391881ada3a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ffa6cb1b-0e78-4de2-8ace-b8577da6006a
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9211 | 0.0000 | 1 | 11.9166 |
| 11.9028 | 0.0001 | 3 | 11.9164 |
| 11.9413 | 0.0001 | 6 | 11.9152 |
| 11.9142 | 0.0002 | 9 | 11.9132 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Astral-Cydonia-22B_abliterated-GGUF | mradermacher | "2024-12-16T00:29:45Z" | 101 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/SzilviaB_Astral-Cydonia-22B_abliterated_broken_but_in_a_nice_way",
"base_model:quantized:SzilviaB/SzilviaB_Astral-Cydonia-22B_abliterated_broken_but_in_a_nice_way",
"endpoints_compatible",
"region:us"
] | null | "2024-09-30T05:42:58Z" | ---
base_model: SzilviaB/SzilviaB_Astral-Cydonia-22B_abliterated_broken_but_in_a_nice_way
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SzilviaB/SzilviaB_Astral-Cydonia-22B_abliterated_broken_but_in_a_nice_way
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
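One quick way to try a quant locally is llama-cpp-python; the sketch below is illustrative only — the chosen file must first be downloaded from this repository, and no chat template is applied.

```python
# Illustrative sketch using llama-cpp-python with one of the quants listed below.
from llama_cpp import Llama

llm = Llama(model_path="Astral-Cydonia-22B_abliterated.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short poem about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```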
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.IQ3_XS.gguf) | IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.IQ3_M.gguf) | IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Astral-Cydonia-22B_abliterated-GGUF/resolve/main/Astral-Cydonia-22B_abliterated.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
genki10/BERT_AugV8_k7_task1_organization_sp040_lw040_fold3 | genki10 | "2025-03-26T13:57:38Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-26T13:46:16Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k7_task1_organization_sp040_lw040_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k7_task1_organization_sp040_lw040_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5145
- Qwk: 0.0827
- Mse: 1.5136
- Rmse: 1.2303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 6 | 7.4203 | 0.0 | 7.4186 | 2.7237 |
| No log | 2.0 | 12 | 3.3597 | 0.0 | 3.3587 | 1.8327 |
| No log | 3.0 | 18 | 1.4942 | 0.0365 | 1.4936 | 1.2221 |
| No log | 4.0 | 24 | 2.0543 | 0.0042 | 2.0532 | 1.4329 |
| No log | 5.0 | 30 | 1.4863 | 0.0488 | 1.4855 | 1.2188 |
| No log | 6.0 | 36 | 0.9099 | 0.2193 | 0.9095 | 0.9537 |
| No log | 7.0 | 42 | 1.8754 | 0.0278 | 1.8748 | 1.3692 |
| No log | 8.0 | 48 | 0.8054 | 0.2715 | 0.8058 | 0.8976 |
| No log | 9.0 | 54 | 0.7618 | 0.1756 | 0.7621 | 0.8730 |
| No log | 10.0 | 60 | 1.2084 | 0.1251 | 1.2080 | 1.0991 |
| No log | 11.0 | 66 | 0.9869 | 0.1674 | 0.9868 | 0.9934 |
| No log | 12.0 | 72 | 0.8329 | 0.2380 | 0.8329 | 0.9127 |
| No log | 13.0 | 78 | 0.8451 | 0.2603 | 0.8452 | 0.9194 |
| No log | 14.0 | 84 | 0.9255 | 0.2499 | 0.9256 | 0.9621 |
| No log | 15.0 | 90 | 1.2660 | 0.1168 | 1.2655 | 1.1249 |
| No log | 16.0 | 96 | 1.8436 | 0.0771 | 1.8425 | 1.3574 |
| No log | 17.0 | 102 | 1.9113 | 0.0220 | 1.9099 | 1.3820 |
| No log | 18.0 | 108 | 2.4316 | -0.0199 | 2.4295 | 1.5587 |
| No log | 19.0 | 114 | 1.0893 | 0.1848 | 1.0888 | 1.0434 |
| No log | 20.0 | 120 | 1.3985 | 0.1031 | 1.3976 | 1.1822 |
| No log | 21.0 | 126 | 1.6275 | 0.0908 | 1.6265 | 1.2753 |
| No log | 22.0 | 132 | 1.4018 | 0.0839 | 1.4009 | 1.1836 |
| No log | 23.0 | 138 | 1.5145 | 0.0827 | 1.5136 | 1.2303 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-xh-winogrande-med | AdamKasumovic | "2024-07-01T22:41:44Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-01T21:09:27Z" | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClarenceDan/e94cd2c3-f73f-486d-a3d2-809f2f128a46 | ClarenceDan | "2025-01-15T02:43:04Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | "2025-01-15T02:42:32Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e94cd2c3-f73f-486d-a3d2-809f2f128a46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a26a8dba741b4ca1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a26a8dba741b4ca1_train_data.json
type:
field_instruction: question_1
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/e94cd2c3-f73f-486d-a3d2-809f2f128a46
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a26a8dba741b4ca1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2cc9a107-3883-49d9-bae4-0b2c1ea759f6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2cc9a107-3883-49d9-bae4-0b2c1ea759f6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e94cd2c3-f73f-486d-a3d2-809f2f128a46
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.8338 | 0.0105 | 1 | 5.7859 |
| 5.4559 | 0.0316 | 3 | 5.6161 |
| 5.3188 | 0.0632 | 6 | 5.2195 |
| 4.3143 | 0.0947 | 9 | 4.3498 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lightblue/qwen2.5-7B-instruct-simpo | lightblue | "2024-12-05T07:10:12Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-05T06:23:55Z" | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: simpo_trained_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# simpo_trained_1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the lightblue_orpo_data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Jonjew/PatrickJonesStyle | Jonjew | "2025-04-05T10:46:41Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-04-05T10:46:37Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Art by Patrick Jones
output:
url: images/t2i_00055_.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Patrick Jones Style
<Gallery />
## Model description
FROM https://civitai.green/images/67097781
Please support the creator by donating buzz and liking at the page above!
Trigger: Art by Patrick Jones
Strength 0.7
About this version
Model trained on civitai with 50 images.
Recommended resources : Fluxmania
Recommended settings : dpmpp_2m or heunpp2 / sgm_uniform / 25 steps / guidance : 2.5 to 3.5
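Those sampler names refer to ComfyUI settings; with diffusers there is no exact equivalent, so the hypothetical sketch below simply uses the recommended step count and guidance with the pipeline defaults (how the 0.7 LoRA strength is applied is an assumption).

```python
# Hypothetical diffusers sketch; sampler choice and LoRA-strength handling are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Jonjew/PatrickJonesStyle")
pipe.fuse_lora(lora_scale=0.7)  # approximate the recommended 0.7 strength
image = pipe("Art by Patrick Jones, a tiger stalking through a lush jungle",
             num_inference_steps=25, guidance_scale=3.0).images[0]
image.save("patrick_jones_style.png")
```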
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PatrickJonesStyle/tree/main) them in the Files & versions tab.
|
tanganke/clip-vit-base-patch32_fashion_mnist | tanganke | "2024-12-12T14:02:46Z" | 207 | 0 | null | [
"tensorboard",
"safetensors",
"clip_vision_model",
"dataset:zalando-datasets/fashion_mnist",
"base_model:openai/clip-vit-base-patch32",
"base_model:finetune:openai/clip-vit-base-patch32",
"region:us"
] | null | "2024-12-12T14:02:07Z" | ---
base_model:
- openai/clip-vit-base-patch32
datasets:
- zalando-datasets/fashion_mnist
metrics:
- accuracy
---
# Model Card
## Training Details
Adam optimizer with a constant learning rate of 1e-5 for 4000 training steps (batch_size=128).
Only the vision encoder is fine-tuned.
## Evaluation Results
Test set accuracy:
- pre-trained: 0.6301000118255615
- fine-tuned: 0.9474999904632568
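Because only the vision tower is fine-tuned, zero-shot evaluation requires grafting it back onto the full CLIP checkpoint. The sketch below is a hypothetical illustration of that; whether the stored weights load this way is an assumption.

```python
# Hypothetical sketch: swap the fine-tuned vision tower into the original CLIP model.
from transformers import CLIPModel, CLIPProcessor, CLIPVisionModel
from datasets import load_dataset

base = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
tuned = CLIPVisionModel.from_pretrained("tanganke/clip-vit-base-patch32_fashion_mnist")
base.vision_model.load_state_dict(tuned.vision_model.state_dict())

labels = ["t-shirt", "trouser", "pullover", "dress", "coat",
          "sandal", "shirt", "sneaker", "bag", "ankle boot"]
image = load_dataset("zalando-datasets/fashion_mnist", split="test")[0]["image"].convert("RGB")
inputs = processor(text=[f"a photo of a {l}" for l in labels],
                   images=image, return_tensors="pt", padding=True)
probs = base(**inputs).logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```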
|
mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF | mradermacher | "2025-01-10T05:46:53Z" | 216 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:gywy/mamba-1.3b-chinese-chat-v0.1",
"base_model:quantized:gywy/mamba-1.3b-chinese-chat-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-10T05:40:36Z" | ---
base_model: gywy/mamba-1.3b-chinese-chat-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/gywy/mamba-1.3b-chinese-chat-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q6_K.gguf) | Q6_K | 1.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mamba-1.3b-chinese-chat-v0.1-GGUF/resolve/main/mamba-1.3b-chinese-chat-v0.1.f16.gguf) | f16 | 2.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
llmvetter/FrozenLake-v1-4x4-noSlippery | llmvetter | "2024-05-27T11:58:51Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-27T11:58:49Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym
# `load_from_hub` is a helper defined in the Hugging Face Deep RL course materials (not shown here);
# it downloads and unpickles the Q-table dictionary from the Hub.
model = load_from_hub(repo_id="llmvetter/FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nadejdatarabukina/d2828006-f9c3-4ccd-8c25-a449cd39b769 | nadejdatarabukina | "2025-01-24T16:44:27Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-24T14:52:36Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2828006-f9c3-4ccd-8c25-a449cd39b769
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7fbdd974ad360a0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7fbdd974ad360a0f_train_data.json
type:
field_input: entities
field_instruction: context
field_output: level_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/d2828006-f9c3-4ccd-8c25-a449cd39b769
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/7fbdd974ad360a0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6f7702bf-d1c0-46ba-91bf-dd86d3008991
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6f7702bf-d1c0-46ba-91bf-dd86d3008991
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d2828006-f9c3-4ccd-8c25-a449cd39b769
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0005 | 5 | nan |
| 0.0 | 0.0009 | 10 | nan |
| 0.0 | 0.0014 | 15 | nan |
| 0.0 | 0.0019 | 20 | nan |
| 0.0 | 0.0024 | 25 | nan |
| 0.0 | 0.0028 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik2987/b5da3b8a-28e0-444d-8d62-bd14eec9aed0 | dimasik2987 | "2025-01-22T21:29:52Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T21:24:25Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5da3b8a-28e0-444d-8d62-bd14eec9aed0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1356d71c85d6b721_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1356d71c85d6b721_train_data.json
type:
field_input: criteria
field_instruction: data
field_output: Study_Name
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: dimasik2987/b5da3b8a-28e0-444d-8d62-bd14eec9aed0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/1356d71c85d6b721_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0524e6c6-3d70-4085-92df-a0b6c347526d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0524e6c6-3d70-4085-92df-a0b6c347526d
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# b5da3b8a-28e0-444d-8d62-bd14eec9aed0
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0038 | 5 | nan |
| 0.0 | 0.0075 | 10 | nan |
| 0.0 | 0.0113 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sanchit-gandhi/distil-zephyr-1.5b-dpo-ultrafeedback | sanchit-gandhi | "2024-04-26T16:12:38Z" | 140 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat",
"base_model:finetune:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-26T15:39:14Z" | ---
base_model: sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
This model is a fine-tuned version of [sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat](https://huggingface.co/sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6412
- Rewards/chosen: -0.1044
- Rewards/rejected: -0.2494
- Rewards/accuracies: 0.6445
- Rewards/margins: 0.1450
- Logps/rejected: -429.4582
- Logps/chosen: -433.6304
- Logits/rejected: -3.2047
- Logits/chosen: -3.2544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6795 | 0.2092 | 100 | 0.6759 | 0.0017 | -0.0328 | 0.6289 | 0.0345 | -407.8037 | -423.0197 | -3.2565 | -3.3136 |
| 0.6584 | 0.4184 | 200 | 0.6534 | -0.0666 | -0.1617 | 0.6445 | 0.0951 | -420.6952 | -429.8561 | -3.2240 | -3.2768 |
| 0.6494 | 0.6276 | 300 | 0.6438 | -0.1077 | -0.2410 | 0.6211 | 0.1333 | -428.6237 | -433.9640 | -3.2050 | -3.2553 |
| 0.6428 | 0.8368 | 400 | 0.6415 | -0.1001 | -0.2437 | 0.6211 | 0.1436 | -428.8884 | -433.2000 | -3.2046 | -3.2543 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
mix060514/sd-class-butterflies-32 | mix060514 | "2024-03-12T09:59:27Z" | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-03-12T09:55:14Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('mix060514/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
beanslmao/hensinki-en-es-finetuned-spanish-to-english-tateoba | beanslmao | "2024-03-04T01:05:51Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-04T01:05:34Z" | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: hensinki-en-es-finetuned-spanish-to-english-tateoba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hensinki-en-es-finetuned-spanish-to-english-tateoba
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7389
- Bleu: 47.2153
- Gen Len: 13.8167
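No usage snippet is included in this card; a minimal sketch with the transformers translation pipeline might look like the following. The Spanish→English direction is inferred from the model name and is an assumption.

```python
# Minimal sketch; the Spanish→English direction is inferred from the model name.
from transformers import pipeline

translator = pipeline("translation", model="beanslmao/hensinki-en-es-finetuned-spanish-to-english-tateoba")
print(translator("¿Dónde está la biblioteca?")[0]["translation_text"])
```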
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.9651 | 1.0 | 7069 | 0.8372 | 43.0236 | 13.7961 |
| 0.8011 | 2.0 | 14138 | 0.7389 | 47.2153 | 13.8167 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Alexanrd/GigaAMv2_CTC_RU_ASR_for_sherpa_onnx | Alexanrd | "2025-02-25T08:29:46Z" | 0 | 0 | null | [
"onnx",
"ASR",
"RU_ASR",
"GigaAMv2",
"CTC",
"license:mit",
"region:us"
] | null | "2025-02-15T08:36:58Z" | ---
license: mit
tags:
- ASR
- RU_ASR
- GigaAMv2
- CTC
---
- Original git - [GigaAM](https://github.com/salute-developers/GigaAM/)
- I've obtained the ONNX export by following the [instructions](https://github.com/salute-developers/GigaAM/blob/main/inference_example.ipynb).
- Then I updated its metadata so it runs in sherpa-onnx as a non-streaming CTC model from NeMo.
- Usage code can be found [in the example](https://huggingface.co/Alexanrd/GigaAMv2_CTC_RU_ASR_for_sherpa_onnx/blob/main/example.py) or [in the FastAPI project](https://github.com/Sanich137/Vosk5_FastAPI_streaming).
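For reference, a heavily simplified sherpa-onnx sketch is given below; the file names and parameter choices are assumptions, so prefer the linked example.py for the exact invocation.

```python
# Hypothetical sketch of the sherpa-onnx offline NeMo-CTC API; see example.py in this repo for the real code.
import sherpa_onnx
import soundfile as sf

recognizer = sherpa_onnx.OfflineRecognizer.from_nemo_ctc(
    model="model.onnx",   # exported GigaAMv2 CTC model (file name is an assumption)
    tokens="tokens.txt",  # token table shipped with the export (assumption)
    decoding_method="greedy_search",
)
samples, sample_rate = sf.read("speech_ru.wav", dtype="float32")
stream = recognizer.create_stream()
stream.accept_waveform(sample_rate, samples)
recognizer.decode_stream(stream)
print(stream.result.text)
```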
> - [Licence file](https://github.com/salute-developers/GigaAM/blob/main/LICENSE)
> - **WARNING! Check license rules before use!** |
mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF | mradermacher | "2024-09-16T06:31:07Z" | 88 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-09-16T03:14:47Z" | ---
base_model: DreadPoor/Hatheno_Max_1.1-ALT-8B-model_stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DreadPoor/Hatheno_Max_1.1-ALT-8B-model_stock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mserloth/v23 | mserloth | "2024-05-05T13:44:34Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:v23/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-05T13:42:35Z" |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- v23/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.5765425562858582
f1_macro: 0.7349736157447978
f1_micro: 0.7492063492063492
f1_weighted: 0.7506575340838828
precision_macro: 0.7150911360799
precision_micro: 0.7492063492063492
precision_weighted: 0.7534924677486475
recall_macro: 0.7595969666757304
recall_micro: 0.7492063492063492
recall_weighted: 0.7492063492063492
accuracy: 0.7492063492063492
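The card does not include a usage snippet; below is a minimal sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline (the input reuses the widget example, and label names depend on the undocumented training data):

```python
from transformers import pipeline

# Load the AutoTrain-finetuned BERT classifier from the Hub and score one sentence.
classifier = pipeline("text-classification", model="mserloth/v23")
print(classifier("I love AutoTrain"))
```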
|
Zienab/wav | Zienab | "2024-01-09T07:06:37Z" | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-08T11:49:55Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
model-index:
- name: wav
results: []
language:
- ar
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
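A minimal transcription sketch (an assumption, not part of the original card: it presumes the checkpoint loads with the standard `transformers` ASR pipeline and that the input audio is 16 kHz mono; the file path is a placeholder):

```python
from transformers import pipeline

# CTC-style wav2vec2 transcription; "speech.wav" is an illustrative local file.
asr = pipeline("automatic-speech-recognition", model="Zienab/wav")
print(asr("speech.wav")["text"])
```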
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Rakhman16/program-synthesis-java-codet5 | Rakhman16 | "2024-12-09T07:29:08Z" | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"code",
"en",
"dataset:codeparrot/xlcost-text-to-code",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-09T04:21:02Z" | ---
license: apache-2.0
datasets:
- codeparrot/xlcost-text-to-code
language:
- en
base_model:
- Salesforce/codet5-base
pipeline_tag: text2text-generation
library_name: transformers
tags:
- code
--- |
RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf | RichardErkhov | "2025-03-19T06:38:41Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T06:35:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-kunqu-english - GGUF
- Model creator: https://huggingface.co/qhchina/
- Original model: https://huggingface.co/qhchina/gpt2-kunqu-english/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-kunqu-english.Q2_K.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-kunqu-english.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-kunqu-english.IQ3_S.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-kunqu-english.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-kunqu-english.IQ3_M.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-kunqu-english.Q3_K.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-kunqu-english.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-kunqu-english.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-kunqu-english.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-kunqu-english.Q4_0.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-kunqu-english.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-kunqu-english.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-kunqu-english.Q4_K.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-kunqu-english.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-kunqu-english.Q4_1.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-kunqu-english.Q5_0.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-kunqu-english.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-kunqu-english.Q5_K.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-kunqu-english.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-kunqu-english.Q5_1.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-kunqu-english.Q6_K.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-kunqu-english.Q8_0.gguf](https://huggingface.co/RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf/blob/main/gpt2-kunqu-english.Q8_0.gguf) | Q8_0 | 0.17GB |
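One way to use these files (a sketch, assuming `huggingface_hub` and a local llama.cpp build are available; the chosen quant and prompt are only examples):

```bash
# Fetch a single quant from the table above and run it with llama.cpp.
huggingface-cli download RichardErkhov/qhchina_-_gpt2-kunqu-english-gguf \
  gpt2-kunqu-english.Q4_K_M.gguf --local-dir .
./llama-cli -m gpt2-kunqu-english.Q4_K_M.gguf -p "Once upon a time" -n 64
```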
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TFOCUS/kevin-flux_7 | TFOCUS | "2025-02-19T16:28:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-19T16:24:08Z" | # flux-schnell-edge-inference
nestas hagunnan hinase
|
shubhamagarwal92/ppo-Huggy | shubhamagarwal92 | "2023-07-18T06:07:07Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-07-18T06:07:04Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: shubhamagarwal92/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa | vocabtrimmer | "2023-04-05T18:09:09Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-05T18:03:27Z" |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu."
example_title: "Question Answering Example 1"
- text: "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire."
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 31.61
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 41.11
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 32.95
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 93.48
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 79.52
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 66.37
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 45.11
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa`
This model is fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-fr](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-fr](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa")
# model prediction
answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa")
output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 45.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| AnswerF1Score | 66.37 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| BERTScore | 93.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 42.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 37.89 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 34.5 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 31.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 32.95 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 79.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 41.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mbart-large-cc25-trimmed-fr
- max_length: 512
- max_length_output: 32
- epoch: 11
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-fr-frquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
reasonwang/ToolLlama-WoTool-Llama-3-8B | reasonwang | "2024-09-30T08:33:05Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-29T11:41:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
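The card leaves this section as a placeholder. Purely as an illustration, and based only on the repository tags (a Llama-architecture causal LM served through `transformers`), a hedged loading sketch might look like the following; the prompt and generation settings are arbitrary and not documented by the authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reasonwang/ToolLlama-WoTool-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the intended prompting format is not specified in this card.
inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```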
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bibom2001/whisper0 | bibom2001 | "2024-10-25T10:56:35Z" | 85 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-24T13:40:21Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper0
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5080
- Wer Ortho: 99.8700
- Wer: 10.1070
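A hedged inference sketch (not from the card): it assumes the fine-tuned checkpoint is used like any Whisper model via the `transformers` ASR pipeline, with a placeholder audio path:

```python
from transformers import pipeline

# Whisper-base fine-tune; "sample.wav" is an illustrative local file.
asr = pipeline("automatic-speech-recognition", model="bibom2001/whisper0")
print(asr("sample.wav")["text"])
```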
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 1
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 3.778 | 0.0031 | 5 | 4.5080 | 99.8700 | 10.1070 |
### Framework versions
- Transformers 4.45.1
- Pytorch 1.12.1
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ishwarbb23/Q4-PHQ | ishwarbb23 | "2023-11-03T19:43:05Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-03T17:37:58Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Q4-PHQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Q4-PHQ
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7573
- Accuracy: 0.5775
- Mcc: 0.1345
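A short usage sketch (assumption: the checkpoint loads with the standard `transformers` text-classification pipeline; the example sentence is illustrative and the PHQ label meanings are not documented in this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ishwarbb23/Q4-PHQ")
print(classifier("I have been feeling down and tired for the past two weeks."))
```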
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 51 | 0.6875 | 0.57 | 0.2222 |
| No log | 2.0 | 102 | 0.6709 | 0.565 | 0.1213 |
| No log | 3.0 | 153 | 0.6932 | 0.58 | 0.1625 |
| No log | 4.0 | 204 | 0.7271 | 0.5875 | 0.1630 |
| No log | 5.0 | 255 | 0.7573 | 0.5775 | 0.1345 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
egesko/CodeSprint_DCGAN | egesko | "2022-05-28T06:23:00Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-05-28T05:19:07Z" | ---
license: mit
---
# DCGAN to generate face images
This is a Keras implementation of DCGAN trained on face images.
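A loading sketch under stated assumptions (that the pushed artifact is the generator network loadable via `from_pretrained_keras`, and that it takes a 100-dimensional latent vector; both are guesses, not documented here):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

LATENT_DIM = 100  # assumed latent size; adjust to the generator's actual input shape

generator = from_pretrained_keras("egesko/CodeSprint_DCGAN")
noise = np.random.normal(size=(4, LATENT_DIM)).astype("float32")
fake_faces = generator.predict(noise)  # e.g. (4, H, W, C), values typically in [-1, 1]
print(fake_faces.shape)
```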
|
rodrigoclira/Reinforce-Pixelcopter1 | rodrigoclira | "2023-06-25T21:08:22Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-24T23:24:00Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.30 +/- 34.08
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ciciburner100/DashaFL | ciciburner100 | "2025-04-01T17:52:43Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-04-01T17:45:08Z" | |
Jonjew/Mosaic | Jonjew | "2025-03-04T01:52:49Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-04T01:52:38Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: ArsMJStyle, Mosaic, Portrait FEmale
parameters:
negative_prompt: 'Steps: 20 Seed: 587388910175190'
output:
url: images/ComfyUI_00016_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ArsMJStyle, Mosaic
license: unknown
---
# Mosaic (SD 1.5, SDXL, Pony, Flux)
<Gallery />
## Model description
FROM
Trigger ArsMJStyle, Mosaic
Strength 1
The Lora is trained on MidJourney Images that use my "Personalised Style" + Mosaic.
Works without the trigger words ArsMJStyle, Mosaic
Has an effect in the 0.4 - 0.8 range.
Depending on the LoRA stack, the optimal Mosaic look is in the 0.8 - 1.5 range.
Quality tags reduce the Mosaic effect.
Versions:
Flux - based on the latest updated dataset with new caption.
v1 - the lightest Mosaic Effect, but without strong biases towards anything. Trained on Clip skip 1. SD 1.5 and SDXL are trained with this "final" dataset as well.
Beta 2 - tries to generate Woman Portraits. Same dataset size as v1, but only 50% overlap. The captioning here was free text + WD tags.
Beta - 50% of the dataset of Beta 2. Not that flexible when generating new images.
The 3 Pony versions are practically different Loras, so try the other 2, if you liked one of them :)
About this version
In the 0.2 - 0.6 Range the LoRA adds distinctive vibrant colors, but the Mosaic Effect might be missing or minimal.
Optimal Range is 0.8 - 1.5
For recommended usage on Quantized version check this really simple ComfyUI Workflow:
https://civitai.com/models/658639/super-simple-gguf-quantized-flux-lora-workflow
All the images with missing Metadata from the Showcase Gallery were created with it and Flux Q4 Dev model.
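A hedged diffusers sketch of how such a Flux LoRA is typically applied (the base checkpoint, the 0.9 strength, and the prompt are assumptions drawn from the ranges above, not instructions from the author):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA and bake it in at an assumed strength inside the suggested 0.8 - 1.5 range.
pipe.load_lora_weights("Jonjew/Mosaic")
pipe.fuse_lora(lora_scale=0.9)

image = pipe("ArsMJStyle, Mosaic, portrait of a woman", num_inference_steps=20).images[0]
image.save("mosaic_portrait.png")
```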
## Trigger words
You should use `ArsMJStyle` to trigger the image generation.
You should use `Mosaic` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/Mosaic/tree/main) them in the Files & versions tab.
|
brushpenbob/Flux-Kim-Jung-Gi | brushpenbob | "2024-08-25T23:34:11Z" | 34 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-08-25T23:33:58Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0a\0 \0f\0o\0x\0 \0d\0r\0e\0s\0s\0e\0d\0 \0a\0 \0a\0 \0w\0i\0z\0a\0r\0d\0,\0 \0g\0r\0e\0g\0 \0r\0u\0t\0k\0o\0w\0s\0k\0i\0"
output:
url: images/Flux_Kim_Jung_Gi_e000010_00_20240822071927.jpeg
- text: "UNICODE\0\01\0b\0o\0y\0,\0 \0m\0a\0l\0e\0 \0f\0o\0c\0u\0s\0,\0 \0s\0o\0l\0o\0,\0 \0p\0o\0i\0n\0t\0y\0 \0e\0a\0r\0s\0,\0 \0h\0o\0o\0d\0,\0 \0g\0r\0i\0n\0,\0 \0s\0m\0i\0l\0e\0,\0 \0c\0l\0o\0a\0k\0,\0 \0e\0v\0i\0l\0 \0g\0r\0i\0n\0,\0 \0t\0e\0e\0t\0h\0,\0 \0e\0v\0i\0l\0 \0s\0m\0i\0l\0e\0,\0 \0u\0p\0p\0e\0r\0 \0b\0o\0d\0y\0,\0 \0h\0o\0o\0d\0e\0d\0 \0c\0l\0o\0a\0k\0,\0 \0n\0i\0g\0h\0t\0,\0 \0h\0o\0o\0d\0 \0u\0p\0,\0 \0k\0i\0m\0j\0u\0n\0g\0g\0i\0,\0 \0S\0p\0i\0d\0e\0r\0-\0M\0a\0n\0 \0v\0i\0l\0l\0a\0i\0n\0,\0 \0 \0g\0r\0e\0e\0n\0 \0g\0o\0b\0l\0i\0n\0"
output:
url: images/1XQBTMJ9W0NRAKH9PZX2HBTMM0.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kimjunggi
---
# Kim Jung Gi
<Gallery />
## Model description
Per request... This is an upscaled Flux version of the Kim Jung Gi model.
https://youtu.be/QynE0UGQhRs
## Trigger words
You should use `kimjunggi` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/brushpenbob/672588/tree/main) them in the Files & versions tab.
|
ThisIsATest/dclm-id-1.4b-raw | ThisIsATest | "2025-02-18T18:55:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-18T18:52:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Helsinki-NLP/opus-mt-vi-eo | Helsinki-NLP | "2023-08-16T12:08:33Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- vi
- eo
tags:
- translation
license: apache-2.0
---
### vie-epo
* source group: Vietnamese
* target group: Esperanto
* OPUS readme: [vie-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.epo | 12.2 | 0.332 |
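A minimal translation sketch (assuming the usual `transformers` Marian pipeline usage; the sample sentence is illustrative):

```python
from transformers import pipeline

# Vietnamese -> Esperanto translation with the Marian checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-vi-eo")
print(translator("Tôi yêu ngôn ngữ học.")[0]["translation_text"])
```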
### System Info:
- hf_name: vie-epo
- source_languages: vie
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'eo']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt
- src_alpha3: vie
- tgt_alpha3: epo
- short_pair: vi-eo
- chrF2_score: 0.332
- bleu: 12.2
- brevity_penalty: 0.99
- ref_len: 13637.0
- src_name: Vietnamese
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: vi
- tgt_alpha2: eo
- prefer_old: False
- long_pair: vie-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
RLHFlow/Llama3.1-8B-ORM-Mistral-Data | RLHFlow | "2024-11-09T18:36:42Z" | 126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T06:26:09Z" | ---
library_name: transformers
tags: []
---
This is an outcome-supervised reward model (ORM) trained on Mistral-generated data from the project [RLHFlow/RLHF-Reward-Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling).
The model is trained from [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on [RLHFlow/Mistral-ORM-Data](https://huggingface.co/datasets/RLHFlow/Mistral-ORM-Data) for 1 epoch. We use a global batch size of 32 and a learning rate of 2e-6, packing the samples and splitting them into chunks of 8192 tokens. See more training details at https://github.com/RLHFlow/Online-RLHF/blob/main/math/llama-3.1-prm.yaml.
## BoN evaluation result for Mistral generator:
| Model | Method | GSM8K | MATH |
| ------------- | ------------- | ------------- | -------- |
| Mistral-7B | Pass@1 | 77.9 | 28.4 |
| Mistral-7B | Majority Voting@1024 | 84.2 | 36.8 |
| Mistral-7B | Mistral-ORM@1024 | 90.1 | 43.6 |
| Mistral-7B | Mistral-PRM@1024 | 92.4 | 46.3 |
## Scaling the inference sampling to N=1024 for Deepseek generator:
| Model | Method | GSM8K | MATH |
| ------------- | ------------- | ------------- | -------- |
| Deepseek-7B | Pass@1 | 83.9 | 38.4 |
| Deepseek-7B | Majority Voting@1024 | 89.7 | 57.4 |
| Deepseek-7B | Deepseek-ORM@1024 | 93.4 | 52.4 |
| Deepseek-7B | Deepseek-PRM@1024 | 93.0 | 58.1 |
| Deepseek-7B | Mistral-ORM@1024 (OOD) | 90.3 | 54.9 |
| Deepseek-7B | Mistral-PRM@1024 (OOD) | 91.9 | 56.9 |
## Visualization

## Usage
See https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/main/math for detailed examples.
## Citation
The automatic annotation was proposed in the Math-shepherd paper:
```
@inproceedings{wang2024math,
title={Math-shepherd: Verify and reinforce llms step-by-step without human annotations},
author={Wang, Peiyi and Li, Lei and Shao, Zhihong and Xu, Runxin and Dai, Damai and Li, Yifei and Chen, Deli and Wu, Yu and Sui, Zhifang},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={9426--9439},
year={2024}
}
```
If you find the training recipe useful, please consider cite it as follows.
```
@misc{xiong2024rlhflowmath,
author={Wei Xiong and Hanning Zhang and Nan Jiang and Tong Zhang},
title = {An Implementation of Generative PRM},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/RLHFlow/RLHF-Reward-Modeling}}
}
```
|
netsol/resume-llama-3.1-8b-4bit | netsol | "2024-10-27T18:24:01Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-10-27T18:01:24Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** netsol
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NAMJOON/smu_counsel_vllm16_llama3.2_v1 | NAMJOON | "2024-11-04T08:25:02Z" | 121 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-04T08:23:19Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** NAMJOON
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ucalyptus/prem-1B-chat-MLX | ucalyptus | "2024-05-11T23:35:01Z" | 82 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"dataset:HuggingFaceH4/capybara",
"dataset:meta-math/MetaMathQA",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"dataset:Intel/orca_dpo_pairs",
"dataset:alexredna/oasst2_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-11T23:34:31Z" | ---
license: apache-2.0
library_name: transformers
tags:
- mlx
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---
# ucalyptus/prem-1B-chat-MLX
This model was converted to MLX format from [`premai-io/prem-1B-chat`]().
Refer to the [original model card](https://huggingface.co/premai-io/prem-1B-chat) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ucalyptus/prem-1B-chat-MLX")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
arinzeo/opus-mt-id-en-finetuned-indo-to-eng | arinzeo | "2024-11-21T22:09:00Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-19T22:04:35Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: opus-mt-id-en-finetuned-indo-to-eng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-id-en-finetuned-indo-to-eng
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
lmms-lab/LLaVA-NeXT-Video-34B | lmms-lab | "2024-04-24T05:31:23Z" | 33 | 16 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-16T14:13:36Z" | ---
inference: false
license: apache-2.0
---
<br>
# LLaVA-Next-Video Model Card
## Model details
**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
<br>
Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
**Model date:**
<br>
LLaVA-Next-Video-34B was trained in April 2024.
**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT
## License
[NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license.
## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues
## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
### Video
- 100K VideoChatGPT-Instruct.
## Evaluation dataset
A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark. |
Brikowski/xlm-roberta-base-finetuned-wikiann | Brikowski | "2023-03-25T14:35:32Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-03-25T13:40:26Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-wikiann
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-wikiann
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1638
- F1: 0.8878
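A short inference sketch (an assumption, not part of the generated card: the checkpoint is used through the standard token-classification pipeline; the sample sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Brikowski/xlm-roberta-base-finetuned-wikiann",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```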
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2426 | 1.0 | 5000 | 0.1882 | 0.8495 |
| 0.1475 | 2.0 | 10000 | 0.1618 | 0.8807 |
| 0.1003 | 3.0 | 15000 | 0.1638 | 0.8878 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Ahmedhisham/Arabic_dialect_identifier | Ahmedhisham | "2023-09-03T11:12:43Z" | 0 | 0 | keras | [
"keras",
"tf-keras",
"text-classification",
"license:mit",
"region:us"
] | text-classification | "2023-09-03T10:37:12Z" | ---
license: mit
metrics:
- precision
- recall
library_name: keras
pipeline_tag: text-classification
--- |