modelId (string, length 5 to 138) | author (string, length 2 to 42) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-15 06:29:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 426 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-15 06:29:46) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
liswei/EmojiLMSeq2SeqLoRA | liswei | "2023-12-08T16:05:12Z" | 2,102 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"text-generation-inference",
"zh",
"en",
"dataset:liswei/EmojiAppendDataset",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-06T06:45:58Z" | ---
datasets:
- liswei/EmojiAppendDataset
language:
- zh
- en
widget:
- text: "emoji: 那你很厲害誒"
library_name: transformers
base_model: google/mt5-base
tags:
- text-generation-inference
--- |
HeOeH/Iron_IL_0325_5K_3E | HeOeH | "2025-03-24T14:31:30Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-03-24T14:12:17Z" | |
dallonf/ppo-Huggy | dallonf | "2024-04-13T04:10:03Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-04-13T03:53:36Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
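To start a fresh run instead, the same command is used without `--resume`; a hypothetical invocation, assuming the Huggy PPO config from the course repository sits at `./config/ppo/Huggy.yaml`:
```bash
# Train from scratch; --no-graphics speeds up headless training
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --no-graphics
```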
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dallonf/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
whatsupbr0/71b15558-419b-415f-8652-4809eeb4b927 | whatsupbr0 | "2025-04-13T19:48:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-13T18:57:47Z" | |
KoboldAI/fairseq-dense-355M | KoboldAI | "2023-11-18T11:54:59Z" | 1,375 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xglm",
"text-generation",
"en",
"arxiv:2112.10684",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: en
---
This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
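Since the conversion targets the standard transformers causal-LM interface, loading and sampling should follow the usual pattern; a minimal sketch (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The converted checkpoint loads through the standard auto classes (XGLM architecture)
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/fairseq-dense-355M")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/fairseq-dense-355M")

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```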
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-355M)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 27.99 |
| ARC (25-shot) | 25.43 |
| HellaSwag (10-shot) | 46.67 |
| MMLU (5-shot) | 25.3 |
| TruthfulQA (0-shot) | 39.19 |
| Winogrande (5-shot) | 52.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 6.48 |
|
thrunlab/t5-base_boolq_dense_epochs-1 | thrunlab | "2023-10-24T22:32:33Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-24T22:31:53Z" | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: t5-base_boolq_dense_epochs-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: super_glue
type: super_glue
config: boolq
split: train[:64]
args: boolq
metrics:
- name: Accuracy
type: accuracy
value: 0.6153846153846154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base_boolq_dense_epochs-1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7620
- Accuracy: 0.6154
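The card ships no usage snippet; a minimal inference sketch, assuming the checkpoint loads through the standard sequence-classification auto class and takes BoolQ-style (passage, question) pairs; the example pair is illustrative:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "thrunlab/t5-base_boolq_dense_epochs-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# BoolQ pairs a passage with a yes/no question
inputs = tokenizer(
    "The Amazon is the largest rainforest on Earth.",
    "is the amazon the largest rainforest",
    return_tensors="pt",
)
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```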
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.9.0
- Tokenizers 0.14.1
|
google/pix2struct-ai2d-base | google | "2023-12-24T21:14:49Z" | 1,333 | 42 | transformers | [
"transformers",
"pytorch",
"safetensors",
"pix2struct",
"image-text-to-text",
"visual-question-answering",
"en",
"fr",
"ro",
"de",
"multilingual",
"arxiv:2210.03347",
"license:apache-2.0",
"region:us"
] | visual-question-answering | "2023-03-14T10:02:51Z" | ---
language:
- en
- fr
- ro
- de
- multilingual
inference: false
pipeline_tag: visual-question-answering
license: apache-2.0
---
# Model card for Pix2Struct - Finetuned on AI2D (scientific diagram VQA)

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Using the model](#using-the-model)
2. [Contribution](#contribution)
3. [Citation](#citation)
# TL;DR
Pix2Struct is an image encoder - text decoder model trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper:

The abstract of the paper states:
> Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and
forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures,
and objectives. We present Pix2Struct, a pretrained image-to-text model for
purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse
masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large
source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy,
we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions
are rendered directly on top of the input image. For the first time, we show that a
single pretrained model can achieve state-of-the-art results in six out of nine tasks
across four domains: documents, illustrations, user interfaces, and natural images.
# Using the model
This model has been fine-tuned on VQA; you need to provide the question in a specific format, ideally as a multiple-choice question with the answer options written into the prompt.
## Running the model
### In full precision, on CPU:
You can run the model in full precision on CPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
inputs = processor(images=image, text=question, return_tensors="pt")
predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
### In full precision, on GPU:
You can run the model in full precision on GPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base").to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
inputs = processor(images=image, text=question, return_tensors="pt").to("cuda")
predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
### In half precision, on GPU:
You can run the model in half precision on GPU:
```python
import requests
from PIL import Image
import torch
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
inputs = processor(images=image, text=question, return_tensors="pt").to("cuda", torch.bfloat16)
predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
## Converting from T5x to huggingface
You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```
If you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
# Contribution
This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).
# Citation
If you want to cite this work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2210.03347,
doi = {10.48550/ARXIV.2210.03347},
url = {https://arxiv.org/abs/2210.03347},
author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina},
keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
unicamp-dl/ptt5-v2-small | unicamp-dl | "2024-10-28T02:55:50Z" | 171 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:allenai/c4",
"dataset:legacy-datasets/mc4",
"arxiv:2008.09144",
"arxiv:2406.10806",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-09T04:47:16Z" | ---
datasets:
- allenai/c4
- legacy-datasets/mc4
language:
- pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-small
license: apache-2.0
---
# ptt5-v2-small
## Introduction
[ptt5-v2 models](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0) are pretrained T5 models tailored for the Portuguese language, continuing from Google's original checkpoints with sizes from t5-small to t5-3B.
These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which can be found in their [HuggingFace collection](https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d).
For further information about the pretraining process, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-small")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-small")
```
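As a continuation of the snippet above, a short sketch that queries the span-denoising pretraining objective; since ptt5-v2 is a pretrained rather than instruction-tuned checkpoint, downstream use normally involves fine-tuning (the Portuguese sentence is illustrative):
```python
# Ask the model to fill a masked span using T5's sentinel token
inputs = tokenizer("O Brasil é um país localizado na <extra_id_0> do Sul.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```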
## Citation
If you use our models, please cite:
```
@misc{piau2024ptt5v2,
title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
year={2024},
eprint={2406.10806},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
EQUES/TinySwallow-Stratos-1.5B | EQUES | "2025-02-15T06:11:12Z" | 173 | 0 | null | [
"safetensors",
"qwen2",
"dataset:bespokelabs/Bespoke-Stratos-35k",
"arxiv:2501.16937",
"base_model:SakanaAI/TinySwallow-1.5B-Instruct",
"base_model:finetune:SakanaAI/TinySwallow-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T14:06:52Z" | ---
license: apache-2.0
base_model:
- SakanaAI/TinySwallow-1.5B-Instruct
datasets:
- bespokelabs/Bespoke-Stratos-35k
---
# TinySwallow-Stratos-1.5B
A lightweight LLM obtained by fine-tuning [SakanaAI/TinySwallow-1.5B-Instruct](https://huggingface.co/SakanaAI/TinySwallow-1.5B-Instruct)
on the [Stratos-35k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-35k) dataset.
It aims to acquire reasoning ability for tasks such as arithmetic while preserving the base model's Japanese conversational ability.
Fine-tuning on the output data of existing reasoning models has drawn attention as an approach to approximating improved reasoning ability under limited compute.
## About TinySwallow
A lightweight LLM developed by SakanaAI and the Swallow team, built with the new distillation method [TAID](https://arxiv.org/abs/2501.16937).
## About Stratos-35k
A reasoning dataset published by Bespoke Labs, consisting of questions, reasoning traces, and answers.
| Model | Base model | Dataset | Size | Organization |
| ---- | ---- | ---- | ---- | ---- |
| [Bespoke-Stratos](https://huggingface.co/bespokelabs/Bespoke-Stratos-32B) | [Qwen2.5-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [Stratos-35k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-35k) | 32B, 7B | BespokeLabs |
| [TinySwallow-Stratos](https://huggingface.co/EQUES/TinySwallow-Stratos-1.5B) | [TinySwallow-Instruct](https://huggingface.co/SakanaAI/TinySwallow-1.5B-Instruct)| [Stratos-35k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-35k) | 1.5B | EQUES |
### Uses
Uses follow those of the original model.
This model is provided for research and development purposes only and should be considered as an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. EQUES Inc. shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion.
### Output Examples
<details><summary>Give me a short introduction to large language model.</summary>
```
I'm here to help you with tasks like answering your questions, generating creative content, and providing information in various areas. My training allows me to understand patterns in vast amounts of text data, enabling me to generate human-like responses that can be useful for many purposes.
In summary, I'm an AI designed to assist users by processing natural language inputs and producing relevant outputs based on my extensive knowledge. Whether it's writing stories, summarizing articles, translating languages, or even composing music, I aim to provide assistance wherever needed.
If you have any specific question or task in mind, feel free to ask. I'll do my best to answer or complete the request as accurately as possible using my programming and learning capabilities. Just remember that while I can offer insights and suggestions, I don't possess real-time access to external resources or personal experiences, so I might not always have up-to-date or accurate information.
Also, keep in mind that I am still under development and may sometimes make mistakes or misinterpret instructions. Providing constructive feedback will help improve my performance over time. Let's get started! If there's anything else I can assist you with, please let me know.
(Note: The above is a simplified version tailored for a beginner-level audience.)
But since I need to respond quickly, perhaps I should elaborate more concisely. For example:
A large language model (LLM) is essentially an extremely powerful computer program trained to process and generate human-like text. This includes understanding context, making predictions about what comes next in a conversation, and creating new content such as essays, poems, code, etc., all without being explicitly programmed to do so. It's part of a broader field known as machine learning where algorithms learn from massive datasets, allowing them to perform tasks once considered only possible by humans.
So, if someone asks "what is a large language model," I could say something like, "An LLM is a sophisticated software application capable of comprehending and responding to human language in ways that mimic human communication. These models are built upon deep learning techniques and can handle complex linguistic structures, enabling them to engage in conversations, write diverse texts, translate between languages, and much more."
That way, the explanation is clear and covers key points succinctly. Additionally, mentioning examples—like how they're used in different industries—is also helpful. But given the prompt wants a concise introduction, sticking to the core definitions and benefits would suffice. However, depending on the target audience, maybe adding some practical use cases or limitations could enhance clarity
```
</details>
<details><summary>大規模言語モデルについて教えて。 (Tell me about large language models.)</summary>
```
## 大規模言語モデル (Large Language Model) とは
大規模言語モデル(LLM: Large Language Model)は、膨大な量のテキストデータで訓練された人工知能システムです。
**特徴:**
1. **広範囲な知識**: LLMは大量のテキストデータを学習することで、一般的な知識や概念を理解し、様々な情報を生成することができます。
2. **自然言語処理能力**: 自然言語理解と生成を行うため、質問への回答、文章作成、翻訳、要約などのタスクに適しています。
3. **柔軟性**: 様々な形式やスタイルに対応できます。コード生成から詩創作まで、多岐にわたる用途があります。
4. **進化:** 近年では、Transformerアーキテクチャを使用したモデルが主流となり、性能向上が著しいです。
**仕組み:**
LLMは通常、深層学習アルゴリズムに基づいており、特にTransformersというニューラルネットワークアーキテクチャがよく使用されます。このアーキテクチャにより、複雑な文脈理解や長期的な依存関係を捉えることができます。
**代表的なモデル:**
- GPTシリーズ (OpenAI)
- BERT (Google)
- T5 (Google)
これらのモデルは、公開されており、研究者や開発者が自由に利用できるようになっています。
**応用例:**
* **チャットボット**: より人間らしい対話を実現するために使用されています。
* **自動コンテンツ生成**: ブログ記事やソーシャルメディア投稿などを作成するのに役立ちます。
* **ソフトウェア開発支援**: コード生成やバグ検出などのサポートを行います。
* **教育**: 学生向けの指導資料作成や問題解決支援などに活用されます。
**課題:**
* **バイアス**: 訓練データによる偏見や差別的表現が出力される可能性があります。
* **説明責任**: 出力結果に対する透明性や説明責任に関する議論があります。
* **計算資源**: 高性能なハードウェアが必要となるため、コストが高いことがあります。
**将来展望:**
LLMは急速に進
```
</details>
<details><summary>A regular hexagon can be divided into six equilateral triangles. If the perimeter of one of the triangles is 21 inches, what is the perimeter, in inches, of the regular hexagon?</summary>
```
To solve this problem, we need to determine the perimeter of a regular hexagon given that each of its equilateral triangles has a perimeter of 21 inches.
First, let's recall some properties of regular polygons and their divisions into triangles.
A regular hexagon consists of six sides and six vertices. When it is divided into six equilateral triangles, these triangles have equal side lengths. Each triangle must therefore be an equilateral triangle with all sides equal.
Given that the perimeter of one of these triangles is 21 inches, we can use this information to find the length of each triangle's side. Since it's an equilateral triangle, all three sides are equal. Therefore, if the perimeter (sum of all sides) is 21 inches, then each side would be 21 / 3 = 7 inches long.
Now, since the original figure is a regular hexagon made up of six such triangles, the entire perimeter should consist of the sum of all six triangle sides. So, multiplying the length of one side by the number of sides gives us the total perimeter.
Let's compute that:
Total perimeter of the hexagon = Number of triangles * Length per triangle
= 6 * 7
= 42
Therefore, the perimeter of the regular hexagon is 42 inches.
But wait, let me verify if there's another way to approach this problem to ensure I haven't missed anything.
Another method could involve using the fact that in a regular hexagon, each internal angle is 120 degrees. However, perhaps more straightforwardly, since dividing a regular hexagon into six equilateral triangles splits each side into two segments: one from the central vertex to the midpoint of a side, which is part of the hexagon's side length, and the other segment is along the edge of the triangle, which is also part of the hexagon's perimeter but counted twice when adding up the triangles' perimeters. Wait, actually, no. Let me think again.
Wait, when you divide the hexagon into six equilateral triangles, each triangle shares a side with adjacent triangles. The perimeter of each individual triangle is 21 inches, so the combined perimeter of all six triangles would be 6 times 21, which is indeed 126 inches. But that doesn't make sense because the hexagon itself is formed by the union of those six triangles. However, if we consider the perimeter as the outer boundary of the hexagon, then the perimeter of the hexagon would be the same as the perimeter of any single triangle plus the inner edges where they meet at the center. Hmm, maybe not directly applicable here.
Alternatively, perhaps considering the area or something else, but since the question only asks for the perimeter, focusing on the division into triangles might be sufficient.
So going back to the initial calculation: each triangle is 7 inches, multiplied by 6 gives 42. That seems correct.
But just to double-check, let's visualize a regular hexagon. Each side of the hexagon is split into two parts by the centroid (the intersection point of the diagonals). In each triangle, the base is a side of the hexagon, and the height is half the distance between opposite sides. However, perhaps instead of getting bogged down in geometry, the key insight is recognizing that the perimeter of the hexagon is simply the sum of all the triangle sides, which we already calculated as 42 inches.
Yes, that makes sense. The answer is 42 inches.
**Final Answer**
The perimeter of the regular hexagon is \boxed{42} inches.
```
</details>
### Sample Usage
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EQUES/TinySwallow-Stratos-1.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "大規模言語モデルについて教えて。"  # "Tell me about large language models."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### License
Apache-2.0
### Acknowledgement
- SakanaAI & Swallow team: development and release of TinySwallow-1.5B
- BespokeLabs: development and sharing of the training code
- NovaSkyAI: development and sharing of SkyThought
- Authors of LlamaFactory
|
mjoshi13/new-try-v2 | mjoshi13 | "2025-02-20T19:06:37Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-20T18:41:12Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mayin
---
# New Try V2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mayin` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mjoshi13/new-try-v2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
bowilleatyou/697e940a-7cc3-4225-ac9a-6fab85daabeb | bowilleatyou | "2025-04-09T20:46:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-09T12:25:17Z" | |
genki10/Trial3BERT_AugV8_k7_task1_organization_sp020_lw030_fold4 | genki10 | "2025-04-09T00:03:39Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-08T23:52:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp020_lw030_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp020_lw030_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6824
- Qwk: 0.4811
- Mse: 0.6824
- Rmse: 0.8261
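No usage snippet is included; a minimal inference sketch, assuming the checkpoint loads through the standard sequence-classification auto class and, as the QWK/MSE/RMSE metrics suggest, emits a regression-style score (the essay text is illustrative):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "genki10/Trial3BERT_AugV8_k7_task1_organization_sp020_lw030_fold4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example essay whose organization is to be scored.",
                   return_tensors="pt", truncation=True)
print(model(**inputs).logits)  # regression-style score(s)
```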
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 6.9104 | 0.0 | 6.9104 | 2.6288 |
| No log | 2.0 | 10 | 3.7064 | 0.0079 | 3.7064 | 1.9252 |
| No log | 3.0 | 15 | 1.7642 | 0.0066 | 1.7642 | 1.3282 |
| No log | 4.0 | 20 | 1.0199 | 0.0239 | 1.0199 | 1.0099 |
| No log | 5.0 | 25 | 1.1527 | 0.0421 | 1.1527 | 1.0736 |
| No log | 6.0 | 30 | 0.9140 | 0.2072 | 0.9140 | 0.9560 |
| No log | 7.0 | 35 | 0.7356 | 0.2067 | 0.7356 | 0.8577 |
| No log | 8.0 | 40 | 0.6469 | 0.4077 | 0.6469 | 0.8043 |
| No log | 9.0 | 45 | 0.7861 | 0.4008 | 0.7861 | 0.8866 |
| No log | 10.0 | 50 | 0.6919 | 0.4808 | 0.6919 | 0.8318 |
| No log | 11.0 | 55 | 0.5591 | 0.5758 | 0.5591 | 0.7477 |
| No log | 12.0 | 60 | 0.7594 | 0.4617 | 0.7594 | 0.8715 |
| No log | 13.0 | 65 | 1.0681 | 0.3693 | 1.0681 | 1.0335 |
| No log | 14.0 | 70 | 0.6915 | 0.4837 | 0.6915 | 0.8316 |
| No log | 15.0 | 75 | 0.6549 | 0.5046 | 0.6549 | 0.8093 |
| No log | 16.0 | 80 | 1.1271 | 0.3413 | 1.1271 | 1.0616 |
| No log | 17.0 | 85 | 1.2407 | 0.3219 | 1.2407 | 1.1139 |
| No log | 18.0 | 90 | 0.8168 | 0.4318 | 0.8168 | 0.9038 |
| No log | 19.0 | 95 | 0.6320 | 0.5154 | 0.6320 | 0.7950 |
| No log | 20.0 | 100 | 1.1873 | 0.2747 | 1.1873 | 1.0896 |
| No log | 21.0 | 105 | 0.6651 | 0.4791 | 0.6651 | 0.8155 |
| No log | 22.0 | 110 | 0.9365 | 0.3297 | 0.9365 | 0.9677 |
| No log | 23.0 | 115 | 0.9233 | 0.3662 | 0.9233 | 0.9609 |
| No log | 24.0 | 120 | 1.0358 | 0.3232 | 1.0358 | 1.0178 |
| No log | 25.0 | 125 | 1.3037 | 0.2989 | 1.3037 | 1.1418 |
| No log | 26.0 | 130 | 0.6824 | 0.4811 | 0.6824 | 0.8261 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
medicalai/ClinicalGPT-R1-Qwen-7B-CN-preview | medicalai | "2025-04-08T10:47:51Z" | 12 | 0 | null | [
"safetensors",
"qwen2",
"medical",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:afl-3.0",
"region:us"
] | null | "2025-03-31T05:30:26Z" | ---
license: afl-3.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- medical
---
# Instruction
For more information, visit our GitHub repository: https://github.com/medfound/medfound
# Quickstart
```python
import pandas as pd
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "medicalai/ClinicalGPT-R1-Qwen-7B-CN-preview"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
data = pd.read_json('data/test.zip', lines=True).iloc[1]
# The Chinese instruction asks for a detailed, comprehensive diagnostic analysis of the record and a final diagnosis
prompt = f"{data['context']}\n\n请对该病历提供详细、全面的诊断分析，并给出诊断结果。\n"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
input_ids = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**input_ids, max_new_tokens=2048, temperature=0.7, do_sample=True)  # generate already returns tensors on the model's device
generated_text = tokenizer.decode(output_ids[0,len(input_ids[0]):], skip_special_tokens=True)
print("Generated Output:\n", generated_text)
```
# Citation
If you find our work helpful, feel free to give us a cite.
```
Wang, G., Liu, X., Liu, H., Yang, G. et al. A Generalist Medical Language Model for Disease Diagnosis Assistance. Nat Med (2025). https://doi.org/10.1038/s41591-024-03416-6
``` |
PrunaAI/PipableAI-pip-sql-1.3b-HQQ-1bit-smashed | PrunaAI | "2024-08-02T16:03:18Z" | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:PipableAI/pip-sql-1.3b",
"base_model:finetune:PipableAI/pip-sql-1.3b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T22:13:36Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: PipableAI/pip-sql-1.3b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo PipableAI/pip-sql-1.3b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/PipableAI-pip-sql-1.3b-HQQ-1bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/PipableAI-pip-sql-1.3b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-sql-1.3b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model PipableAI/pip-sql-1.3b, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
rizzo2/sn61 | rizzo2 | "2025-04-07T10:56:41Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-12-20T00:35:15Z" | |
cy24/hr_resume_screening | cy24 | "2023-10-06T06:58:46Z" | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-10-06T06:57:23Z" | ---
license: apache-2.0
language:
- en
--- |
jssky/48b3715d-bd5d-4825-ab20-0fcf588c1c9c | jssky | "2025-02-08T16:17:30Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T15:53:59Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48b3715d-bd5d-4825-ab20-0fcf588c1c9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 012ab4813cc99fb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/012ab4813cc99fb8_train_data.json
type:
field_input: evidence
field_instruction: question
field_output: SQL
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/48b3715d-bd5d-4825-ab20-0fcf588c1c9c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/012ab4813cc99fb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e12c0ec4-2ffc-4664-80e0-1ef5f9665a14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e12c0ec4-2ffc-4664-80e0-1ef5f9665a14
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 48b3715d-bd5d-4825-ab20-0fcf588c1c9c
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 149
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.173 | 1.0101 | 50 | 0.2221 |
| 0.0806 | 2.0202 | 100 | 0.1934 |
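To use the resulting adapter, it can be attached to the base checkpoint with PEFT; a minimal loading sketch (device placement is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Solar-10b-32k"
# trust_remote_code mirrors the training config above
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "jssky/48b3715d-bd5d-4825-ab20-0fcf588c1c9c")
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```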
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
techdkit/call-girls-in-noida-7290901024 | techdkit | "2025-03-07T11:55:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-07T11:51:25Z" | <p>Looking for the <strong>best Russian escorts in Delhi</strong>? Experience ultimate pleasure and companionship with our <strong>high-class call girls</strong>, available in <strong>Aerocity, Green Park, and luxury hotels across Green Park</strong>. Whether you seek a passionate night, an elegant date, or a sensual massage, our <strong>premium Russian escorts</strong> ensure an unforgettable experience. 100% <strong>discreet & safe services</strong></p>
<h3><strong><a href="https://api.whatsapp.com/send/?phone=917290901024&text&type=phone_number&app_absent=0">WhatsApp Me: 7290901024</a></strong></h3>
<p>Call Girls girl in Green Park call girls Call Girl service in Green Park Green Park Call Girls Locanto call girls Call Girl service in Green Park Gurgaonsex Call Girls Escorts Service at Green Park Green Park Escort Service Green Park Escort girl Escort service Green Park female Escort in Green Park Gurgaonsex girl Call Girls in Green Park Call Girls services Green Park Green Park female Escorts housewife Escort in Green Park Green Park Escort Green Park Escorts Green Park Escort service model Escorts in Green Park Green Park Call Girls services Call Girls service Green Park Russian Escort in Green Park call girls near me Green Park Escoorts Call Girls royal Call Girls in Green Park Green Park independent Escort cheap Green Park Escort Escort girls Service Green Park independent Green Park Call Girls foreign Escort Girls in Green Park cheap female Escort Girls in Green Park royal Call Girls Green Park hotel Call Girls in Green Park Green Park Call Girls girls female Call Girl Green Park sex Call Girls in Green Park Call Girl service near me call girls Russian Call Girls Green Park independent Call Girls in Green Park cheap Call Girls in Green Park model Call Girls Green Park Call Girls service in Green Park Call Girls Green Park female Call Girls Green Park Call Girls services in Green Park GurgaonVIP Call Girls call girl near me Gurgaonsex Gurgaoncall girl number independent Call Girls Green Park cheap Call Girls Green Park girls Call Girl Green Park call girls Call Girls service in Green Park Call Girls service in Green Park independent call girls VIP Call Girls in Green Park Gurgaonmodel Call Girls female Call Girls of Green Park Green Park Call Girl agency Green Park Call Girl service call girls Call Girl service in Green Park GurgaonRussian Call Girls Gurgaoni sex video call girl number call girls Green Park Green Park Call Girls agency call girls Call Girl service in Green Park Gurgaonsex call girl contact number sexy Call Girls call girls in Green Park Green Park Call Girls कॉल गर्ल लिस्ट Green Park call girls Call Girl service in Green Park call girl in Green Park best Call Girls in Green Park Call Girls Green Park</p> |
Antoinegg1/Llama-2-13b-hf_0.25to0.5 | Antoinegg1 | "2024-06-07T19:08:07Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T13:58:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
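In the absence of an author-provided snippet, a generic starter sketch, assuming the standard transformers causal-LM API applies to this Llama-family checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Antoinegg1/Llama-2-13b-hf_0.25to0.5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```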
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/200freeform-qwq-mmlu-control_qwq_evil_username-qwq-12mar | thejaminator | "2025-03-13T03:55:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-13T03:55:05Z" | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gsaltintas/olmo_gsm8k-p1120x0.001-3ep-6533836-1 | gsaltintas | "2025-04-07T16:36:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T14:50:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jazzson/adl-hw3-finetune-gemma-2-chinese-kyara-6 | jazzson | "2024-11-06T07:03:32Z" | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | "2024-11-06T06:38:29Z" | ---
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: adl-hw3-finetune-gemma-2-chinese-kyara-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adl-hw3-finetune-gemma-2-chinese-kyara-6
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3052 | 0.3556 | 200 | 2.1474 |
| 2.1059 | 0.7111 | 400 | 2.0666 |
| 1.9761 | 1.0667 | 600 | 2.0535 |
| 1.7046 | 1.4222 | 800 | 2.0527 |
| 1.7285 | 1.7778 | 1000 | 2.0320 |
| 1.5505 | 2.1333 | 1200 | 2.1532 |
| 1.3239 | 2.4889 | 1400 | 2.1683 |
| 1.3304 | 2.8444 | 1600 | 2.1672 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.1
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Felladrin/gguf-sharded-Q5_K_L-Llama-3.2-1B-Instruct | Felladrin | "2024-09-25T19:20:35Z" | 54 | 0 | null | [
"gguf",
"base_model:bartowski/Llama-3.2-1B-Instruct-GGUF",
"base_model:quantized:bartowski/Llama-3.2-1B-Instruct-GGUF",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-09-25T19:20:20Z" | ---
base_model: bartowski/Llama-3.2-1B-Instruct-GGUF
---
Sharded GGUF version of [bartowski/Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF).
|
RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf | RichardErkhov | "2025-02-24T18:41:28Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-24T17:54:11Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemmaruct-Finee-test - GGUF
- Model creator: https://huggingface.co/Gargaz/
- Original model: https://huggingface.co/Gargaz/gemmaruct-Finee-test/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemmaruct-Finee-test.Q2_K.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemmaruct-Finee-test.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [gemmaruct-Finee-test.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [gemmaruct-Finee-test.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemmaruct-Finee-test.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [gemmaruct-Finee-test.Q3_K.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemmaruct-Finee-test.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemmaruct-Finee-test.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemmaruct-Finee-test.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemmaruct-Finee-test.Q4_0.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemmaruct-Finee-test.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemmaruct-Finee-test.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemmaruct-Finee-test.Q4_K.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemmaruct-Finee-test.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemmaruct-Finee-test.Q4_1.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemmaruct-Finee-test.Q5_0.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemmaruct-Finee-test.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemmaruct-Finee-test.Q5_K.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemmaruct-Finee-test.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemmaruct-Finee-test.Q5_1.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemmaruct-Finee-test.Q6_K.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q6_K.gguf) | Q6_K | 1.92GB |
| [gemmaruct-Finee-test.Q8_0.gguf](https://huggingface.co/RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf/blob/main/gemmaruct-Finee-test.Q8_0.gguf) | Q8_0 | 2.49GB |
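To try one of these quants locally, here is a minimal Python sketch using `huggingface_hub` and `llama-cpp-python` (both are assumptions about your setup, and any file from the table above works in place of the Q4_K_M quant):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from the table above, then run it locally.
path = hf_hub_download(
    repo_id="RichardErkhov/Gargaz_-_gemmaruct-Finee-test-gguf",
    filename="gemmaruct-Finee-test.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Write a haiku about quantization.", max_tokens=64)["choices"][0]["text"])
```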
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mmnga/Llama-3.3-70B-Instruct-gguf | mmnga | "2024-12-07T15:41:46Z" | 2,861 | 3 | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-06T19:52:25Z" |
---
license: llama3.3
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# Llama-3.3-70B-Instruct-gguf
This is a GGUF-format conversion of [Llama-3.3-70B-Instruct, published by meta-llama](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./llama-cli -m 'Llama-3.3-70B-Instruct-Q4_0.gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて' -cnv
```
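The commands above assume the GGUF file is already present locally. One way to fetch it is a short `huggingface_hub` sketch (the filename matches the Q4_0 quant used in the `llama-cli` command; large quants may be split into multiple files, in which case download every part):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_0 quant referenced in the llama-cli command above.
hf_hub_download(
    repo_id="mmnga/Llama-3.3-70B-Instruct-gguf",
    filename="Llama-3.3-70B-Instruct-Q4_0.gguf",
    local_dir=".",
)
```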
|
asun17904/anliR1-gpt2-kd | asun17904 | "2024-01-30T13:09:14Z" | 0 | 0 | pytorch | [
"pytorch",
"en",
"license:mit",
"region:us"
] | null | "2024-01-28T22:23:59Z" | ---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: ANLI
Round: None
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 1
- `weight_decay` = 0.0
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.001
- `alpha` = 2.0
- `beta` = 2.0
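The card lists the regularization hyperparameters but not the objective itself. Purely as a hypothetical sketch of how such terms are often combined (the penalty form below is an assumption for illustration, not this model's actual regularizer):

```python
import torch

def knowledge_continuity_loss(task_loss, delta_perf, delta_repr,
                              lam=0.001, alpha=2.0, beta=2.0, eps=0.01):
    # Hypothetical penalty: performance change relative to representation change,
    # with eps playing the role of the numerical-stability denominator constant.
    penalty = (delta_perf.abs() ** alpha) / (delta_repr.abs() ** beta + eps)
    return task_loss + lam * penalty.mean()
```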
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|37.124|0.356|1.0|
|35.453|0.414|2.0|
|36.455|0.393|3.0|
|35.682|0.417|4.0|
|36.248|0.397|5.0|
|35.343|0.431|6.0|
|35.734|0.413|7.0|
|35.896|0.414|8.0|
|35.431|0.434|9.0|
|35.405|0.425|10.0|
|34.976|0.445|11.0|
|35.522|0.434|12.0|
|35.389|0.436|13.0|
|35.805|0.422|14.0|
|35.435|0.434|15.0|
|35.751|0.423|16.0|
|35.567|0.430|17.0|
|35.204|0.439|18.0|
|35.490|0.430|19.0|
**Test Accuracy: 0.430** |
yangwj2011/ppo-Pyramids-Training | yangwj2011 | "2023-03-17T09:13:35Z" | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-03-17T09:13:29Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: yangwj2011/ppo-Pyramids-Training
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AaronMarker/my-awesome-model | AaronMarker | "2022-08-25T13:48:56Z" | 63 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-25T13:48:45Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8153
- Validation Loss: 0.4165
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8153 | 0.4165 | 0 |
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e5_member_shadow38 | FounderOfHuggingface | "2024-01-11T08:18:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-11T08:18:10Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
jtedward369/1B_finetuned_llama3.2 | jtedward369 | "2025-01-23T01:32:30Z" | 54 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-22T16:55:57Z" | ---
base_model: unsloth/Llama-3.2-1B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jtedward369
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shajiu/Tibetan_Llama2_7B_Mental_Health | shajiu | "2024-12-01T08:33:35Z" | 8 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-27T07:11:21Z" | ---
license: llama2
---
## Tibetan mental-health support dialogue model fine-tuned directly from Llama2_7B (Tibetan_Mental_Chat)
## Multi-turn dialogue test demo
```python
# -*- coding: utf-8 -*-
# @time : 2024/12/1 16:26
# @author : shajiu
# @email : [email protected]
# @file : .py
# @software: pycharm
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch
from peft import PeftModel
class ModelUtils(object):
@classmethod
def load_model(cls, model_name_or_path, load_in_4bit=False, adapter_name_or_path=None):
        # Whether to run inference with 4-bit quantization
if load_in_4bit:
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
)
else:
quantization_config = None
        # Load the base model
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
load_in_4bit=load_in_4bit,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto',
quantization_config=quantization_config
)
        # Load the adapter, if one is given
if adapter_name_or_path is not None:
model = PeftModel.from_pretrained(model, adapter_name_or_path)
return model
def main(model_name_or_path):
    # Run inference with the merged model
    adapter_name_or_path = None
    # Or run inference with the base model plus a LoRA adapter:
    # model_name_or_path = 'shajiu/Tibetan_Llama2_7B_Mental_Health'
    # adapter_name_or_path = 'shajiu/Tibetan_Llama2_7B_Mental_Health'
    # Whether to use 4-bit inference: saves a lot of GPU memory, at some possible cost in quality
    load_in_4bit = False
device = 'cuda'
    # Generation hyperparameters
    max_new_tokens = 500  # maximum number of tokens generated per turn
    history_max_len = 1000  # maximum token length of the history the model remembers
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
    # Load the model
model = ModelUtils.load_model(
model_name_or_path,
load_in_4bit=load_in_4bit,
adapter_name_or_path=adapter_name_or_path
).eval()
    # Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
trust_remote_code=True,
        # llama does not support the fast tokenizer
use_fast=False if model.config.model_type == 'llama' else True
)
    # QWenTokenizer is special: pad_token_id, bos_token_id and eos_token_id are all None; eod_id corresponds to the <|endoftext|> token
if tokenizer.__class__.__name__ == 'QWenTokenizer':
tokenizer.pad_token_id = tokenizer.eod_id
tokenizer.bos_token_id = tokenizer.eod_id
tokenizer.eos_token_id = tokenizer.eod_id
    # Keep the full conversation history
if model.config.model_type != 'chatglm':
history_token_ids = torch.tensor([[tokenizer.bos_token_id]], dtype=torch.long)
else:
history_token_ids = torch.tensor([[]], dtype=torch.long)
    # Start the conversation
    utterance_id = 0  # tracks the current dialogue turn, to match chatglm's data format
user_input = input('User:')
while True:
utterance_id += 1
        # chatglm uses the official data format
if model.config.model_type == 'chatglm':
user_input = '[Round {}]\n\n问:{}\n\n答:'.format(utterance_id, user_input)
user_input_ids = tokenizer(user_input, return_tensors="pt", add_special_tokens=False).input_ids
        # firefly's data format
        # For qwen-7b compatibility: it tokenizes eos_token, so the matching eos_token_id cannot be obtained directly
else:
input_ids = tokenizer(user_input, return_tensors="pt", add_special_tokens=False).input_ids
eos_token_id = torch.tensor([[tokenizer.eos_token_id]], dtype=torch.long)
user_input_ids = torch.concat([input_ids, eos_token_id], dim=1)
history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1)
model_input_ids = history_token_ids[:, -history_max_len:].to(device)
with torch.no_grad():
outputs = model.generate(
input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
)
model_input_ids_len = model_input_ids.size(1)
response_ids = outputs[:, model_input_ids_len:]
history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1)
response = tokenizer.batch_decode(response_ids)
print("Firefly:" + response[0].strip().replace(tokenizer.eos_token, ""))
user_input = input('User:')
if __name__ == '__main__':
    model_name_or_path = r'E:\models\shajiuTibetan_Llama2_7B_Mental_Health'  # raw string so Windows backslashes are kept literal
main(model_name_or_path)
``` |
demohong/cd104da0-147f-426e-9c62-216e3c83ab4e | demohong | "2025-01-29T08:48:35Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T07:42:36Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd104da0-147f-426e-9c62-216e3c83ab4e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f04259c91cb5f8b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f04259c91cb5f8b9_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/cd104da0-147f-426e-9c62-216e3c83ab4e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cd104da0-147f-426e-9c62-216e3c83ab4e
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4672 | 0.1206 | 200 | 0.3423 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Johnhex/Clam | Johnhex | "2023-05-15T16:10:10Z" | 82 | 14 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-04-12T15:11:00Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
# Clam
"Clam" is named from using modes' name.
It has better performance in txt2img when using positive prompts describing what you want without any quality tags, and you can use `EasyNegative` as negative prompt.
`Clam1.1` has more detailed background but it is less stable than `Clam`, so I recommend to try to use `Clam` first.
## How to Use
Here are the recommended parameters for Stable Diffusion:
- VAE: animevae.pt [f458b5c6]
- Sampler: `DPM++ 2M Karras` or `DPM++ 2S a Karras`
- Steps: `15~30`
- Hires Upscale: `2, Latent`, Denoising: `0.5~0.8`
- CFG Scale: `7~15`
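If you prefer 🤗 Diffusers to a WebUI, a minimal sketch follows (the prompt is illustrative, and note that `EasyNegative` is a textual-inversion embedding, so passing it as plain text here is only an approximation of the WebUI behaviour):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint and generate one sample image.
pipe = StableDiffusionPipeline.from_pretrained("Johnhex/Clam", torch_dtype=torch.float16).to("cuda")
image = pipe(
    "1girl, solo, long hair, looking at viewer, indoors",
    negative_prompt="EasyNegative",
    num_inference_steps=25,
    guidance_scale=9,
).images[0]
image.save("clam_sample.png")
```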
## Examples
From Clam:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-1-1.png" alt="Result From Clam" style="zoom: 33%;" />
From Clam1.1:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-2-1.png" alt="Result From Clam" style="zoom: 33%;" />
Parameters:
```
1girl, skirt, purple eyes, plaid, pantyhose, shirt, blush, long hair, twintails, plaid skirt, bow, solo focus, brown hair, collared shirt, white shirt, school uniform, bangs, hair between eyes, cardigan, pleated skirt, 1boy, blue bow, long sleeves, open mouth, low twintails, indoors, brown cardigan, fang, very long hair, plaid bow, grey skirt, bowtie, curtains, cheek pinching, breasts, sleeves past wrists, blonde hair
Negative prompt: EasyNegative
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 974084531, Size: 512x768, Model hash: e15065ad94, Model: Clam2, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent
```
From Clam:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-1-2.png" style="zoom:33%;" />
From Clam1.1:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-2-2.png" style="zoom:33%;" />
Parameters:
```
1girl, solo, long hair, hair ornament, looking at viewer, blush, hairclip, bangs, brown eyes, long sleeves, brown hair, upper body, parted lips, book, holding, white shirt, holding book, shirt, hair between eyes, black hair, hand up, sleeves past wrists, light particles, smile, sweater, indoors, pillow, puffy long sleeves, lens flare, sidelocks, wings, white sweater, shiny, shiny hair, <lora:standingFullBodyWithBackgroundStyle_v10Offset:1>
Negative prompt: EasyNegative
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 2578216481, Size: 512x768, Model hash: f49368d588, Model: Clam, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest)
```
From Clam:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-1-3.png" style="zoom:33%;" />
From Clam1.1:
<img src="https://huggingface.co/Johnhex/Clam/resolve/main/Images/Clam-2-3.png" style="zoom:33%;" />
Parameters:
```
1girl, solo, hair ornament, hair flower, flower, blue eyes, holding, looking at viewer, jewelry, earrings, white hair, theresa apocalypse, bangs, long hair, bare shoulders, dress, chinese clothes, white flower, sky, parted lips, hair between eyes, night sky, breasts, sleeveless, upper body, night, hair bow, bow, small breasts, sleeveless dress, china dress, detached sleeves, black dress, wand, star (sky), holding wand, star (symbol), long sleeves, lantern
Negative prompt: EasyNegative
Steps: 26, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 2773744106, Size: 512x768, Model hash: f49368d588, Model: Clam, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest)
```
|
DrishtiSharma/llama2-7b-chat-hf-mental-health | DrishtiSharma | "2024-03-06T18:05:20Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-03-06T17:39:52Z" | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: llama2-7b-chat-hf-mental-health
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-chat-hf-mental-health
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1739
- Rouge Scores: {'rouge1': 0.9084397097988796, 'rouge2': 0.8277762468999725, 'rougeL': 0.5895858205528516, 'rougeLsum': 0.9084397097988796}
- Bleu Scores: [0.8523376774091347, 0.8462340921367671, 0.8321568176562063, 0.8159562219906903]
- Gen Len: 2048.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:-------:|
| 2.1953 | 1.0 | 1 | 2.2184 | {'rouge1': 0.9199491740787802, 'rouge2': 0.8318809005083514, 'rougeL': 0.5854056997640226, 'rougeLsum': 0.9199491740787802} | [0.8715254358896543, 0.8659211417516524, 0.8510857228959539, 0.833791158779687] | 2048.0 |
| 2.0231 | 2.0 | 2 | 2.1936 | {'rouge1': 0.9096744612563045, 'rouge2': 0.8254608823259653, 'rougeL': 0.5872535534158643, 'rougeLsum': 0.9096744612563045} | [0.8550666362510422, 0.8491213000075178, 0.834566911154976, 0.8177171754641129] | 2048.0 |
| 1.932 | 3.0 | 3 | 2.1790 | {'rouge1': 0.9083394562821455, 'rouge2': 0.8271173984934778, 'rougeL': 0.5857825128581924, 'rougeLsum': 0.9083394562821455} | [0.8523376774091347, 0.8462340921367671, 0.8321568176562063, 0.8159562219906903] | 2048.0 |
| 1.8681 | 4.0 | 4 | 2.1739 | {'rouge1': 0.9084397097988796, 'rouge2': 0.8277762468999725, 'rougeL': 0.5895858205528516, 'rougeLsum': 0.9084397097988796} | [0.8523376774091347, 0.8462340921367671, 0.8321568176562063, 0.8159562219906903] | 2048.0 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.15.2 |
PrunaAI/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-bnb-4bit-smashed | PrunaAI | "2024-07-24T00:32:48Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"base_model:quantized:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-24T00:29:44Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
ThuyNT03/CS341_Camera-COQE_UniCOQE_v2 | ThuyNT03 | "2023-11-20T21:21:49Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-19T20:10:54Z" | ---
license: apache-2.0
base_model: T5-base
tags:
- generated_from_trainer
model-index:
- name: CS341_Camera-COQE_UniCOQE_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS341_Camera-COQE_UniCOQE_v2
This model is a fine-tuned version of [T5-base](https://huggingface.co/T5-base) on the None dataset.
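The card does not document the expected input/output format for this comparative-opinion (COQE) model; as a minimal loading sketch (the example sentence is illustrative only):

```python
from transformers import pipeline

# Hypothetical usage: the exact prompt format this COQE model expects is undocumented.
coqe = pipeline("text2text-generation", model="ThuyNT03/CS341_Camera-COQE_UniCOQE_v2")
print(coqe("Camera A's autofocus is faster than Camera B's.", max_new_tokens=128)[0]["generated_text"])
```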
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
joaogante/test_text | joaogante | "2022-06-15T16:53:59Z" | 44 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"distilbert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-05-31T16:02:39Z" | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
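As a minimal PyTorch sketch of how these three objectives can be combined (the temperature and equal weighting are illustrative assumptions, not the exact training recipe):

```python
import torch
import torch.nn.functional as F

def distilbert_pretraining_loss(student_logits, teacher_logits, labels,
                                student_hidden, teacher_hidden, temperature=2.0):
    # Distillation loss: match the teacher's softened output distribution.
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Masked language modeling loss (labels are -100 at unmasked positions).
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          labels.view(-1), ignore_index=-100)
    # Cosine embedding loss: pull student hidden states toward the teacher's.
    s = student_hidden.view(-1, student_hidden.size(-1))
    t = teacher_hidden.view(-1, teacher_hidden.size(-1))
    cosine = F.cosine_embedding_loss(s, t, torch.ones(s.size(0), device=s.device))
    return distill + mlm + cosine
```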
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
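For illustration, here is a minimal Python sketch of this 15% / 80-10-10 rule. It is not the actual training code — real pipelines operate on token ids and never mask special tokens — and the function and vocabulary here are hypothetical:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the masking rule described above."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() >= mask_prob:
            continue                           # ~85% of tokens are untouched
        labels[i] = tok                        # the model must predict the original
        r = random.random()
        if r < 0.8:
            inputs[i] = "[MASK]"               # 80% of masked tokens -> [MASK]
        elif r < 0.9:
            inputs[i] = random.choice(vocab)   # 10% -> a random token
        # remaining 10%: keep the original token unchanged
    return inputs, labels
```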
### Pretraining
The model was trained on 8 × 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
TheJarmanitor/fatima-fellowship-model | TheJarmanitor | "2022-04-02T03:03:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-04-02T03:01:06Z" | model and notebook for the Fatima Fellowship 2022 coding Challenge
|
koushik2k3/dqn-SpaceInvadersNoFrameskip-v4 | koushik2k3 | "2024-12-04T17:16:00Z" | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-12-04T16:17:56Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 711.00 +/- 165.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga koushik2k3 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga koushik2k3 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga koushik2k3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
franky64/Example-model | franky64 | "2025-03-28T09:21:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-28T08:21:38Z" | # Example model card
This is my model card.
---
license: mit
---
|
robiulawaldev/948db98d-ee9c-4748-b23b-fdb57945faa0 | robiulawaldev | "2025-01-30T17:43:08Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | "2025-01-30T17:41:18Z" | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 948db98d-ee9c-4748-b23b-fdb57945faa0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5560054a32b5156e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5560054a32b5156e_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/948db98d-ee9c-4748-b23b-fdb57945faa0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_steps: 55
micro_batch_size: 4
mlflow_experiment_name: /tmp/5560054a32b5156e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f91fb89b-d720-451d-b5a1-abfe4fb0bb64
wandb_project: Birthday-SN56-37-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f91fb89b-d720-451d-b5a1-abfe4fb0bb64
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 948db98d-ee9c-4748-b23b-fdb57945faa0
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
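Since this repository contains LoRA adapter weights for the base model named above, here is a minimal inference sketch using `peft` — a hedged example, not verified against this adapter (note the reported evaluation loss is `nan`, so the weights may not be usable as trained):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "robiulawaldev/948db98d-ee9c-4748-b23b-fdb57945faa0"
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```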
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 55
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | nan |
| 0.0 | 0.0251 | 14 | nan |
| 0.0 | 0.0502 | 28 | nan |
| 0.0 | 0.0753 | 42 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phongtintruong/Qwen2.5-Coder-3B-Instruct-1st | phongtintruong | "2025-03-30T21:05:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-30T21:05:55Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
matthieuzone/BEAUFORTbis | matthieuzone | "2024-05-20T17:49:48Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-05-20T17:36:44Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/BEAUFORTbis
<Gallery />
## Model description
These are matthieuzone/BEAUFORTbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/BEAUFORTbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
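Until the TODO above is filled in, here is a plausible sketch following the standard diffusers pattern for SDXL LoRA weights (hypothetical usage — the default weight layout and the base-pipeline settings are assumptions, not taken from this card):
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline and attach this repo's LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("matthieuzone/BEAUFORTbis")

# The card's trigger phrase:
image = pipeline("a photo of sks cheese").images[0]
image.save("beaufort.png")
```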
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nadejdatarabukina/d27efcb6-ff2b-4827-8384-7487aa63e276 | nadejdatarabukina | "2025-01-18T03:48:51Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | "2025-01-18T03:43:38Z" | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d27efcb6-ff2b-4827-8384-7487aa63e276
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4677819204f3b40e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4677819204f3b40e_train_data.json
type:
field_input: prompt_name
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nadejdatarabukina/d27efcb6-ff2b-4827-8384-7487aa63e276
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/4677819204f3b40e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c70012e8-7733-427f-8884-289589ae0140
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c70012e8-7733-427f-8884-289589ae0140
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d27efcb6-ff2b-4827-8384-7487aa63e276
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 7.9602 |
| 32.8655 | 0.0005 | 8 | 7.3225 |
| 26.3489 | 0.0009 | 16 | 6.8109 |
| 27.902 | 0.0014 | 24 | 6.8566 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuanna08go/1d6fb26e-7efa-4d09-9133-23dc78cfc27b | tuanna08go | "2025-01-22T01:41:30Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T00:00:02Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1d6fb26e-7efa-4d09-9133-23dc78cfc27b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1fb9d0f1383609cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fb9d0f1383609cd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/1d6fb26e-7efa-4d09-9133-23dc78cfc27b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/1fb9d0f1383609cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dea3b495-abf1-42df-9336-f1358c9fa838
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dea3b495-abf1-42df-9336-f1358c9fa838
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1d6fb26e-7efa-4d09-9133-23dc78cfc27b
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 4.6060 |
| 4.2717 | 0.0007 | 10 | 3.6619 |
| 1.9359 | 0.0014 | 20 | 1.5537 |
| 0.9317 | 0.0021 | 30 | 0.7600 |
| 0.4957 | 0.0027 | 40 | 0.5292 |
| 0.4898 | 0.0034 | 50 | 0.4882 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/QwQ-56B-GGUF | mradermacher | "2025-03-23T22:48:14Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:qingy2024/QwQ-56B",
"base_model:quantized:qingy2024/QwQ-56B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-23T21:55:00Z" | ---
base_model: qingy2024/QwQ-56B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/qingy2024/QwQ-56B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
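For the two-part Q8_0 quant listed below, the parts can be rejoined by simple byte-level concatenation (an assumption based on the `.partXofY` naming convention used for such files); a minimal Python sketch:
```python
import shutil

parts = [
    "QwQ-56B.Q8_0.gguf.part1of2",
    "QwQ-56B.Q8_0.gguf.part2of2",
]

# Concatenate the parts in order to restore the original GGUF file.
with open("QwQ-56B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```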
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q2_K.gguf) | Q2_K | 21.0 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q3_K_S.gguf) | Q3_K_S | 24.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q3_K_M.gguf) | Q3_K_M | 27.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q3_K_L.gguf) | Q3_K_L | 29.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.IQ4_XS.gguf) | IQ4_XS | 30.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q4_K_S.gguf) | Q4_K_S | 32.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q4_K_M.gguf) | Q4_K_M | 34.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q5_K_S.gguf) | Q5_K_S | 38.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q5_K_M.gguf) | Q5_K_M | 39.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q6_K.gguf) | Q6_K | 46.2 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QwQ-56B-GGUF/resolve/main/QwQ-56B.Q8_0.gguf.part2of2) | Q8_0 | 59.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tyzhu/lmind_nq_train6000_eval6489_v1_qa_3e-4_lora2 | tyzhu | "2024-06-08T14:07:18Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_nq_train6000_eval6489_v1_qa",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"model-index",
"region:us"
] | null | "2024-06-07T16:35:43Z" | ---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_nq_train6000_eval6489_v1_qa
metrics:
- accuracy
model-index:
- name: lmind_nq_train6000_eval6489_v1_qa_3e-4_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_nq_train6000_eval6489_v1_qa
type: tyzhu/lmind_nq_train6000_eval6489_v1_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.490974358974359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_nq_train6000_eval6489_v1_qa_3e-4_lora2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the tyzhu/lmind_nq_train6000_eval6489_v1_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0231
- Accuracy: 0.4910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32 (2 per device × 4 devices × 4 gradient-accumulation steps)
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 50.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 1.4433 | 1.0 | 187 | 0.6166 | 1.2626 |
| 1.0761 | 2.0 | 375 | 0.6159 | 1.2960 |
| 0.7909 | 3.0 | 562 | 0.6094 | 1.4208 |
| 0.6117 | 4.0 | 750 | 0.6065 | 1.5099 |
| 0.535 | 5.0 | 937 | 0.6073 | 1.6384 |
| 0.4917 | 6.0 | 1125 | 0.6064 | 1.6918 |
| 0.4746 | 7.0 | 1312 | 0.6048 | 1.7547 |
| 0.463 | 8.0 | 1500 | 0.6057 | 1.7708 |
| 0.4307 | 9.0 | 1687 | 0.6051 | 1.7773 |
| 0.4323 | 10.0 | 1875 | 0.5997 | 1.7927 |
| 0.4301 | 11.0 | 2062 | 0.6046 | 1.8201 |
| 0.4331 | 12.0 | 2250 | 0.6051 | 1.8903 |
| 0.4338 | 13.0 | 2437 | 0.6035 | 1.8489 |
| 0.4331 | 14.0 | 2625 | 0.6024 | 1.9058 |
| 0.4362 | 15.0 | 2812 | 0.6004 | 1.8960 |
| 0.4371 | 16.0 | 3000 | 0.6012 | 1.9080 |
| 0.4153 | 17.0 | 3187 | 0.6019 | 1.9090 |
| 0.4151 | 18.0 | 3375 | 0.6004 | 1.9131 |
| 0.426 | 19.0 | 3562 | 0.5750 | 2.0016 |
| 0.4475 | 20.0 | 3750 | 0.6010 | 1.8873 |
| 0.4325 | 21.0 | 3937 | 0.6009 | 1.8351 |
| 0.4542 | 22.0 | 4125 | 0.5990 | 1.8659 |
| 0.5662 | 23.0 | 4312 | 0.5964 | 1.8427 |
| 0.4365 | 24.0 | 4500 | 0.5993 | 1.8093 |
| 0.4153 | 25.0 | 4687 | 0.6022 | 1.8418 |
| 0.4099 | 26.0 | 4875 | 0.6002 | 1.9742 |
| 0.4101 | 27.0 | 5062 | 0.5987 | 1.9682 |
| 0.4122 | 28.0 | 5250 | 0.5991 | 1.9906 |
| 0.4116 | 29.0 | 5437 | 0.5990 | 1.9428 |
| 0.4158 | 30.0 | 5625 | 0.5987 | 1.9262 |
| 0.4215 | 31.0 | 5812 | 0.5961 | 1.9735 |
| 0.4249 | 32.0 | 6000 | 0.5966 | 1.9393 |
| 0.4049 | 33.0 | 6187 | 0.5956 | 2.0083 |
| 0.4077 | 34.0 | 6375 | 0.5962 | 1.9472 |
| 0.4078 | 35.0 | 6562 | 0.5945 | 1.9796 |
| 2.7734 | 36.0 | 6750 | 0.3345 | 5.2338 |
| 4.2638 | 37.0 | 6937 | 0.4333 | 3.7370 |
| 2.5456 | 38.0 | 7125 | 0.3550 | 4.5389 |
| 5.3244 | 39.0 | 7312 | 0.4132 | 4.0601 |
| 3.9209 | 40.0 | 7500 | 0.4102 | 4.0942 |
| 3.4851 | 41.0 | 7687 | 0.5124 | 2.5265 |
| 1.3821 | 42.0 | 7875 | 0.5595 | 2.0728 |
| 0.7913 | 43.0 | 8062 | 0.5667 | 2.0104 |
| 0.6674 | 44.0 | 8250 | 0.5754 | 2.0220 |
| 0.6202 | 45.0 | 8437 | 0.5837 | 2.0235 |
| 0.4798 | 46.0 | 8625 | 0.5816 | 2.0670 |
| 0.4663 | 47.0 | 8812 | 0.5756 | 2.0748 |
| 4.502 | 48.0 | 9000 | 0.4479 | 3.4224 |
| 2.7893 | 49.0 | 9187 | 0.5287 | 2.3122 |
| 1.8211 | 49.87 | 9350 | 0.4910 | 3.0231 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
thingthatis/stable-diffusion-xl-base-1.0 | thingthatis | "2023-11-10T20:16:48Z" | 6 | 1 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-11-10T20:16:47Z" | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps and what % of steps to be run on each experts (80/20) here
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the unet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO :
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference :
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
kuotient/EEVE-Instruct-Math-10.8B | kuotient | "2024-03-28T12:49:40Z" | 9 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"ko",
"arxiv:2402.14830",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:kuotient/EEVE-Math-10.8B-SFT",
"base_model:merge:kuotient/EEVE-Math-10.8B-SFT",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"base_model:merge:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-27T16:31:19Z" | ---
model-index:
- name: EEVE-Instruct-Math-10.8B
results:
- task:
type: text-generation
dataset:
name: gsm8k-ko
type: gsm8k
metrics:
- name: pass@1
type: pass@1
value: 0.4845
verified: false
base_model:
- yanolja/EEVE-Korean-Instruct-10.8B-v1.0
- kuotient/EEVE-Math-10.8B-SFT
tags:
- merge
license: cc-by-sa-4.0
language:
- ko
---
# EEVE-Instruct-Math-10.8B
The `EEVE-Math` project covers:
- Translation of Orca-Math-200k ([Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf))
- Translation of gsm8k, evaluated with lm_eval
- dare-ties merging with Mergekit ([DARE](https://arxiv.org/abs/2311.03099))
> This model is a merge of EEVE-Math and EEVE-Instruct via dare-ties. The project is a proof of concept showing that, through this process, the usability of the Instruct model can be retained without losing much of the specialized EEVE-Math model's performance.
| Model | gsm8k-ko(pass@1) |
|---|---|
| EEVE(Base) | 0.4049 |
| [EEVE-Math](https://huggingface.co/kuotient/EEVE-Math-10.8B) (epoch 1) | 0.508 |
| EEVE-Math (epoch 2) | **0.539** |
| [EEVE-Instruct](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) | 0.4511 |
| EEVE-Instruct + Math | **0.4845** |
## Merge Details
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) as a base.
### Models Merged
The following models were included in the merge:
* [kuotient/EEVE-Math-10.8B](https://huggingface.co/kuotient/EEVE-Math-10.8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: yanolja/EEVE-Korean-10.8B-v1.0
# no parameters necessary for base model
- model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
parameters:
density: 0.53
weight: 0.6
- model: kuotient/EEVE-Math-10.8B
parameters:
density: 0.53
weight: 0.4
merge_method: dare_ties
base_model: yanolja/EEVE-Korean-10.8B-v1.0
parameters:
int8_mask: true
dtype: bfloat16
```
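Assuming mergekit's standard command-line entry point (`mergekit-yaml`, taken from the mergekit README rather than this card), the merge could plausibly be reproduced with:
```
mergekit-yaml config.yaml ./EEVE-Instruct-Math-10.8B
```
where `config.yaml` contains the YAML above.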
## Evaluation
[gsm8k-ko](https://huggingface.co/datasets/kuotient/gsm8k-ko), kobest
```
git clone https://github.com/kuotient/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .
```
```
lm_eval --model hf \
--model_args pretrained=yanolja/EEVE-Korean-Instruct-2.8B-v1.0 \
--tasks gsm8k-ko \
--device cuda:0 \
--batch_size auto:4
```
| Model | gsm8k(pass@1) | boolq(acc) | copa(acc) | hellaswag(acc) | Overall |
|---|---|---|---|---|---|
| yanolja/EEVE-Korean-10.8B-v1.0 | 0.4049 | - | - | - | - |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 0.4511 | **0.8668** | **0.7450** | 0.4940 | 0.6392 |
| [**EEVE-Math-10.8B**](https://huggingface.co/kuotient/EEVE-Math-10.8B) | **0.5390** | 0.8027 | 0.7260 | 0.4760 | 0.6359 |
| **EEVE-Instruct-Math-10.8B** | 0.4845 | 0.8519 | 0.7410 | **0.4980** | **0.6439** | |
ssunbear/mistral-7b-qlora-arc-reasoning-3.7k-v3 | ssunbear | "2025-03-31T18:12:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-31T18:08:44Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
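In the absence of author-provided code, here is a generic sketch using the repo id from this card; it assumes the checkpoint loads as a standalone causal LM, which is suggested but not confirmed by the `mistral` and `bitsandbytes` tags:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ssunbear/mistral-7b-qlora-arc-reasoning-3.7k-v3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Hypothetical ARC-style prompt; the expected format is not documented.
prompt = "Question: Which gas do plants absorb during photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```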
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linoyts/huggy_dora_v3_unet | linoyts | "2024-03-19T23:29:52Z" | 3 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"dora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-03-19T22:39:44Z" | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- dora
- template:sd-lora
widget:
- text: 'a TOK emoji dressed as yoda'
output:
url:
"image_0.png"
- text: 'a TOK emoji dressed as yoda'
output:
url:
"image_1.png"
- text: 'a TOK emoji dressed as yoda'
output:
url:
"image_2.png"
- text: 'a TOK emoji dressed as yoda'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a TOK emoji
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/huggy_dora_v3_unet
<Gallery />
## Model description
### These are linoyts/huggy_dora_v3_unet LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`huggy_dora_v3_unet.safetensors` here 💾](/linoyts/huggy_dora_v3_unet/blob/main/huggy_dora_v3_unet.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_dora_v3_unet:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/huggy_dora_v3_unet', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a TOK emoji dressed as yoda').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
You should use a TOK emoji to trigger the image generation.
## Details
All [Files & versions](/linoyts/huggy_dora_v3_unet/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
hieroja/TinyLlama-1.1B-dialogsum-finetuned | hieroja | "2025-02-18T16:37:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"dataset:neil-code/dialogsum-test",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-02-18T15:51:20Z" | ---
library_name: transformers
license: mit
datasets:
- neil-code/dialogsum-test
language:
- en
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# TinyLlama-1.1B-dialogsum-finetuned
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Jere Liimatainen
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** MIT
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Dataset used:** neil-code/dialogsum-test
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/jliima/TinyLlama-1.1B-dialogsum-finetuned
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cnatale/mistral_7b_instruct_v0_1_txt_2_sql | cnatale | "2024-01-13T01:32:30Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-13T00:49:27Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral_7b_instruct_v0_1_txt_2_sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_instruct_v0_1_txt_2_sql
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6482
## Model description
More information needed
## Intended uses & limitations
More information needed
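Since this repository holds a PEFT LoRA adapter, one plausible way to run it is to load the base model and attach the adapter. This is a sketch; the `[INST]` prompt shown is an assumption, not taken from the training data:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "cnatale/mistral_7b_instruct_v0_1_txt_2_sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "[INST] Write a SQL query that lists all customers who placed an order in 2023. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```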
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3599 | 0.71 | 10 | 1.0764 |
| 1.0005 | 1.43 | 20 | 0.8726 |
| 0.8508 | 2.14 | 30 | 0.7721 |
| 0.7585 | 2.86 | 40 | 0.7218 |
| 0.7078 | 3.57 | 50 | 0.6825 |
| 0.6413 | 4.29 | 60 | 0.6594 |
| 0.6128 | 5.0 | 70 | 0.6508 |
| 0.5745 | 5.71 | 80 | 0.6482 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
yankeeg0d/alex-lora | yankeeg0d | "2025-02-17T23:48:39Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-17T23:28:44Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: alex
---
# Alex Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `alex` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yankeeg0d/alex-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Recaru/T3Q-ko-solar-dpo-v7.0-Q4_K_M-GGUF | Recaru | "2024-05-12T03:39:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-12T03:38:47Z" | ---
license: apache-2.0
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Recaru/T3Q-ko-solar-dpo-v7.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`chihoonlee10/T3Q-ko-solar-dpo-v7.0`](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Recaru/T3Q-ko-solar-dpo-v7.0-Q4_K_M-GGUF --model t3q-ko-solar-dpo-v7.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Recaru/T3Q-ko-solar-dpo-v7.0-Q4_K_M-GGUF --model t3q-ko-solar-dpo-v7.0.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m t3q-ko-solar-dpo-v7.0.Q4_K_M.gguf -n 128
```
|
segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF | segolilylabs | "2024-01-20T00:01:19Z" | 3,243 | 16 | null | [
"gguf",
"cybersecurity",
"cyber security",
"hacking",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-01-12T02:13:04Z" | ---
license: apache-2.0
tags:
- cybersecurity
- cyber security
- hacking
language:
- en
---
My attempt at making GGUF versions of <a href="https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2">segolilylabs/Lily-Cybersecurity-7B-v0.2</a>
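One possible way to run these files from Python is via llama-cpp-python. This is a sketch; the quant file name pattern is an assumption, so pick an actual file from this repo's Files tab:

```python
from llama_cpp import Llama

# Downloads a matching GGUF from this repo via the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF",
    filename="*Q4_K_M.gguf",  # glob: assumes a Q4_K_M quant exists in the repo
    n_ctx=4096,
)
out = llm("Explain what a SQL injection attack is.", max_tokens=256)
print(out["choices"][0]["text"])
```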
|
ai-apps-prime/ai-nude-generator | ai-apps-prime | "2025-02-21T20:10:21Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-21T20:10:03Z" | ---
license: apache-2.0
---
# 5 Best AI Nude Generators
The best AI nude generators offer realistic and accurate result generation, customization options (age, body type, pose, etc.), fast rendering speed, privacy, and security.
I have tried more than 100 tools in the undress, deep-nude, and AI nude space, and chose these 5 because they meet all of the criteria mentioned above.
## 1. Undress.app
Undress.app is recognized as one of the best AI nude generators available online. Utilizing advanced artificial intelligence technology, it allows users to create unclothed images quickly and efficiently.
The platform is user-friendly, ensuring that even those unfamiliar with such tools can navigate it with ease. With a commitment to user privacy and data security, Undress.app stands out as a trustworthy option for generating NSFW content.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### **Key Features**
- Multiple AI Modes: Users can choose from various undressing modes, including Lingerie, Bikini, and NSFW mode, allowing for a personalized experience.
- High-Quality Results: The AI processes images to deliver high-quality results, ensuring that the generated images are clear and detailed.
- Free Trial Access: New users can sign up and receive free credits to explore the app's features without any financial commitment.
- Privacy Assurance: Undress.app does not save any user data, ensuring that all actions remain confidential and secure.
- Compatibility: The app works with both male and female photos, as well as anime images, providing a wide range of customization options.
- User-Friendly Interface: The platform is designed to be intuitive, making it easy for users to upload images and generate results quickly.
- Regular Updates: The developers frequently update the app to improve functionality and security, ensuring a safe user experience.
### **My Experience**
Using Undress.app was a straightforward and enjoyable experience. After signing up, I was greeted with a clean and intuitive interface that made navigation a breeze.
I selected the bikini mode and uploaded a photo I was allowed to use. Within seconds, the AI processed the image and delivered a high-quality result without any blurriness.
I appreciated the variety of modes available, which allowed me to experiment with different styles. The privacy features gave me peace of mind, knowing that my data was secure and not stored anywhere.
Overall, my experience was positive, and I found the tool to be effective and user-friendly.
### **Pros:**
- Easy to use with a user-friendly interface.
- High-quality image generation with no blur.
- Multiple modes for diverse customization.
- Strong privacy and security measures in place.
- Free trial credits are available for new users.
- Works with various types of images, including anime.
### **Cons:**
- Sign-up is required, which may deter some users.
- Free credits may be limited, requiring users to purchase more for extensive use.
- Results can vary based on the quality of the uploaded image.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Pornx.ai
Pornx.ai is revolutionizing the world of adult content with its cutting-edge AI nude generator. This innovative platform allows users to create stunning, personalized adult images and videos that cater to their unique fantasies.
With a user-friendly interface and a plethora of customization options, Pornx.ai empowers users to unleash their creativity and explore their desires in a safe and imaginative environment.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
- AI Image Generator: Generate your own AI porn images by selecting models, including women, men, or transgender individuals. Customize with various filters, body types, skin tones, hairstyles, outfits, and backgrounds.
- AI Video Generator: Craft personalized videos that reflect your imagination, allowing for a more immersive experience.
- Quality Mode: Enhance your images with the "Quality" feature, which zooms in on details and increases resolution for a top-notch visual experience.
- Custom Pose: Transfer character poses from your uploaded images to the generated images, making storytelling and personal pleasure more engaging.
- In Paint Feature: Modify specific areas of your images by selecting and editing them, allowing for tailored adjustments and enhancements.
- Community Engagement: Join the Pornx.ai Discord community to connect with other users, share experiences, and gain insights into the platform.
- Age Verification: The platform ensures that all users are of legal adult age, maintaining a safe environment for mature content.
- Free and Paid Plans: While the basic features are available for free, users can upgrade to a paid plan for additional benefits and enhanced functionalities.
### **My Experience**
Using Pornx.ai has been an exhilarating journey. The intuitive interface made it easy to navigate through the various features. I was particularly impressed with the AI Image Generator, which allowed me to create images that closely matched my vision.
The customization options were extensive, enabling me to experiment with different models and styles. The Quality Mode truly elevated the visual appeal of my creations, making them look professional and polished. Overall, my experience was enjoyable and fulfilling, as I could explore my creativity without limitations.
### **Pros:**
- User-Friendly Interface: Easy to navigate, even for beginners.
- Extensive Customization: A wide range of options for personalizing images and videos.
- High-Quality Output: The Quality Mode enhances the visual appeal significantly.
- Community Support: Engaging with other users through Discord fosters a sense of belonging.
- Free Access: Basic features are available at no cost, making it accessible to everyone.
### **Cons:**
- Age Restrictions: Users must be over 18, which may limit access for younger audiences.
- Paid Features: Some advanced functionalities require a subscription, which may not be ideal for all users.
- Content Limitations: The platform is designed for adult content, which may not appeal to everyone.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 3. Seduced.ai
Seduced.ai is recognized as one of the leading AI nude generators available today. This innovative platform allows users to create stunning and unique NSFW images and videos effortlessly, without requiring any technical skills.
With a wide array of features and customizable options, Seduced.ai caters to various preferences and fetishes, making it a go-to choice for those looking to explore their fantasies in a safe and private environment.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
- Easy-to-Use Interface: The platform is designed for users of all skill levels, allowing anyone to generate content with just a few clicks.
- Video Generation: Users can create smooth porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
- Mixable Extensions: Seduced.ai allows users to mix up to 8 extensions, enabling the creation of unique images that cannot be found elsewhere.
- Character Reuse: Users can save previously generated characters for reuse in future creations, allowing for infinite scenarios.
- Diverse AI Models: The platform offers a selection of 10 distinct AI models, allowing users to create both realistic and anime-style content.
- Upscaling Options: Users can enhance the resolution of generated images two or three times, adding finer details for a more realistic appearance.
- Privacy Control: Users have the option to keep their generated images and videos private, ensuring discretion.
- Fetish Support: Seduced.ai covers a wide range of fetishes, providing extensions that empower users to produce content beyond typical capabilities.
### **My Experience**
Using Seduced.ai has been a remarkable experience. The user-friendly interface made it easy for me to navigate through the various features. I was particularly impressed by the extensive library of extensions available, which allowed me to mix and match different elements to create unique images.
The ability to generate videos was an added bonus, and I found the quality to be surprisingly high for such a short duration. The option to reuse characters made it easy to create a storyline, enhancing the overall experience.
### **Pros:**
- User-Friendly: No technical skills are required to generate content.
- High-Quality Output: The images and videos produced are of excellent quality.
- Wide Range of Options: Extensive library of extensions and AI models to choose from.
- Privacy Features: Users can keep their creations private.
- Regular Updates: The platform frequently adds new features and extensions.
### **Cons:**
- Subscription Costs: Some users may find the pricing plans to be on the higher side.
- Limited Video Duration: The maximum video length of 6 seconds may not be sufficient for all users.
- Content Restrictions: While the platform supports various fetishes, some niche interests may not be fully covered.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Undress.cc
Undress.cc is recognized as one of the best AI nude generators available today. This innovative platform utilizes advanced artificial intelligence technology to create realistic images of women without clothing.
Designed to be user-friendly and accessible, Undress.cc allows users to explore their fantasies in a safe and private environment. With its intuitive interface and various features, it has gained popularity among users looking for creative ways to generate undressed images.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
- Free Access: Undress.cc offers a free AI undressing tool, allowing users to generate images without any initial cost.
- User-Friendly Interface: The platform is designed to be intuitive, making it easy for anyone to navigate and utilize its features effectively.
- Multiple Modes: Users can choose from different modes, such as 'X-Ray Mode' for deep nude undressing or 'Lingerie Mode' to explore various fantasies.
- Privacy and Security: The app prioritizes user security and confidentiality, ensuring that all generated images and user data remain private.
- Registration Benefits: Upon signing up, users receive free credits to explore the service, including the deep nude functionality.
- Legal Compliance: Undress.cc operates within the bounds of current legal frameworks, ensuring that its services are legitimate and lawful.
- Creative Exploration: The tool provides a unique way to experiment with undressing images while respecting user preferences.
- Continuous Updates: The platform is regularly updated to enhance user experience and incorporate the latest advancements in AI technology.
### **My Experience**
Using Undress.cc was a straightforward and enjoyable experience. After registering on the platform, I was greeted with a clean and intuitive interface that made navigation easy. Uploading a clear image was simple, and I was impressed by the variety of modes available.
I decided to try the 'X-Ray Mode' and was amazed at the realism of the generated images. The process was quick, and I appreciated the privacy measures in place, which made me feel secure while using the app. Overall, my experience with Undress.cc was positive, and I found it to be a valuable tool for creative exploration.
### **Pros:**
- Free access to basic features.
- Intuitive and user-friendly interface.
- Multiple modes for different preferences.
- Strong emphasis on user privacy and security.
- Legal and compliant with current regulations.
### **Cons:**
- Some advanced features may require purchasing credits.
- Limited to images of women, which may not appeal to all users.
- Potential ethical concerns regarding the use of generated images.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 5. Undressai.tools
Undressai.tools is a cutting-edge AI nude generator that utilizes advanced technologies to transform clothed images into realistic nude visuals.
Leveraging deep learning algorithms and sophisticated image processing techniques, this tool offers users a unique and innovative way to explore the artistic potential of AI-generated imagery.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
- Stable Diffusion: This model enhances image generation by producing high-quality, coherent outputs with minimal artifacts, significantly improving realism and detail in the undressed images.
- Generative Adversarial Networks (GANs): GANs power Undressai.tools by utilizing two neural networks to generate highly realistic images of nudity, ensuring lifelike results.
- Deep Learning Models: Sophisticated algorithms analyze clothing patterns and body structures to accurately create undressed versions of subjects, enhancing the overall quality of the output.
- Image Synthesis: AI-driven image synthesis generates realistic skin textures that replace removed clothing, ensuring that the final images appear natural and believable.
- Pose Estimation: Machine learning algorithms track and predict body poses, ensuring anatomically accurate undressing outcomes that respect the original image's context.
- Convolutional Neural Networks (CNNs): CNNs extract key features from input images to guide the undressing process, improving output quality and detail.
- Computer Vision and Image Recognition: These techniques identify and isolate clothing areas, allowing for precise removal and replacement, which is crucial for achieving realistic results.
- Style Transfer: Advanced algorithms ensure that the generated nude images match the original's lighting, shading, and artistic style, maintaining the integrity of the original photograph.
### **My Experience**
Using Undressai.tools has been an intriguing experience. The interface is intuitive, making it easy to upload images and select the areas to modify. I was impressed by the speed at which the tool processed the images and the quality of the results.
The generated nude visuals were remarkably realistic, capturing the essence of the original images while effectively removing clothing. The ability to adjust and refine the output further enhanced my experience, allowing for creative experimentation.
### **Pros:**
- User-Friendly Interface: The platform is easy to navigate, making it accessible for users of all skill levels.
- High-Quality Outputs: The generated images are realistic and detailed, thanks to advanced AI technologies.
- Privacy Focused: All generated images are auto-deleted within 48 hours, ensuring user privacy and data security.
- Versatile Applications: The tool can be used for various purposes, including artistic exploration and personal projects.
### **Cons:**
- Ethical Considerations: Users must be mindful of the ethical implications of generating nude images, particularly regarding consent and privacy.
- Limited Image Formats: The tool currently supports only specific file formats (.jpg, .png, .heic), which may restrict some users.
- Potential Misuse: There is a risk of the technology being misused for inappropriate purposes, necessitating responsible usage guidelines.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## Frequently Asked Questions (FAQS)
### **1. What is AI Nude?**
AI Nude refers to various applications and tools that utilize artificial intelligence to create altered images, specifically by generating realistic nude versions of clothed individuals. These technologies often employ deep learning techniques and generative algorithms, enabling users to manipulate and alter visual content. However, their use has raised significant privacy and ethical concerns, particularly regarding consent and the potential for misuse.
### **2. How Does AI Nude Work?**
AI Nude applications typically use Generative Adversarial Networks (GANs), which consist of two neural networks: a generator that creates images and a discriminator that evaluates their realism. The following steps explain how AI Nude works:
- Data Collection: Large datasets of images train the networks to understand realistic image formation.
- Training Process: The generator produces images while the discriminator assesses them, providing feedback for refinement.
- Iterative Improvement: Over multiple cycles, the generator enhances its capability to create realistic images, ultimately producing the final output.
### **3. What are the Applications of AI Nude Generator?**
AI Nude generators can be used for various applications, including:
- Artistic Exploration: Artists may use AI nude tools to create digital art or explore the representation of human forms.
- Marketing: Certain businesses might utilize AI to produce provocative content for advertising.
- Cyber Harassment: Unfortunately, these tools are also misused for creating non-consensual images leading to harassment or blackmail.
It is crucial to note that while the technology has creative potential, its applications need to be approached with caution due to ethical and legal implications.
### **4. Is there privacy and ethical concerns regarding AI Nude?**
Yes, there are significant privacy and ethical concerns surrounding AI Nude technologies. Here are some key issues:
- Lack of Consent: AI nude generators create images without the subject's permission, violating privacy rights.
- Potential for Misuse: Generated images can be used for harassment, blackmail, or revenge, causing emotional and psychological harm.
- Legal Gaps: Current laws often inadequately address the nuances of digital image manipulation, complicating legal enforcement.
- Impact on Mental Health: Victims of non-consensual image manipulation may experience anxiety, depression, and damage to their personal and professional reputations.
|
jlbaker361/sd-wikiart-lora-balanced25 | jlbaker361 | "2023-12-21T17:07:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-21T16:44:10Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jlbaker361/sd-wikiart-lora-balanced25
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the jlbaker361/wikiart-balanced25 dataset. You can find some example images below.




|
gl198976/mpt-7b-instruct | gl198976 | "2023-05-06T05:52:17Z" | 10 | 1 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-06T05:52:16Z" | ---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
duplicated_from: mosaicml/mpt-7b-instruct
---
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_ (commercial use permitted)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0 (commercial use permitted)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
``` |
deepcode-ai/diffai | deepcode-ai | "2024-07-10T13:31:42Z" | 0 | 0 | null | [
"arxiv:1412.6572",
"arxiv:1810.12715",
"arxiv:1903.12519",
"license:mit",
"region:us"
] | null | "2024-06-14T22:29:06Z" | ---
license: mit
---
DiffAI v3 <a href="https://www.sri.inf.ethz.ch/"><img width="100" alt="portfolio_view" align="right" src="http://safeai.ethz.ch/img/sri-logo.svg"></a>
=============================================================================================================

DiffAI is a system for training neural networks to be provably robust and for proving that they are robust.
Background
----------
By now, it is well known that otherwise working networks can be tricked by clever attacks. For example [Goodfellow et al.](https://arxiv.org/abs/1412.6572) demonstrated a network with high classification accuracy which classified one image of a panda correctly, and a seemingly identical attack picture
incorrectly. Many defenses against this type of attack have been produced, but very few produce networks for which *provably* verifying the safety of a prediction is feasible.
Abstract Interpretation is a technique for verifying properties of programs by soundly overapproximating their behavior. When applied to neural networks, an infinite set (a ball) of possible inputs is passed to an approximating "abstract" network
to produce a superset of the possible outputs from the actual network. Provided an appropriate representation for these sets, demonstrating that the network classifies everything in the ball correctly becomes a simple task. The method used to represent these sets is the abstract domain, and the specific approximations are the abstract transformers.
In DiffAI, the entire abstract interpretation process is programmed using PyTorch so that it is differentiable and can be run on the GPU,
and a loss function is crafted so that low values correspond to inputs which can be proved safe (robust).
What's New In v3?
----------------
* Abstract Networks: one can now customize the handling of the domains on a per-layer basis.
* Training DSL: A DSL has been exposed to allow for custom training regimens with complex parameter scheduling.
* Cross Loss: The box goal now uses the cross entropy style loss by default as suggested by [Gowal et al. 2019](https://arxiv.org/abs/1810.12715)
* Conversion to Onyx: We can now export to the onyx format, and can export the abstract network itself to onyx (so that one can run abstract analysis or training using tensorflow for example).
Requirements
------------
Python 3.6.7, virtualenv, and torch 0.4.1.
Recommended Setup
-----------------
```
$ git clone https://github.com/eth-sri/DiffAI.git
$ cd DiffAI
$ virtualenv pytorch --python python3.6
$ source pytorch/bin/activate
(pytorch) $ pip install -r requirements.txt
```
Note: you need to activate your virtualenv every time you start a new shell.
Getting Started
---------------
DiffAI can be run as a standalone program. To see a list of arguments, type
```
(pytorch) $ python . --help
```
At the minimum, DiffAI expects at least one domain to train with and one domain to test with, and a network with which to test. For example, to train with the Box domain, baseline training (Point) and test against the FGSM attack and the ZSwitch domain with a simple feed forward network on the MNIST dataset (default, if none provided), you would type:
```
(pytorch) $ python . -d "Point()" -d "Box()" -t "PGD()" -t "ZSwitch()" -n ffnn
```
Unless otherwise specified by "--out", the output is logged to the folder "out/".
In the folder corresponding to the experiment that has been run, one can find the saved configuration options in
"config.txt", and a pickled net which is saved every 10 epochs (provided that testing is set to happen every 10th epoch).
To load a saved model, use "--test" as per the example:
```
(pytorch) $ alias test-diffai="python . -d Point --epochs 1 --dont-write --test-freq 1"
(pytorch) $ test-diffai -t Box --update-test-net-name convBig --test PATHTOSAVED_CONVBIG.pynet --width 0.1 --test-size 500 --test-batch-size 500
```
Note that "--update-test-net-name" will create a new model based on convBig and try to use the weights in the pickled PATHTOSAVED_CONVBIG.pynet to initialize that models weights. This is not always necessary, but is useful when the code for a model changes (in components) but does not effect the number or usage of weight, or when loading a model pickled by a cuda process into a cpu process.
The default specification type is the L_infinity Ball specified explicitly by "--spec boxSpec",
which uses an epsilon specified by "--width".
Abstract Networks
-----------------

A crucial point of DiffAI v3 is that how a network is trained and abstracted should be part of the network description itself. In this release, we provide layers that allow one to alter how the abstraction works,
in addition to providing a script for converting an abstract network to onyx so that the abstract analysis might be run in tensorflow.
Below is a list of the abstract layers that we have included.
* CorrMaxPool3D
* CorrMaxPool2D
* CorrFix
* CorrMaxK
* CorrRand
* DecorrRand
* DecorrMin
* DeepLoss
* ToZono
* ToHZono
* Concretize
* CorrelateAll
Training Domain DSL
-------------------
In DiffAI v3, a dsl has been provided to specify arbitrary training domains. In particular, it is now possible to train on combinations of attacks and abstract domains on specifications defined by attacks. Specifying training domains is possible in the command line using ```-d "DOMAIN_INITIALIZATION"```. The possible combinations are the classes listed in domains.py. The same syntax is also supported for testing domains, to allow for testing robustness with different epsilon-sized attacks and specifications.
Listed below are a few examples:
* ```-t "IFGSM(k=4, w=0.1)" -t "ZNIPS(w=0.3)" ``` Will first test with the PGD attack with an epsilon=w=0.1 and, the number of iterations k=4 and step size set to w/k. It will also test with the zonotope domain using the transformer specified in our [NIPS 2018 paper](https://www.sri.inf.ethz.ch/publications/singh2018effective) with an epsilon=w=0.3.
* ```-t "PGD(r=3,k=16,restart=2, w=0.1)"``` tests on points found using PGD with a step size of r*w/k and two restarts, and an attack-generated specification.
* ```-d Point()``` is standard non-defensive training.
* ```-d "LinMix(a=IFGSM(), b=Box(), aw=1, bw=0.1)"``` trains on points produced by pgd with the default parameters listed in domains.py, and points produced using the box domain. The loss is combined linearly using the weights aw and bw and scaled by 1/(aw + bw). The epsilon used for both is the ambient epsilon specified with "--width".
* ```-d "DList((IFGSM(w=0.1),1), (Box(w=0.01),0.1), (Box(w=0.1),0.01))"``` is a generalization of the Mix domain allowing for training with arbitrarily many domains at once weighted by the given values (the resulting loss is scaled by the inverse of the sum of weights).
* ```-d "AdvDom(a=IFGSM(), b=Box())"``` trains using the Box domain, but constructs specifications as L∞ balls containing the PGD attack image and the original image "o".
* ```-d "BiAdv(a=IFGSM(), b=Box())"``` is similar, but creates specifications between the pgd attack image "a" and "o - (a - o)".
One domain we have found particularly useful for training is ```Mix(a=PGD(r=3,k=16,restart=2, w=0.1), b=BiAdv(a=IFGSM(k=5, w=0.05)), bw=0.1)```.
While the above domains are all deterministic (up to gpu error and shuffling orders), we have also implemented nondeterministic training domains:
* ```-d "Coin(a=IFGSM(), b=Box(), aw=1, bw=0.1)"``` is like Mix, but chooses which domain to train a batch with by the probabilities determined by aw / (aw + bw) and bw / (aw + bw).
* ```-d "DProb((IFGSM(w=0.1),1), (Box(w=0.01),0.1), (Box(w=0.1),0.01))"``` is to Coin what DList is to Mix.
* ```-d AdvDom(a=IFGSM(), b=DList((PointB(),1), (PointA(), 1), (Box(), 0.2)))``` can be used to share attack images between multiple training types. Here an attack image "m" is found using PGD, then both the original image "o" and the attack image "m" are passed to DList which trains using three different ways: PointA trains with "o", PointB trains with "m", and Box trains on the box produced between them. This can also be used with Mix.
* ```-d Normal(w=0.3)``` trains using images sampled from a normal distribution around the provided image using standard deviation w.
* ```-d NormalAdv(a=IFGSM(), w=0.3)``` trains using PGD (but this could be an abstract domain) where perturbations are constrained to a box determined by a normal distribution around the original image with standard deviation w.
* ```-d InSamp(0.2, w=0.1)``` uses Inclusion sampling as defined in the ArXiv paper.
There are more domains implemented than listed here, and of course more interesting combinations are possible. Please look carefully at domains.py for default values and further options.
Parameter Scheduling DSL
------------------------
In place of many constants, you can use the following scheduling devices.
* ```Lin(s,e,t,i)``` Linearly interpolates between s and e over t epochs, using s for the first i epochs.
* ```Until(t,a,b)``` Uses a for the first t epochs, then switches to using b (telling b the current epoch starting from 0 at epoch t).
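For instance, a scheduled width can be dropped in anywhere a constant is expected (an illustrative invocation in the style of the earlier examples, not one of the original commands):

```
(pytorch) $ python . -d "Box(w=Lin(0,0.1,150,10))" -t "Box(w=0.1)" -n ffnn
```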
Suggested Training
------------------
```LinMix(a=IFGSM(k=2), b=InSamp(Lin(0,1,150,10)), bw = Lin(0,0.5,150,10))``` is a training goal that appears to work particularly well for CIFAR10 networks.
Contents
--------
* components.py: A high level neural network library for composable layers and operations
* goals.py: The DSL for specifying training losses and domains, and attacks which can be used as a drop in replacement for pytorch tensors in any model built with components from components.py
* scheduling.py: The DSL for specifying parameter scheduling.
* models.py: A repository of models to train with which are used in the paper.
* convert.py: A utility for converting a model with a training or testing domain (goal) into an onyx network. This is useful for exporting DiffAI abstractions to tensorflow.
* \_\_main\_\_.py: The entry point to run the experiments.
* helpers.py: Assorted helper functions. Does some monkeypatching, so you might want to be careful importing our library into your project.
* AllExperimentsSerial.sh: A script which runs the training experiments from the 2019 ArXiv paper from table 4 and 5 and figure 5.
Notes
-----
Not all of the datasets listed in the help message are supported. Supported datasets are:
* CIFAR10
* CIFAR100
* MNIST
* SVHN
* FashionMNIST
Unsupported datasets will not necessarily throw errors.
Reproducing Results
-------------------
[Download Defended Networks](https://www.dropbox.com/sh/66obogmvih79e3k/AACe-tkKGvIK0Z--2tk2alZaa?dl=0)
All training runs from the paper can be reproduced as by the following command, in the same order as Table 6 in the appendix.
```
./AllExperimentsSerial.sh "-t MI_FGSM(k=20,r=2) -t HBox --test-size 10000 --test-batch-size 200 --test-freq 400 --save-freq 1 --epochs 420 --out all_experiments --write-first True --test-first False"
```
The training schemes can be written as follows (the names differ slightly from the presentation in the paper):
* Baseline: LinMix(a=Point(), b=Box(w=Lin(0,0.031373,150,10)), bw=Lin(0,0.5,150,10))
* InSamp: LinMix(a=Point(), b=InSamp(Lin(0,1,150,10)), bw=Lin(0,0.5, 150,10))
* InSampLPA: LinMix(a=Point(), b=InSamp(Lin(0,1,150,20), w=Lin(0,0.031373, 150, 20)), bw=Lin(0,0.5, 150, 20))
* Adv_{1}ISLPA: LinMix(a=IFGSM(w=Lin(0,0.031373,20,20), k=1), b=InSamp(Lin(0,1,150,10), w=Lin(0,0.031373,150,10)), bw=Lin(0,0.5,150,10))
* Adv_{3}ISLPA: LinMix(a=IFGSM(w=Lin(0,0.031373,20,20), k=3), b=InSamp(Lin(0,1,150,10), w=Lin(0,0.031373,150,10)), bw=Lin(0,0.5,150,10))
* Baseline_{18}: LinMix(a=Point(), b=InSamp(Lin(0,1,200,40)), bw=Lin(0,0.5,200,40))
* InSamp_{18}: LinMix(a=IFGSM(w=Lin(0,0.031373,20,20)), b=InSamp(Lin(0,1,200,40)), bw=Lin(0,0.5,200,40))
* Adv_{5}IS_{18}: LinMix(b=InSamp(Lin(0,1,200,40)), bw=Lin(0,0.5, 200, 40))
* BiAdv_L: LinMix(a=IFGSM(k=2), b=BiAdv(a=IFGSM(k=3, w=Lin(0,0.031373, 150, 30)), b=Box()), bw=Lin(0,0.6, 200, 30))
To test a saved network as in the paper, use the following command:
```
python . -D CIFAR10 -n ResNetLarge_LargeCombo -d Point --width 0.031373 --normalize-layer True --clip-norm False -t 'MI_FGSM(k=20,r=2)' -t HBox --test-size 10000 --test-batch-size 200 --epochs 1 --test NAMEOFSAVEDNET.pynet
```
About
-----
* DiffAI is now on version 3.0.
* This repository contains the code used for the experiments in the [2019 ArXiV Paper](https://arxiv.org/abs/1903.12519).
* To reproduce the experiments from the 2018 ICML paper [Differentiable Abstract Interpretation for Provably Robust Neural Networks](https://files.sri.inf.ethz.ch/website/papers/icml18-diffai.pdf), one must download the [source code for Version 1.0](https://github.com/eth-sri/diffai/releases/tag/v1.0)
* Further information and related projects can be found at [the SafeAI Project](http://safeai.ethz.ch/)
* [High level slides](https://files.sri.inf.ethz.ch/website/slides/mirman2018differentiable.pdf)
Citing This Framework
---------------------
```
@inproceedings{mirman2018differentiable,
title={Differentiable Abstract Interpretation for Provably Robust Neural Networks},
author={Mirman, Matthew and Gehr, Timon and Vechev, Martin},
booktitle={International Conference on Machine Learning (ICML)},
year={2018},
url={https://www.icml.cc/Conferences/2018/Schedule?showEvent=2477},
}
```
Contributors
------------
* [Matthew Mirman](https://www.mirman.com) - [email protected]
* [Gagandeep Singh](https://www.sri.inf.ethz.ch/people/gagandeep) - [email protected]
* [Timon Gehr](https://www.sri.inf.ethz.ch/tg.php) - [email protected]
* Marc Fischer - [email protected]
* [Martin Vechev](https://www.sri.inf.ethz.ch/vechev.php) - [email protected]
License and Copyright
---------------------
* Copyright (c) 2018 [Secure, Reliable, and Intelligent Systems Lab (SRI), ETH Zurich](https://www.sri.inf.ethz.ch/)
* Licensed under the [MIT License](https://opensource.org/licenses/MIT) |
Best000/92ecbafd-0576-4800-85ec-6323837978de | Best000 | "2025-02-13T05:32:02Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | "2025-02-13T05:11:33Z" | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 92ecbafd-0576-4800-85ec-6323837978de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 92ecbafd-0576-4800-85ec-6323837978de
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheMindExpansionNetwork/CARA-Sage-24B | TheMindExpansionNetwork | "2025-04-09T08:19:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"instruct",
"fine-tuned",
"emotional-support",
"relational-ai",
"coaching",
"therapy",
"conflict-resolution",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-09T07:35:52Z" | ---
library_name: transformers
tags:
- mistral
- llama-factory
- instruct
- fine-tuned
- emotional-support
- relational-ai
- coaching
- therapy
- conflict-resolution
license: apache-2.0
---
# 💬 CARA: Cognitive Alignment & Relational Assistant

*Powered by Mistral-Small-24B-Instruct-2501*

CARA is a state-of-the-art, instruction-tuned language model designed for navigating real-life emotional and relational scenarios. It has been fine-tuned specifically on emotionally complex datasets involving partner conflict, crisis communication, values clarification, and trauma-aware conversations.

This release of CARA is tailored for:

- 💔 Conflict resolution & reflection
- 💞 Emotional coaching & empathy modeling
- 🧠 Belief-bias unpacking
- 📋 Session note-taking & feedback loops
- 🧘 Relationship support for therapists & coaches

## 🧠 Why CARA?

CARA helps both professionals and individuals explore difficult conversations and decisions, including:

- "Should I stay or leave this relationship?"
- "How do I rebuild trust after betrayal?"
- "What values guide my emotional boundaries?"
- "What are my fears vs. my actual beliefs?"

CARA doesn't just give answers — it helps people align their thoughts, feelings, and values.

## 🔧 Model Details

| Field | Info |
|-------|------|
| Base Model | mistralai/Mistral-Small-24B-Base-2501 |
| Fine-Tuned On | Emotional dialogue, relationship conflict, coaching logs, session data |
| Context Window | 32,000 tokens |
| Parameter Count | 24 Billion |
| Quantized Capable | Runs on a single RTX 4090 or 32GB Mac with Q4/Q8 inference |
| License | Apache 2.0 |
| Tokenizer | Tekken (131k vocab) |

## 🧬 Dataset Sources

The CARA fine-tune was built from hand-curated, anonymized, and synthetic sets derived from:

- 🌱 Realistic relationship coaching simulations
- 🧾 Couples therapy case notes (synthetic)
- 🔄 Conflict-resolution roleplay transcripts
- 🧠 Journaling & belief-challenging prompts
- 💬 Therapeutic self-reflection templates

All data was cleaned, anonymized, and formatted using [INST] blocks per Mistral's V7-Tekken template.

## 💡 Use Cases

### ✅ Direct Use

- Digital therapy companions
- Relationship coaching assistant
- Reflection prompts for journalers
- AI sidekick for therapists taking notes

### 🔄 Integrated Use

- Part of AutoGen agent workflows
- Session archiving and summary pipelines
- Integrated calendar + feedback loop for mental health pros

### 🚫 Out-of-Scope Use

- Legal or medical advice
- Use without human oversight in real-time crisis

## 📊 Evaluation Summary

| Task | Score |
|------|-------|
| MMLU 5-shot (reasoning) | 0.663 |
| HumanEval Pass@1 (code baseline) | 0.848 |
| Math-Instruct | 0.706 |
| Instruction Following (Wildbench) | 52.27 |

Benchmark results align with strong reasoning and response clarity under emotionally-loaded prompts.

## 🧪 Example Prompt

```text
[INST]
My girlfriend and I had a fight because I was late. She called the police and said I hit her, which I didn't. I found out she pawned my tools and spent the money gambling. I feel betrayed and stuck. What should I think about this?
[/INST]
```

→ CARA responds by breaking down:

- What happened — the facts
- How you might be feeling
- What beliefs are influencing your perspective
- What your fears are — and how they affect your choices
- What your values might suggest you do next

## 🧩 System Prompt (Recommended)

```text
You are CARA, a Cognitive Alignment & Relational Assistant designed to help humans explore thoughts, emotions, and beliefs during difficult moments. You do not provide legal advice. You guide users through reflection, not direction. Ask clarifying questions when needed and speak with empathy.
```

## ⚙️ How to Use

### Transformers

```python
from transformers import pipeline

chatbot = pipeline(
    "text-generation",
    model="your-name/CARA-Mistral-24B-Instruct",
    max_new_tokens=512,
    torch_dtype="auto"
)

chatbot([
    {"role": "user", "content": "I'm thinking of ending a relationship that feels toxic. I'm scared of being alone. Help me think this through."}
])
```

## 🌍 Environmental Impact (Estimate)

- Compute: 3500 A100 hours (fp16 mixed precision)
- CO2 Emissions: ~1.4 tons
- Frameworks: vLLM, Transformers, Ollama
- Optimized For: Q4_K_M / Q8_0 / fp16

## 🖊 Citation

```bibtex
@misc{cara2025,
  author = {MindExpander & Team},
  title = {CARA - Cognitive Alignment & Relational Assistant (Mistral-24B Fine-tune)},
  year = 2025,
  url = {https://huggingface.co/your-name/CARA}
}
```

## 👥 Contact

- Maintainer: MindExpander
- Support & Docs: Coming soon at [ProjectCARA.ai]
- License: Apache 2.0
|
idoru/jetmoe-8b-MyRus-kto | idoru | "2024-04-19T14:01:19Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"jetmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-19T13:57:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
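A minimal, untested sketch of one way to load this checkpoint. It assumes a transformers release with JetMoE support and that the tokenizer ships a chat template; adjust if either assumption fails:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "idoru/jetmoe-8b-MyRus-kto"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated portion.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```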
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mo137/FLUX.1-dev-bf16-gguf | mo137 | "2024-08-24T11:54:01Z" | 100 | 2 | gguf | [
"gguf",
"text-to-image",
"image-generation",
"flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:quantized:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-08-22T20:01:39Z" | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: gguf
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
quantized_by: mo137
tags:
- text-to-image
- image-generation
- flux
---
# ⚠️ bf16 GGUF Flux is not yet supported by any UI like ComfyUI or Forge! ⚠️
Flux.1-dev in **bfloat16**, exactly the same tensors as in black-forest-labs' original safetensors.
AFAIK no inference software supports bf16 FLUX yet, but maybe this file can help.
I have no clue if it would improve anything, but bf16 → fp16 conversion is lossy, so... maybe?
This is a direct GGUF conversion of [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main).
As this is a format conversion, not a finetune, all the same restrictions/original license terms still apply.
The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node.
Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.
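Until UI support lands, you can at least sanity-check the file by reading its tensor metadata with the `gguf` Python package — a minimal sketch, where the local filename is an assumption:

```python
# Sketch: list tensor names and dtypes in the GGUF file.
# Requires `pip install gguf`; the filename below is an assumption.
from gguf import GGUFReader

reader = GGUFReader("flux1-dev-bf16.gguf")
for tensor in reader.tensors[:10]:  # the first few tensors are enough to verify
    print(tensor.name, tensor.tensor_type, tensor.shape)
```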
Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.
(Model card mostly copied from [city96/FLUX.1-dev-gguf](https://huggingface.co/city96/FLUX.1-dev-gguf) - which contains conventional and useful GGUF files.)
|
Sociovestix/lenu_NL | Sociovestix | "2024-06-05T16:23:18Z" | 140 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-14T14:15:53Z" | ---
widget:
- text: Acht Acht B.V.
- text: Stichting Helison
- text: Hayman
- text: Melkveebedrijf Leferink
- text: Jointly
- text: Priority Shipping C.V.
- text: Hoofd en Hart Bewindvoering
- text: Dekker Kids Beheer N.V.
- text: PCI H. Bartholomeus
- text: AEGON Equity Europe Beta Pool (N)
- text: ScaleX Ventures Fund I Coöperatief U.A.
- text: Global Emerging Markets Equity Pool
- text: Gemeente Groningen
- text: Onderlinge Verzekeringmaatschappij "SOM" U.A.
- text: Vereniging van Appartementseigenaars "Henry Dunantpark III"
- text: Ferrovial Construction International SE
- text: Rederij BigLift Barentsz
model-index:
- name: Sociovestix/lenu_NL
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: lenu
type: Sociovestix/lenu
config: NL
split: test
revision: f4d57b8d77a49ec5c62d899c9a213d23cd9f9428
metrics:
- type: f1
value: 0.9814439261813019
name: f1
- type: f1
value: 0.7083817575469042
name: f1 macro
args:
average: macro
---
# LENU - Legal Entity Name Understanding for Netherlands
A BERT (multilingual uncased) model fine-tuned on Dutch legal entity names (jurisdiction NL) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) System with the goal to detect [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Model Description
<!-- Provide a longer summary of what this model is. -->
The model has been created as part of a collaboration between the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com) with the goal of exploring how machine learning can support the detection of the ELF Code based solely on an entity's legal name and legal jurisdiction.
See also the open-source Python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Dutch legal entities and ELF Codes within the jurisdiction "NL".
- **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix)
- **License:** Creative Commons (CC0) license
- **Finetuned from model [optional]:** bert-base-multilingual-uncased
- **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An entity's legal form is a crucial component when verifying and screening organizational identity.
The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data.
The jurisdiction-specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from
GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks,
investment firms, corporations, governments, and other large organizations to retrospectively analyze
their master data, extract the legal form from the unstructured text of the legal name and
uniformly apply an ELF code to each entity type, according to the ISO 20275 standard.
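As a concrete sketch, the checkpoint can be queried like any Hub text-classification model; the example names come from this card's widget, and the returned label is an ELF code:

```python
from transformers import pipeline

# Sketch: predict ELF codes from Dutch legal entity names.
classifier = pipeline("text-classification", model="Sociovestix/lenu_NL")
print(classifier("Acht Acht B.V."))      # e.g. [{'label': '<ELF code>', 'score': ...}]
print(classifier("Stichting Helison"))
```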
# Licensing Information
This model, which is trained on LEI data, is available under Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
# Recommendations
Users should always consider the score of the suggested ELF Codes. For low score values, it may be necessary to manually review the affected entities. |
visdata/py4 | visdata | "2025-01-27T08:53:23Z" | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-27T08:47:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shadowsemluz/my_awesome_model | Shadowsemluz | "2024-03-07T14:48:28Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-07T12:50:49Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2303
- Accuracy: 0.9324
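A minimal inference sketch — note that the label semantics depend on the undocumented training data, so they are an assumption:

```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint as a text classifier.
# Label meanings are not documented in this card.
classifier = pipeline("text-classification", model="Shadowsemluz/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```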
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.225 | 1.0 | 1563 | 0.2050 | 0.921 |
| 0.1446 | 2.0 | 3126 | 0.2303 | 0.9324 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
keshan/Pyramids | keshan | "2023-01-17T03:17:30Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-01-17T03:16:04Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: keshan/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ZinebSN/whisper-small-swedish-Test-5it | ZinebSN | "2022-12-09T13:09:00Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-09T11:43:14Z" | ---
language:
- sv
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small hi - Swedish 2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: 'config: sv, split: test'
metrics:
- name: Wer
type: wer
value: 62.55007932026565
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small hi - Swedish 2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4534
- Wer: 62.5501
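For inference, the checkpoint loads with the standard speech-recognition pipeline — a sketch, assuming a local audio file:

```python
from transformers import pipeline

# Sketch: transcribe Swedish speech; "sample.wav" is an assumed local file.
asr = pipeline("automatic-speech-recognition", model="ZinebSN/whisper-small-swedish-Test-5it")
print(asr("sample.wav")["text"])
```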
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 5.1651 | 0.01 | 5 | 4.4534 | 62.5501 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
DreadPoor/Testy4 | DreadPoor | "2025-04-05T12:57:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:DreadPoor/Testy2",
"base_model:merge:DreadPoor/Testy2",
"base_model:DreadPoor/testy",
"base_model:merge:DreadPoor/testy",
"base_model:DreadPoor/testy3",
"base_model:merge:DreadPoor/testy3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T12:52:51" | |
haryoaw/cola_Qwen-Qwen2.5-7B_5_0.70 | haryoaw | "2025-01-29T18:10:48Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-29T18:06:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tttx/models-feb18-block-mask-step1 | tttx | "2025-02-18T13:04:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:tttx/block_mask_buffer",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | null | "2025-02-18T12:24:10Z" | ---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tttx/block_mask_buffer
model-index:
- name: models-feb18-block-mask-step1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models-feb18-block-mask-step1
This model is a fine-tuned version of [tttx/sft-32b-020925-19k-5ep](https://huggingface.co/tttx/sft-32b-020925-19k-5ep) on the tttx/block_mask_buffer dataset.
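This repository holds a LoRA adapter rather than full weights, so it must be attached to a base checkpoint. A loading sketch — the base name comes from this card's tags (the sentence above names a different parent, so treat the choice as an assumption), and the dtype/device settings are assumptions too:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch: attach the LoRA adapter to its base model. A 32B base needs
# substantial GPU memory; dtype and device_map are assumptions.
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tttx/models-feb18-block-mask-step1")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
```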
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 800875
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
adammandic87/9a1e39bd-35cc-4f0f-9ddb-0438613807a9 | adammandic87 | "2025-01-28T05:55:44Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | "2025-01-28T05:54:11Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a1e39bd-35cc-4f0f-9ddb-0438613807a9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 94ffa3eaa02f0f89_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/94ffa3eaa02f0f89_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/9a1e39bd-35cc-4f0f-9ddb-0438613807a9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/94ffa3eaa02f0f89_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d3379fa4-7a55-407e-8f15-7b0aefbda53d
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d3379fa4-7a55-407e-8f15-7b0aefbda53d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9a1e39bd-35cc-4f0f-9ddb-0438613807a9
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 6.1291 |
| 6.0223 | 0.0189 | 13 | 5.5134 |
| 5.9308 | 0.0377 | 26 | 5.3147 |
| 5.0601 | 0.0566 | 39 | 5.2207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
andyjoecn/ppo-Huggy | andyjoecn | "2025-01-27T07:26:53Z" | 37 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2025-01-27T07:26:48Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: andyjoecn/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zelk12/MT1-Gen5-MAG-gemma-2-9B | zelk12 | "2024-12-24T17:48:31Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B",
"base_model:merge:zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B",
"base_model:zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B",
"base_model:merge:zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-24T17:42:00Z" | ---
base_model:
- zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B
- zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B](https://huggingface.co/zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B)
* [zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B](https://huggingface.co/zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B
- model: zelk12/MT1-Gen5-GP-gemma-2-S4S2-9B
merge_method: slerp
base_model: zelk12/MT1-Gen5-MA-gemma-2-S2S4-9B
dtype: bfloat16
parameters:
t: 0.25
```
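The merged checkpoint loads like any other Gemma 2 model — a sketch; bfloat16 matches the `dtype` in the config above, while device placement is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the merge result produced by the config above.
model = AutoModelForCausalLM.from_pretrained(
    "zelk12/MT1-Gen5-MAG-gemma-2-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("zelk12/MT1-Gen5-MAG-gemma-2-9B")
```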
|
ALIN-LLM/llama-3.2-1b-instruct-gsm240k-epoch1-lr1e-4-v1 | ALIN-LLM | "2025-01-26T11:56:55Z" | 278 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-26T11:52:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
w11wo/javanese-bert-small | w11wo | "2023-05-08T10:03:38Z" | 29 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"javanese-bert-small",
"jv",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: jv
tags:
- javanese-bert-small
license: mit
datasets:
- wikipedia
widget:
- text: "Aku mangan sate ing [MASK] bareng konco-konco"
---
## Javanese BERT Small
Javanese BERT Small is a masked language model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English BERT model](https://huggingface.co/bert-base-uncased) and was later fine-tuned on the Javanese dataset. It achieved a perplexity of 22.00 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base BERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------|----------|----------------|-------------------------------------|
| `javanese-bert-small` | 110M | BERT Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.116      | 3.091      | 22.00      | 2:07:42    |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast
pretrained_name = "w11wo/javanese-bert-small"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual text. Additionally, biases present in the Wikipedia articles may carry over into this model's outputs.
## Author
Javanese BERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
Raniahossam33/qwen2.5-7b-instruct-ditto-Egypt-food-sap1-3-custom | Raniahossam33 | "2025-04-10T10:07:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-10T10:07:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Setpember/Jon_GPT2L_DPO_3props_epi_point1 | Setpember | "2025-03-25T21:03:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T21:01:25Z" | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jacobwoerther/VikingGirl | jacobwoerther | "2025-03-18T15:55:55Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-03-18T15:55:17Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1x1 girl.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: VkngGrl
license: apache-2.0
---
# VikingGirl
<Gallery />
## Trigger words
You should use `VkngGrl` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/jacobwoerther/VikingGirl/tree/main) them in the Files & versions tab.
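A usage sketch with diffusers — dtype, device, and the prompt are assumptions; remember to include the trigger word:

```python
import torch
from diffusers import FluxPipeline

# Sketch: apply this LoRA on top of FLUX.1-dev. bfloat16 and CUDA
# placement are assumptions; the trigger word goes in the prompt.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("jacobwoerther/VikingGirl")
image = pipe("VkngGrl, portrait of a viking woman, cinematic light").images[0]
image.save("viking_girl.png")
```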
|
Helsinki-NLP/opus-mt-toi-fr | Helsinki-NLP | "2023-08-16T12:07:15Z" | 131 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-toi-fr
* source languages: toi
* target languages: fr
* OPUS readme: [toi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.eval.txt)
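A minimal inference sketch with the transformers translation pipeline (the example sentence is an assumed Tonga greeting):

```python
from transformers import pipeline

# Sketch: translate Tonga (toi) to French with this checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-toi-fr")
print(translator("Mwabuka buti?")[0]["translation_text"])
```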
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fr | 26.5 | 0.432 |
|
av-generation/t5-small-end2end-ae-110k | av-generation | "2024-05-30T10:09:00Z" | 116 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-30T10:08:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
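The card leaves this section blank. As a generic, hedged sketch for a T5-style text2text checkpoint (the `generate questions:` task prefix is a guess, since the actual input format is undocumented):

```python
# Minimal sketch for this seq2seq checkpoint; the task prefix and input
# text are placeholders, as the card does not document the expected format.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "av-generation/t5-small-end2end-ae-110k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("generate questions: The Nile is a river in Africa.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```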
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quantumaikr/QuantumLM-7B | quantumaikr | "2023-07-26T17:52:17Z" | 1,516 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-22T13:33:56Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# QuantumLM
## Model Description
`QuantumLM` is a Llama2 7B model fine-tuned on a Wizard-Orca style dataset
## Usage
Start chatting with `QuantumLM-7B` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("quantumaikr/QuantumLM-7B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
|
hadihasan/urology_model | hadihasan | "2024-12-05T23:18:06Z" | 163 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-05T23:17:10Z" | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hadihasan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
huggingtweets/seanmombo | huggingtweets | "2022-03-23T16:22:13Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/seanmombo/1648052490598/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1494366913090273285/lmJtNNT2_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mo bombo</div>
<div style="text-align: center; font-size: 14px;">@seanmombo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mo bombo.
| Data | mo bombo |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 5 |
| Short tweets | 560 |
| Tweets kept | 2684 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bl9qwdw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @seanmombo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/seanmombo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ClarenceDan/948980b4-37ad-4e66-ba60-772247c7e05b | ClarenceDan | "2025-03-04T19:37:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-03-04T19:13:49Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 948980b4-37ad-4e66-ba60-772247c7e05b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b9118d2413e0f511_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b9118d2413e0f511_train_data.json
type:
field_input: abstract
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/948980b4-37ad-4e66-ba60-772247c7e05b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b9118d2413e0f511_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 070c00b0-f12a-407e-98d4-b34597270eb0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 070c00b0-f12a-407e-98d4-b34597270eb0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 948980b4-37ad-4e66-ba60-772247c7e05b
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0008 | 3 | nan |
| 0.0 | 0.0015 | 6 | nan |
| 0.0 | 0.0023 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lukahh/cultureclip_lora_0310_ablation2 | lukahh | "2025-03-11T02:27:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/clip-vit-base-patch32",
"base_model:adapter:openai/clip-vit-base-patch32",
"region:us"
] | null | "2025-03-10T17:15:23Z" | ---
library_name: peft
base_model: openai/clip-vit-base-patch32
tags:
- generated_from_trainer
model-index:
- name: cultureclip_lora_0310_ablation2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cultureclip_lora_0310_ablation2
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10.0
### Training results
### Framework versions
- PEFT 0.14.1.dev0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
EliasNsilva/pytorch_DRL | EliasNsilva | "2023-10-22T15:09:46Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-22T15:09:35Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pytorch_DRL
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jorge-henao/bloomz-7b1-mt-ft-ask2democracy-cqa-salud-esco | jorge-henao | "2023-04-01T17:44:20Z" | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2023-04-01T15:59:55Z" | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloomz-7b1-mt-ft-ask2democracy-cqa-salud-esco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-7b1-mt-ft-ask2democracy-cqa-salud-esco
This model is a fine-tuned version of [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
NastyaKorneeva/my_awesome_wnut_model1 | NastyaKorneeva | "2024-04-20T20:36:34Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:cointegrated/rubert-tiny2",
"base_model:finetune:cointegrated/rubert-tiny2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-18T13:12:30Z" | ---
license: mit
base_model: cointegrated/rubert-tiny2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model1
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6765
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 25 | 0.9131 | 0.0 | 0.0 | 0.0 | 0.9046 |
| No log | 2.0 | 50 | 0.6765 | 0.0 | 0.0 | 0.0 | 0.9050 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Guilherme34/Samantha-pygmalion-mistral-7b | Guilherme34 | "2024-03-11T00:20:15Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Delcos/Mistral-Pygmalion-7b",
"base_model:adapter:Delcos/Mistral-Pygmalion-7b",
"region:us"
] | null | "2024-03-11T00:19:41Z" | ---
library_name: peft
base_model: Delcos/Mistral-Pygmalion-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
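The card leaves this section blank, but since this is a PEFT adapter for Delcos/Mistral-Pygmalion-7b, a minimal loading sketch would presumably look like this (untested assumption):

```python
# Minimal sketch: load the PEFT adapter on top of its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Delcos/Mistral-Pygmalion-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Guilherme34/Samantha-pygmalion-mistral-7b")
```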
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 |
nakayama/HonyararaMix | nakayama | "2023-03-09T02:05:40Z" | 0 | 26 | null | [
"stable-diffusion",
"text-to-image",
"safetensors",
"dataset:nakayama/DeDeDeDataset",
"license:other",
"region:us"
] | text-to-image | "2023-03-01T16:43:48Z" | ---
language:
- en
license: other
tags:
- stable-diffusion
- text-to-image
- safetensors
datasets: nakayama/DeDeDeDataset
---
# Please review the license terms below carefully before use.
HonyararaMix is a Stable Diffusion 2.1 model (768-resolution version) tuned to make anime-style characters easier to generate.
It is based on the base model [Illuminati Diffusion 1.0](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0) merged with [Waifu Diffusion 1.5 Beta](https://huggingface.co/waifu-diffusion/wd-1-5-beta), [Waifu Diffusion 1.5 Beta2](https://huggingface.co/waifu-diffusion/wd-1-5-beta2), [Replicant-V1.0](https://huggingface.co/gsdf/Replicant-V1.0), and [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1),
then adjusted with additional training on the [DeDeDeDataset](https://huggingface.co/datasets/nakayama/DeDeDeDataset).
The old HonyararaMix had licensing problems; this model resolves them.
For prompts and bad (negative) prompts, the documentation prepared for the two base models, as well as their TI embeddings, also work with this model.
For WD, see [here](https://cafeai.notion.site/WD-1-5-232cfa9443fc4661b469e7d1df8b7dd8). For Illuminati Diffusion, the Nfixer embedding seems to work quite well.
# Examples
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image01.png" style="max-width:1000px;" width="75%"/>
```
best quality, masterpiece, great nature and secret garden with anime 1girl with twintail green hair and green eye and beautiful iris and (((from behind)))
Negative prompt: nfixer,3d, flat shading, flat shadow, bad anatomy, bad hands, missing fingers, extra digit, fewer digits looking at viewer
Steps: 28, Sampler: DDIM, CFG scale: 9, Seed: 43662843, Size: 1024x768, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image02.png" style="max-width:1000px;" width="75%"/>
```
newest, masterpiece, exceptional, anime game cg of cute cute cute 1girl sitting on a bench in harajuku park with detailed wavy pink hair pink and detailed yellow hair
Negative prompt: nfixer, 3d, flat shading, flat shadow, inaccurate limb, bad hands, missing fingers, extra digit, fewer digits
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 4179369813, Size: 768x1152, Model hash: 4dbf24eef6, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image03.png" style="max-width:1000px;" width="75%"/>
```
newest, masterpiece, exceptional, anime game cg of kawaii samurai 1girl, fighting form and (((intricate hands))), fist and dynamic angle with detailed pink hair and detailed yellow eye and beautiful iris in japanese background
Negative prompt: nfixer, 3d, flat shading, flat shadow, ((bad hands, missing fingers, extra digit, fewer digits))
Steps: 16, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2246223688, Size: 768x1152, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image04.png" style="max-width:1000px;" width="75%"/>
```
((masterpiece, best quality)),best aesthetic, 1girl, solo, black skirt, blue eyes, guitar, headphones, instrument, long hair, music, one side up, pink hair, playing guiter, pleated skirt, black shirt, indoors
Negative prompt: nfixer,missing finger, extra digits, fewer digits,((mutated hands and fingers)), 3d, flat shadow
Steps: 16, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3689716537, Size: 768x1152, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image05.png" style="max-width:1000px;" width="75%"/>
```
newest, masterpiece, exceptional, anime game cg of teenage 1boy wizard bust shot casting fire magic spell in New York City, detailed eye and beautiful iris,picturesque, golden hour, dynamic pose
Negative prompt: nfixer, 3d, flat shading, flat shadow, inaccurate limb, bad hands, missing fingers, extra digit, fewer digits
Steps: 25, Sampler: Euler a, CFG scale: 7.5, Seed: 1414427875, Size: 768x1152, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/HonyararaMix/resolve/main/img/image06.png" style="max-width:1000px;" width="75%"/>
```
newest, masterpiece, exceptional, anime game cg of samurai (((((old man))))), fighting pose, dynamic angle, fist with detailed pink hair and detailed yellow eye and beautiful iris in japanese background
Negative prompt: nfixer, 3d, flat shading, flat shadow, inaccurate limb, bad hands, missing fingers, extra digit, fewer digits
Steps: 25, Sampler: DDIM, CFG scale: 9, Seed: 3970412380, Size: 768x1152, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
# Merge and Training Procedure
1. Add-difference merge Waifu Diffusion 1.5 Beta1 into Illuminati Diffusion 1.0
| Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name |
| --- | --- | --- | --- | --- |
| Add Difference @ 0.6 | Illuminati Diffusion 1.0 | Waifu Diffusion 1.5 Beta1 | Stable Diffusion v2-1_768-ema-pruned | HonyararaMix_pre1 |
2. Train HonyararaMix_pre1 on [a self-prepared set of images generated with other diffusion models](https://huggingface.co/datasets/nakayama/DeDeDeDataset)
The services/models used to prepare the images were SD2.1, Novel AI, WD1.3/1.4, CoolJapan Diffusion 2.1, and Dreamlike Photoreal 2.0.
The software used was the [kohya-ss fine-tuning scripts](https://github.com/kohya-ss/sd-scripts).
About 30,000 images in total (including flipped copies) were trained at a learning rate of 5e-6 with the settings below;
after trial and error, the checkpoint at roughly epoch 2 was taken as HonyararaMix_pre2.
```
accelerate launch --num_cpu_threads_per_process 1 fine_tune.py
--pretrained_model_name_or_path=Honyarara_pre1.safetensors
--in_json meta_lat.json --train_data_dir=train_data
--output_dir=fine_tuned
--shuffle_caption
--train_batch_size=4
--learning_rate=5e-6
--max_train_steps=60000
--v2
--v_parameterization
--use_8bit_adam
--xformers
--mixed_precision=bf16
--save_every_n_epochs=2
--save_precision=float
--max_token_length=150
--seed=42
```
3. Weighted-sum merge Honyarara_pre2 into Honyarara_pre1
| Interpolation Method | Primary Model | Secondary Model | Merge Name |
| --- | --- | --- | --- |
| Weighted Sum @ 0.5 | HonyararaMix_pre1 | HonyararaMix_pre2 | HonyararaMix_pre3 |
4. Layer-wise (block-weighted) merge Illuminati Diffusion 1.0 into HonyararaMix_pre3
The merge script used was [Super Merger](https://github.com/hako-mikan/sd-webui-supermerger).
| Model: A | Model: B | Weight | Merge Name |
| --- | --- | --- | --- |
| HonyararaMix_pre3 | Illuminati Diffusion 1.0 | 1.0,0.45,0.0,0.0,0.0,0.0,0.0,0.1,0.3,0.5,0.7,0.9,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.1,0.3,0.5,0.7,0.9,1.0 | HonyararaMix_pre4|
5. Add-difference merge Waifu Diffusion 1.5 Beta2 Aesthetic into HonyararaMix_pre4
| Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name |
| --- | --- | --- | --- | --- |
| Add Difference @ 0.5 | HonyararaMix_pre4 | WD 1.5 Beta 2 Aesthetic | WD 1.5 Beta 2 | HonyararaMix_pre5 |
6. Weighted-sum merge Replicant-V1.0 into HonyararaMix_pre5
| Interpolation Method | Primary Model | Secondary Model | Merge Name |
| --- | --- | --- | --- |
| Weighted Sum @ 0.5 | HonyararaMix_pre5 | Replicant-V1.0 | HonyararaMix A |
7. Weighted-sum merge HonyararaMix_pre4 into HonyararaMix A
| Interpolation Method | Primary Model | Secondary Model | Merge Name |
| --- | --- | --- | --- |
| Weighted Sum @ 0.5 | HonyararaMix A | HonyararaMix_pre4 | HonyararaMix B |
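For reference (not part of the original card): "Weighted Sum @ w" computes `(1 - w) * A + w * B` per tensor, and "Add Difference @ w" computes `A + w * (B - C)`. A minimal sketch over raw state dicts, assuming the standard checkpoint-merger formulas (real tools such as Super Merger also handle dtypes, missing keys, and per-block weights):

```python
# Minimal sketch of the two merge modes used above, applied per tensor.
# Assumes all three checkpoints share the same set of keys.
import torch


def weighted_sum(a: dict, b: dict, w: float) -> dict:
    """Weighted Sum @ w: result = (1 - w) * A + w * B."""
    return {k: (1 - w) * a[k] + w * b[k] for k in a}


def add_difference(a: dict, b: dict, c: dict, w: float) -> dict:
    """Add Difference @ w: result = A + w * (B - C)."""
    return {k: a[k] + w * (b[k] - c[k]) for k in a}
```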
# ライセンスについて
Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/) |
mradermacher/NanoLM-0.3B-Instruct-v1-GGUF | mradermacher | "2025-04-01T23:15:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"base_model:Mxode/NanoLM-0.3B-Instruct-v1",
"base_model:quantized:Mxode/NanoLM-0.3B-Instruct-v1",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-01T23:11:43Z" | ---
base_model: Mxode/NanoLM-0.3B-Instruct-v1
language:
- en
- zh
library_name: transformers
license: gpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Mxode/NanoLM-0.3B-Instruct-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
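Concatenation itself is a plain byte-level join of the parts in order; a minimal sketch, assuming hypothetical part filenames (check the repo for the real ones):

```python
# Minimal sketch: rejoin split GGUF parts into a single file, in order.
# Part filenames are assumptions; use the actual names from the repo.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams, avoids loading a whole part into RAM
```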
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NanoLM-0.3B-Instruct-v1-GGUF/resolve/main/NanoLM-0.3B-Instruct-v1.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SiqiYang/q-FrozenLake-v1-4x4-noSlippery | SiqiYang | "2023-12-26T13:33:31Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-26T13:33:29Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # needed for env creation below

model = load_from_hub(repo_id="SiqiYang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
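`load_from_hub` is not defined in the snippet above; in the Deep RL course it is a small wrapper around `huggingface_hub` plus `pickle`. A minimal sketch of that helper (the pickle layout, a dict with keys such as `"env_id"` and the Q-table, is assumed to match the course's):

```python
# Minimal sketch of the load_from_hub helper used above.
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and load it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```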
|
TopperThijs/Gemma-7b-complete-8-15-BV | TopperThijs | "2024-10-22T08:35:31Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-10-22T08:09:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ferrag/Trust-DeepCoder-6.7b-Instruct | Ferrag | "2024-03-18T20:44:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"region:us"
] | null | "2024-03-18T20:20:27Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
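For reference (not part of the original card), these fields map directly onto `transformers`' `BitsAndBytesConfig`; a sketch reproducing the listed configuration:

```python
# Minimal sketch: the quantization config above expressed as a
# transformers BitsAndBytesConfig (values copied from the listing).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```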
### Framework versions
- PEFT 0.6.0.dev0
|
pellucid/my_awesome_opus100_model | pellucid | "2023-06-21T15:57:28Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-21T07:37:46Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: my_awesome_opus100_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-ko
split: train
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus100_model
This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 250 | nan | 2.9676 | 12.146 |
| 2.5985 | 2.0 | 500 | nan | 0.0 | 0.0 |
| 2.5985 | 3.0 | 750 | nan | 0.0 | 0.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
juniorVision/qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput | juniorVision | "2025-03-31T12:00:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-31T10:37:30Z" | ---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: transformers
model_name: qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="juniorVision/qwen2.5-14b-lr1e-5_customLoss_hanja15_wohanjainput", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.46.3
- Pytorch: 2.5.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/LemonGiga-i1-GGUF | mradermacher | "2025-02-04T15:52:14Z" | 379 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:itchindigo/LemonGiga",
"base_model:quantized:itchindigo/LemonGiga",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-02-04T14:06:57Z" | ---
base_model: itchindigo/LemonGiga
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/itchindigo/LemonGiga
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LemonGiga-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LemonGiga-i1-GGUF/resolve/main/LemonGiga.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Shakhovak/flan-t5-base-absa-laptops | Shakhovak | "2024-04-11T09:32:51Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-11T19:24:02Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-absa-laptops
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-absa-laptops
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7479 | 0.9 | 200 | 0.2617 |
| 0.3468 | 1.8 | 400 | 0.1992 |
| 0.2645 | 2.7 | 600 | 0.1779 |
| 0.2216 | 3.6 | 800 | 0.1574 |
| 0.197 | 4.5 | 1000 | 0.1789 |
| 0.1676 | 5.41 | 1200 | 0.1710 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|