| modelId (string, 5-138 chars) | author (string, 2-42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-11 18:27:37) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 421 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-11 18:27:06) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
DanielTTY/mpt-30b-qlora | DanielTTY | "2023-07-12T04:59:41Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-12T04:59:40Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
duplicated_from: mosaicml/mpt-30b
---
# MPT-30B
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
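For illustration only (this is not an official example from this card), 8-bit loading on a single GPU could look like the following sketch, assuming the `bitsandbytes` and `accelerate` packages are installed:
```python
import transformers

# Hypothetical 8-bit loading sketch for a single A100-40GB.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b',
    load_in_8bit=True,   # requires bitsandbytes
    device_map='auto',   # requires accelerate
    trust_remote_code=True
)
```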
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)).
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry).
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following.
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
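As an illustrative sketch (not the actual training setup; the remote path is hypothetical), streaming such a dataset looks roughly like:
```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset  # mosaicml-streaming package

# Stream pre-processed shards from object storage with a local cache.
dataset = StreamingDataset(remote='s3://my-bucket/mpt-data', local='/tmp/mds-cache', shuffle=True)
loader = DataLoader(dataset, batch_size=8)
```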
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B | 0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
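As a toy illustration of this sampling scheme (not the actual training code; only three of the ten sources are shown), each batch element can draw its source dataset according to the mix probabilities:
```python
import random

# Mix probabilities from the table above (remaining sources omitted here).
mix = {
    'mc4-en': 0.335,
    'c4-semdedup': 0.299,
    'the-stack-selected': 0.100,
}
sources, weights = zip(*mix.items())
# Pick a source dataset for each of the 8 samples in a batch.
batch_sources = random.choices(sources, weights=weights, k=8)
print(batch_sources)
```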
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
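For illustration, rounding a vocabulary up to the next multiple of 128 is a one-liner (the raw size below is a placeholder, not the true GPT-NeoX token count):
```python
import math

def pad_vocab(n: int, multiple: int = 128) -> int:
    # Round a vocabulary size up to the next multiple, as in MEGATRON-LM.
    return math.ceil(n / multiple) * multiple

assert pad_vocab(50310) == 50432  # placeholder raw size -> MPT-30B's padded size
```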
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
``` |
SHAPIE-RAIN-SPIDERMAN-M-E/shapie.rain.spiderman.nxn.video.on.social.media | SHAPIE-RAIN-SPIDERMAN-M-E | "2025-03-25T19:24:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-25T19:23:53Z" | |
mrHungddddh/1b34f7f5-bdc5-4044-819d-accceb298840 | mrHungddddh | "2025-01-27T03:14:07Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T03:05:57Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1b34f7f5-bdc5-4044-819d-accceb298840
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0e9cb1e984c7a59c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e9cb1e984c7a59c_train_data.json
type:
field_input: "\uACFC\uBAA9\uBD84\uC57C"
field_instruction: "\uAC15\uC758\uC720\uD615"
field_output: row2text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/1b34f7f5-bdc5-4044-819d-accceb298840
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0e9cb1e984c7a59c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fc2b14f-4a45-408a-ab45-08a493ad51d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fc2b14f-4a45-408a-ab45-08a493ad51d3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1b34f7f5-bdc5-4044-819d-accceb298840
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9218
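A minimal loading sketch for this adapter (assumed usage; not part of the auto-generated card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
model = PeftModel.from_pretrained(base, "mrHungddddh/1b34f7f5-bdc5-4044-819d-accceb298840")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
```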
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.901 | 0.6814 | 200 | 0.9218 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso09/fef0bc25-f7f1-4df4-bc65-356853dcd488 | lesso09 | "2025-01-22T13:46:59Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T12:23:06Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fef0bc25-f7f1-4df4-bc65-356853dcd488
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- cd458f1cb599b6de_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cd458f1cb599b6de_train_data.json
type:
field_input: ''
field_instruction: Document
field_output: Summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/fef0bc25-f7f1-4df4-bc65-356853dcd488
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/cd458f1cb599b6de_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a4f7b8f6-8234-4cb1-ba63-94dbad515628
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a4f7b8f6-8234-4cb1-ba63-94dbad515628
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fef0bc25-f7f1-4df4-bc65-356853dcd488
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5058 | 0.0001 | 1 | 3.5306 |
| 2.7088 | 0.0003 | 5 | 3.4980 |
| 3.208 | 0.0006 | 10 | 3.3453 |
| 3.2019 | 0.0009 | 15 | 3.2412 |
| 3.4256 | 0.0012 | 20 | 3.1600 |
| 3.1265 | 0.0015 | 25 | 3.1395 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sophie-Rain-Spiderman-Video-Updatessssssss/Sophie.Rain.Spider-Man.Video.Tutorial.Official.clips | Sophie-Rain-Spiderman-Video-Updatessssssss | "2025-03-23T22:20:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-23T22:20:10Z" |
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
03 seconds ago
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter |
vidore/baseline-results | vidore | "2025-03-17T15:56:47Z" | 0 | 0 | null | [
"vidore",
"license:mit",
"region:us"
] | null | "2024-06-26T15:56:45Z" | ---
license: mit
tags:
- vidore
---
## Description
Contains the evaluation results for the baseline models from the paper *ColPali: Efficient Document Retrieval with Vision Language Models*. |
mrapacz/interlinear-pl-mt5-base-t-w-t-normalized-ob | mrapacz | "2025-02-21T21:31:16Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"pl",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-08T12:25:37Z" | ---
license: cc-by-sa-4.0
language:
- pl
metrics:
- bleu
base_model:
- mT5-base
library_name: transformers
datasets:
- mrapacz/greek-interlinear-translations
---
# Model Card for Ancient Greek to Polish Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MT5ForConditionalGeneration
- **Base Model:** mT5-base
- **Tokenizer:** mT5
- **Language(s):** Ancient Greek (source) → Polish (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** t-w-t (tags-within-text)
### Model Performance
- **BLEU Score:** 0.24
- **SemScore:** 0.66
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
```python
>>> from transformers import MT5ForConditionalGeneration, T5TokenizerFast
>>> text_blocks = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> tag_blocks = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> combined_text = []
>>> for text, tag in zip(text_blocks, tag_blocks):
... combined_text.append(f"{text} <extra_id_1>{tag}")
>>> formatted_text = " <extra_id_0> ".join(combined_text)
>>> tokenizer = T5TokenizerFast.from_pretrained("mrapacz/interlinear-pl-mt5-base-t-w-t-normalized-ob")
>>> inputs = tokenizer(
text=formatted_text,
return_tensors="pt"
)
>>> model = MT5ForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-t-w-t-normalized-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'I mówi mu - Jezus niech uczyni chleb - chleba twojego i chodzi'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
``` |
Pablonm/FineLlama-3.1-8B_v12 | Pablonm | "2025-02-16T17:57:03Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-16T17:48:59Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Pablonm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
carolinetfls/plant-seedlings-freeze-0-6-aug-3-no-lr-decay | carolinetfls | "2023-04-24T21:30:40Z" | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-04-24T20:49:11Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: plant-seedlings-freeze-0-6-aug-3-no-lr-decay
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8772102161100196
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant-seedlings-freeze-0-6-aug-3-no-lr-decay
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4328
- Accuracy: 0.8772
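A minimal inference sketch (assumed usage; the image path is a placeholder):
```python
from transformers import pipeline

# Classify a seedling photo with the fine-tuned ViT.
clf = pipeline("image-classification", model="carolinetfls/plant-seedlings-freeze-0-6-aug-3-no-lr-decay")
print(clf("seedling.jpg"))
```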
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.546 | 1.59 | 100 | 0.7861 | 0.7421 |
| 0.4435 | 3.17 | 200 | 0.6630 | 0.7785 |
| 0.4183 | 4.76 | 300 | 0.5526 | 0.8286 |
| 0.2947 | 6.35 | 400 | 0.4689 | 0.8625 |
| 0.0971 | 7.94 | 500 | 0.4795 | 0.8620 |
| 0.1414 | 9.52 | 600 | 0.4423 | 0.8708 |
| 0.1034 | 11.11 | 700 | 0.4327 | 0.8806 |
| 0.0293 | 12.7 | 800 | 0.4328 | 0.8772 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
sairamn/Phi3-Legal-Finetuned | sairamn | "2025-02-26T19:07:58Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"legal",
"phi3",
"en",
"dataset:sairamn/in-abs-judgement-summary-formatted",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-02-26T18:52:26Z" | ---
tags:
- text-generation
- legal
- phi3
license: mit
model-index:
- name: Phi3-Legal-Finetuned
results: []
datasets:
- sairamn/in-abs-judgement-summary-formatted
language:
- en
base_model:
- microsoft/Phi-3-mini-128k-instruct
---
# Phi3-Legal-Finetuned
This is a fine-tuned version of the Phi-3 Mini model for legal text generation tasks.
## Model Details
- **Base Model:** Microsoft Phi-3 Mini 128K
- **Fine-tuned On:** Legal documents and summaries
- **Context Length:** 128K tokens
- **License:** MIT
## Usage
You can load the model using Hugging Face Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sairamn/Phi3-Legal-Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
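Continuing from the snippet above, a short generation sketch (assumed usage; the prompt is a placeholder):
```python
# Generate a summary for a judgement excerpt.
prompt = "Summarize the following judgement: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```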
## Limitations
- The model is not a substitute for professional legal advice.
- May generate incorrect or biased information.
## Acknowledgments
- Based on Microsoft Phi-3 Mini.
## Citation
If you use this model, please cite accordingly. |
facebook/mms-tts-atg | facebook | "2023-09-01T13:24:49Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T13:24:00Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Ivbie North-Okpela-Arhe Text-to-Speech
This repository contains the **Ivbie North-Okpela-Arhe (atg)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-atg")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-atg")
text = "some example text in the Ivbie North-Okpela-Arhe language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTeX citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
winegarj/distilbert-base-uncased-finetuned-sst2 | winegarj | "2025-03-28T01:59:18Z" | 27 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-09T19:56:14Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Accuracy: 0.9037
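A minimal inference sketch (assumed usage):
```python
from transformers import pipeline

# Run SST-2 style sentiment classification.
clf = pipeline("text-classification", model="winegarj/distilbert-base-uncased-finetuned-sst2")
print(clf("A charming and often affecting journey."))
```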
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 132 | 0.2510 | 0.9014 |
| No log | 2.0 | 264 | 0.2754 | 0.9037 |
| No log | 3.0 | 396 | 0.2798 | 0.9025 |
| 0.1936 | 4.0 | 528 | 0.2905 | 0.9037 |
| 0.1936 | 5.0 | 660 | 0.2984 | 0.9025 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
lesso02/c84fee48-de43-4de0-ac0e-fcc5bbc90f33 | lesso02 | "2025-03-23T03:23:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | "2025-03-23T03:09:34Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c84fee48-de43-4de0-ac0e-fcc5bbc90f33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74abca5b35871829_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74abca5b35871829_train_data.json
type:
field_instruction: chat
field_output: system
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso02/c84fee48-de43-4de0-ac0e-fcc5bbc90f33
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/74abca5b35871829_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 20
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 029def98-2bf0-4673-93ae-3a488c6daa7a
wandb_project: 02a
wandb_run: your_name
wandb_runid: 029def98-2bf0-4673-93ae-3a488c6daa7a
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c84fee48-de43-4de0-ac0e-fcc5bbc90f33
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.6935 |
| 0.0011 | 0.1502 | 500 | 0.0011 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Benevolent/PonyDiffusionArtStyle | Benevolent | "2024-03-11T23:44:50Z" | 42 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2024-03-11T23:35:23Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9,score_8_up,score_7_up,<lora:PDFA_Style6559515941d2a06gas10-000525:0.8>,1girl,solo,bangs,blush,smile,looking
at viewer,ahoge,animal ear fluff,animal ears,animal hands,apron,blonde
hair,blue eyes,braid,chibi,closed mouth,dog ears,dog
tail,dress,enmaided,frilled apron,frilled dress,frills,gloves,hair between
eyes,hands up,long hair,maid,maid apron,maid headdress,one eye closed,puffy
short sleeves,puffy sleeves,short sleeves,tail,teeth,waist apron,white
apron,white gloves
parameters:
negative_prompt: >-
score_4,score_5,score_6,source_pony,source_furry,monochrome,3d,photo,hyperrealistic,realstic,rough
sketch,fewer digits,extra digits,signature,artist name,blur,lowres,worst
quality,low quality,bad anatomy,bad hands,missing fingers
output:
url: images/00010-1248823506.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# 【Art Style】 Parsley - Pony Diffusion for Anime
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/PonyDiffusionArtStyle/tree/main) them in the Files & versions tab.
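A minimal sketch (assumed usage) of applying this LoRA to its SDXL base model with diffusers:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA weights from this repository.
pipe.load_lora_weights("Benevolent/PonyDiffusionArtStyle")
```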
|
Yntec/EstheticRetroAnime5 | Yntec | "2025-02-25T09:07:11Z" | 268 | 0 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Vintage",
"Sexy",
"OneRing",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:Yntec/EstheticRetroAnime",
"base_model:finetune:Yntec/EstheticRetroAnime",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-02-24T11:24:11Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
base_model: Yntec/EstheticRetroAnime
tags:
- Anime
- Vintage
- Sexy
- OneRing
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# Esthetic Retro Anime V5
Use 1990s \\\(style\\\), 1980s \\\(style\\\) or Retro artstyle in your prompts to trigger the effects. Showcase and prompts (all use seed 9119):

Retro artstyle pepperoni pizza, josephine wall winner, hidari, roll20 illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, Cartoon, sharp focus, 4k, towel. comic art on canvas by kyoani and ROSSDRAWS and watched. videogames, robert jordan

Anime cute girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo

Highly detailed, High Quality, Masterpiece, beautiful, cute girl as toon link, headwear, Zelda. Smile

1girl, pink hair,
Original page:
https://civitai.com/models/137781?modelVersionId=183693
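A minimal text-to-image sketch (assumed usage) using one of the trigger phrases:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/EstheticRetroAnime5", torch_dtype=torch.float16
).to("cuda")
image = pipe("Retro artstyle, 1girl, pink hair").images[0]
image.save("retro_anime.png")
```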
# V4, V3 & V2.1
Other versions also included. For V1 check: https://huggingface.co/Yntec/EstheticRetroAnime |
SliMM-X/CoMP-MM-1B | SliMM-X | "2025-03-27T08:56:04Z" | 70 | 1 | slimm | [
"slimm",
"safetensors",
"qwen2",
"image-text-to-text",
"conversational",
"arxiv:2503.18931",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | image-text-to-text | "2025-03-24T14:49:08Z" | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: slimm
---
# Model Card for CoMP-MM-1B
<!-- Provide a quick summary of what the model is/does. -->
This is an LMM that supports **native image resolution inputs**, composed of [CoMP-SigLIP](https://huggingface.co/SliMM-X/CoMP-SigLIP-So400M) and [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/SliMM-X/CoMP-MM
- **Paper:** https://arxiv.org/abs/2503.18931
- **Project Page:** https://slimm-x.github.io/comp
## How to Get Started with the Model
Install the github repo, and use the code below to get started with the model.
```python
# this is very similar to qwen2-vl
from slimm.model.processor import SliMMQwen2VLProcessor
from slimm.model.slimm import SliMMForConditionalGeneration
from slimm.model.utils_vl import process_vision_info
model_path = "SliMM-X/CoMP-MM-1B"
model = SliMMForConditionalGeneration.from_pretrained(
model_path, torch_dtype="auto", device_map="cuda"
)
processor = SliMMQwen2VLProcessor.from_pretrained(model_path)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://slimm-x.github.io/comp/figs/teaser.png",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Citation
**BibTeX:**
```bibtex
@article{comp2025,
title={CoMP: Continual Multimodal Pre-training for Vision Foundation Models},
author={Chen, Yitong and Meng, Lingchen and Peng, Wujian and Wu, Zuxuan and Jiang, Yu-Gang},
year={2025},
journal={arXiv preprint arXiv:2503.18931},
}
``` |
texanrangee/c35b8ab5-9922-468a-ab0c-f920c605acb8 | texanrangee | "2025-02-26T06:54:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-26T06:33:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ketut/IndoBERTkbli | ketut | "2025-04-08T03:51:09Z" | 1,266 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"id",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-28T14:52:54Z" | ---
license: apache-2.0
language:
- id
base_model:
- indobenchmark/indobert-base-p2
pipeline_tag: text-classification
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A classification model for 5-digit KBLI group codes (KBLI is the Indonesian Standard Industrial Classification).
To load the accompanying label encoder:
```python
import joblib
le = joblib.load("label_encoder_base_p2_augmented.pkl")
``` |
igupta/llama3b-alpaca-subset5 | igupta | "2025-03-22T09:53:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-22T08:27:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
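The card leaves this blank, but the tags indicate a 🤗 Transformers causal language model, so a minimal sketch might look like the following (prompt and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "igupta/llama3b-alpaca-subset5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short completion for an example prompt
inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```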
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
semin-pk/jeju-to-standard-tokenizer | semin-pk | "2025-03-27T07:14:12Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-27T07:14:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
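The card leaves this blank; since the repo appears to contain only tokenizer files, loading the tokenizer is likely the relevant entry point (the sample sentence is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("semin-pk/jeju-to-standard-tokenizer")
# Tokenize a sample Jeju-dialect greeting (example text is illustrative)
print(tokenizer.tokenize("혼저옵서예"))
```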
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arfik/names_qwen2.5-1.5B_model-GGUF4 | arfik | "2025-02-26T18:09:44Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-26T18:09:35Z" | ---
base_model: unsloth/qwen2.5-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** arfik
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/JeloH_-_qwen-textgen-model3-gguf | RichardErkhov | "2025-03-23T22:43:10Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-23T22:18:27Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen-textgen-model3 - GGUF
- Model creator: https://huggingface.co/JeloH/
- Original model: https://huggingface.co/JeloH/qwen-textgen-model3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen-textgen-model3.Q2_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q2_K.gguf) | Q2_K | 0.63GB |
| [qwen-textgen-model3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [qwen-textgen-model3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [qwen-textgen-model3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [qwen-textgen-model3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [qwen-textgen-model3.Q3_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q3_K.gguf) | Q3_K | 0.77GB |
| [qwen-textgen-model3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [qwen-textgen-model3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [qwen-textgen-model3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [qwen-textgen-model3.Q4_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q4_0.gguf) | Q4_0 | 0.87GB |
| [qwen-textgen-model3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [qwen-textgen-model3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [qwen-textgen-model3.Q4_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q4_K.gguf) | Q4_K | 0.92GB |
| [qwen-textgen-model3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [qwen-textgen-model3.Q4_1.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q4_1.gguf) | Q4_1 | 0.95GB |
| [qwen-textgen-model3.Q5_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q5_0.gguf) | Q5_0 | 1.02GB |
| [qwen-textgen-model3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [qwen-textgen-model3.Q5_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q5_K.gguf) | Q5_K | 1.05GB |
| [qwen-textgen-model3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [qwen-textgen-model3.Q5_1.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q5_1.gguf) | Q5_1 | 1.1GB |
| [qwen-textgen-model3.Q6_K.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q6_K.gguf) | Q6_K | 1.19GB |
| [qwen-textgen-model3.Q8_0.gguf](https://huggingface.co/RichardErkhov/JeloH_-_qwen-textgen-model3-gguf/blob/main/qwen-textgen-model3.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ManuD/Meta-Llama-3.1-8B-Instruct-checkpoint-1506 | ManuD | "2024-08-12T11:57:55Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-12T11:51:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
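The card leaves this blank, but the tags indicate a conversational Llama-style model, so a minimal pipeline sketch might look like this (message content and settings are assumptions):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ManuD/Meta-Llama-3.1-8B-Instruct-checkpoint-1506",
    device_map="auto",
)
# Chat-format input; the pipeline applies the model's chat template
messages = [{"role": "user", "content": "Summarize what you can help with."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```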
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/b126e93f-be24-40b8-9e25-43a9ded5a087 | daniel40 | "2025-01-30T17:58:19Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | "2025-01-30T17:45:00Z" | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b126e93f-be24-40b8-9e25-43a9ded5a087
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b16a98cdabfe81ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b16a98cdabfe81ae_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/b126e93f-be24-40b8-9e25-43a9ded5a087
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b16a98cdabfe81ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9c989506-98ce-49b3-8b31-1226db68b45d
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9c989506-98ce-49b3-8b31-1226db68b45d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b126e93f-be24-40b8-9e25-43a9ded5a087
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4966
## Model description
More information needed
## Intended uses & limitations
More information needed
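Since this repo contains a LoRA adapter for NousResearch/Meta-Llama-3-8B (per the config below), a minimal loading sketch with PEFT might look like this (an untested assumption, not an official recipe):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B"
adapter_id = "daniel40/b126e93f-be24-40b8-9e25-43a9ded5a087"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the fine-tuned LoRA weights on top of the base model
model = PeftModel.from_pretrained(model, adapter_id)
```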
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5888 | 0.0001 | 1 | 2.3202 |
| 0.7488 | 0.0016 | 13 | 0.9855 |
| 0.6488 | 0.0033 | 26 | 0.5414 |
| 0.3576 | 0.0049 | 39 | 0.4966 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cirayusihh00912/f5acffed-b88d-4e10-8c45-a98077d9670e | cirayusihh00912 | "2025-04-07T09:18:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T08:51:58Z" | |
alvarobartt/ghibli-characters-sd3.5-lora | alvarobartt | "2024-11-19T14:25:23Z" | 36 | 9 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"sd3.5",
"sd3.5-diffusers",
"template:sd-lora",
"endpoints-template",
"dataset:alvarobartt/ghibli-characters",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"endpoints_compatible",
"region:us"
] | text-to-image | "2024-10-23T12:04:39Z" | ---
base_model: stabilityai/stable-diffusion-3.5-large
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- sd3.5
- sd3.5-diffusers
- template:sd-lora
- endpoints-template
instance_prompt: >-
Ghibli style [character description] with [distinctive features], [action or
pose], [environment or background], [lighting or atmosphere], [additional
details]
license: other
license_name: stabilityai-ai-community
license_link: https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md
widget:
- text: >-
Ghibli style futuristic stormtrooper with glossy white armor and a sleek
helmet, standing heroically on a lush alien planet, vibrant flowers blooming
around, soft sunlight illuminating the scene, a gentle breeze rustling the
leaves
output:
url: images/stormtrooper.jpg
datasets:
- alvarobartt/ghibli-characters
pipeline_tag: text-to-image
---
# Studio Ghibli Characters - SD 3.5 Large LoRA
<Gallery />
## Model description
This repository contains the LoRA adapter for [StableDiffusion 3.5 Large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large), fine-tuned using [Diffusers + Dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sd3.py)
with free-to-use Studio Ghibli images downloaded manually from https://ghibli.jp and hosted at [alvarobartt/ghibli-characters](https://huggingface.co/datasets/alvarobartt/ghibli-characters).
## Prompt Template
You should use the following template (defined when annotating the images with the captions) to trigger the image generation:
"Ghibli style [character description] with [distinctive features], [action or pose], [environment or background], [lighting or atmosphere], [additional details]"
## Inference with `diffusers`
> [!NOTE]
> Requires `diffusers` 0.31.0 or higher, see the [release notes](https://github.com/huggingface/diffusers/releases/tag/v0.31.0).
```python
import torch
from diffusers import DiffusionPipeline
model_id = "stabilityai/stable-diffusion-3.5-large"
adapter_id = "alvarobartt/ghibli-characters-sd3.5-lora"
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)
pipeline.to("cuda")
prompt = (
"Ghibli style futuristic stormtrooper with glossy white armor and a sleek helmet,"
" standing heroically on a lush alien planet, vibrant flowers blooming around, soft"
" sunlight illuminating the scene, a gentle breeze rustling the leaves"
)
image = pipeline(
prompt=prompt,
num_inference_steps=30,
width=1024,
height=768,
guidance_scale=3.5,
).images[0]
image.save("ghibli.png", format="PNG")
```
## Disclaimer
This fine-tune is for **personal use only**, with **non-commercial purposes**, as stated in the licensing.
The StableDiffusion 3.5 Large fine-tunes fall under the same license as StableDiffusion 3.5 Large, i.e. https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md;
and the Studio Ghibli dataset is released under a custom non-commercial license based on the interpretation of the terms stated on their website. |
mradermacher/skin-disease-treatment-plan-GGUF | mradermacher | "2025-02-28T03:15:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Unmeshraj/skin-disease-treatment-plan",
"base_model:quantized:Unmeshraj/skin-disease-treatment-plan",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2025-02-28T03:10:34Z" | ---
base_model: Unmeshraj/skin-disease-treatment-plan
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Unmeshraj/skin-disease-treatment-plan
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
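As a concrete illustration, a minimal llama-cpp-python sketch might look like this (the local file name is an assumption; since the repo is tagged feature-extraction, embedding mode is shown):
```python
from llama_cpp import Llama

# Load a downloaded quant in embedding mode (file name assumed; see the table below)
llm = Llama(model_path="skin-disease-treatment-plan.Q4_K_M.gguf", embedding=True)
vector = llm.embed("patient presents with eczema on both forearms")
print(len(vector))  # embedding dimensionality
```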
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/skin-disease-treatment-plan-GGUF/resolve/main/skin-disease-treatment-plan.f16.gguf) | f16 | 0.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
royal42/gpt2chess_scratch | royal42 | "2023-04-21T03:12:39Z" | 122 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-18T18:11:13Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2chess_scratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2chess_scratch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6902
## Model description
More information needed
## Intended uses & limitations
More information needed
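A minimal generation sketch (the space-separated move-list prompt format is an assumption; the card does not document the expected input):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="royal42/gpt2chess_scratch")
# Ask the model to continue a game given a move list (format assumed)
print(generator("e4 e5 Nf3 Nc6", max_new_tokens=8)[0]["generated_text"])
```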
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3139 | 0.7 | 5000 | 0.9635 |
| 0.9239 | 1.4 | 10000 | 0.8407 |
| 0.8367 | 2.1 | 15000 | 0.7807 |
| 0.7813 | 2.8 | 20000 | 0.7415 |
| 0.7373 | 3.5 | 25000 | 0.7127 |
| 0.7073 | 4.2 | 30000 | 0.6955 |
| 0.6853 | 4.9 | 35000 | 0.6902 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Kevinger/Hub-Repoop-1706130115 | Kevinger | "2024-01-24T21:12:22Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mpnet",
"text-classification",
"generated_from_trainer",
"base_model:Kevinger/setfit-hub-report",
"base_model:finetune:Kevinger/setfit-hub-report",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-24T21:02:29Z" | ---
base_model: Kevinger/setfit-hub-report
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Hub-Repoop-1706130115
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hub-Repoop-1706130115
This model is a fine-tuned version of [Kevinger/setfit-hub-report](https://huggingface.co/Kevinger/setfit-hub-report) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1746
- F1: 0.7727
- Roc Auc: 0.8666
- Accuracy: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
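Given the F1/ROC-AUC metrics below, this appears to be a multi-label classifier; a minimal inference sketch (the 0.5 threshold and example text are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Kevinger/Hub-Repoop-1706130115"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example report text to label", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label decoding: sigmoid per class, keep scores above the threshold
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```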
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 277 | 0.3104 | 0.0 | 0.5 | 0.0 |
| 0.3599 | 2.0 | 554 | 0.2312 | 0.5624 | 0.7027 | 0.4135 |
| 0.3599 | 3.0 | 831 | 0.1897 | 0.7479 | 0.8275 | 0.6730 |
| 0.2035 | 4.0 | 1108 | 0.1768 | 0.7645 | 0.8546 | 0.7363 |
| 0.2035 | 5.0 | 1385 | 0.1760 | 0.7649 | 0.8579 | 0.7447 |
| 0.1422 | 6.0 | 1662 | 0.1788 | 0.7559 | 0.8562 | 0.7447 |
| 0.1422 | 7.0 | 1939 | 0.1766 | 0.7633 | 0.8617 | 0.7553 |
| 0.1096 | 8.0 | 2216 | 0.1746 | 0.7727 | 0.8666 | 0.7637 |
| 0.1096 | 9.0 | 2493 | 0.1891 | 0.7457 | 0.8511 | 0.7363 |
| 0.0891 | 10.0 | 2770 | 0.1855 | 0.7524 | 0.8540 | 0.7405 |
| 0.0769 | 11.0 | 3047 | 0.1824 | 0.7655 | 0.8629 | 0.7574 |
| 0.0769 | 12.0 | 3324 | 0.1835 | 0.7655 | 0.8629 | 0.7532 |
| 0.0706 | 13.0 | 3601 | 0.1867 | 0.7604 | 0.8603 | 0.7511 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
texanrangee/f159d783-2475-4020-8281-53badf9b2641 | texanrangee | "2025-03-07T06:33:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-07T05:11:30Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onnx-community/maskformer-resnet50-coco-stuff | onnx-community | "2024-10-08T13:54:42Z" | 5 | 0 | transformers.js | [
"transformers.js",
"onnx",
"maskformer",
"image-segmentation",
"base_model:facebook/maskformer-resnet50-coco-stuff",
"base_model:quantized:facebook/maskformer-resnet50-coco-stuff",
"region:us"
] | image-segmentation | "2024-09-02T13:59:12Z" | ---
base_model: facebook/maskformer-resnet50-coco-stuff
library_name: transformers.js
pipeline_tag: image-segmentation
---
https://huggingface.co/facebook/maskformer-resnet50-coco-stuff with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Image segmentation with `onnx-community/maskformer-resnet50-coco-stuff`.
```js
import { pipeline } from '@huggingface/transformers';
// Create an image segmentation pipeline
const segmenter = await pipeline('image-segmentation', 'onnx-community/maskformer-resnet50-coco-stuff');
// Segment an image
const url = 'http://images.cocodataset.org/val2017/000000039769.jpg';
const output = await segmenter(url);
console.log(output)
// [
// {
// score: 0.9626941680908203,
// label: 'couch',
// mask: RawImage { ... }
// },
// {
// score: 0.9967071413993835,
// label: 'cat',
// mask: RawImage { ... }
// },
// ...
// }
// ]
```
You can visualize the outputs with:
```js
for (let i = 0; i < output.length; ++i) {
const { mask, label } = output[i];
mask.save(`${label}-${i}.png`);
}
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
adil-o/A2C-Pixelcopter-v1 | adil-o | "2022-09-20T13:03:10Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2022-09-20T13:03:03Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: A2C-Pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.00 +/- 8.37
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
tensorblock/Qwen2.5-14B-Wernicke-GGUF | tensorblock | "2024-11-20T17:35:54Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:CultriX/Qwen2.5-14B-Wernicke",
"base_model:quantized:CultriX/Qwen2.5-14B-Wernicke",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-20T16:45:29Z" | ---
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
base_model: CultriX/Qwen2.5-14B-Wernicke
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-14B-Wernicke
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 52.35
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 50.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 30.06
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.13
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.25
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.15
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-Wernicke
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## CultriX/Qwen2.5-14B-Wernicke - GGUF
This repo contains GGUF format model files for [CultriX/Qwen2.5-14B-Wernicke](https://huggingface.co/CultriX/Qwen2.5-14B-Wernicke).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-14B-Wernicke-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-14B-Wernicke-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [Qwen2.5-14B-Wernicke-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [Qwen2.5-14B-Wernicke-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [Qwen2.5-14B-Wernicke-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-14B-Wernicke-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [Qwen2.5-14B-Wernicke-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [Qwen2.5-14B-Wernicke-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-14B-Wernicke-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [Qwen2.5-14B-Wernicke-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [Qwen2.5-14B-Wernicke-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [Qwen2.5-14B-Wernicke-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-14B-Wernicke-GGUF/blob/main/Qwen2.5-14B-Wernicke-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Qwen2.5-14B-Wernicke-GGUF --include "Qwen2.5-14B-Wernicke-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen2.5-14B-Wernicke-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
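The same downloads can also be done from Python with `huggingface_hub` — a minimal equivalent of the first CLI call above:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into MY_LOCAL_DIR (directory name is illustrative).
hf_hub_download(
    repo_id="tensorblock/Qwen2.5-14B-Wernicke-GGUF",
    filename="Qwen2.5-14B-Wernicke-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
```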
|
RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf | RichardErkhov | "2024-07-24T19:34:47Z" | 17 | 0 | null | [
"gguf",
"arxiv:2402.01030",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-24T14:46:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeActAgent-Mistral-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/xingyaoww/
- Original model: https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeActAgent-Mistral-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [CodeActAgent-Mistral-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [CodeActAgent-Mistral-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [CodeActAgent-Mistral-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [CodeActAgent-Mistral-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [CodeActAgent-Mistral-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [CodeActAgent-Mistral-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [CodeActAgent-Mistral-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [CodeActAgent-Mistral-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [CodeActAgent-Mistral-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [CodeActAgent-Mistral-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [CodeActAgent-Mistral-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [CodeActAgent-Mistral-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [CodeActAgent-Mistral-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [CodeActAgent-Mistral-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [CodeActAgent-Mistral-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [CodeActAgent-Mistral-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [CodeActAgent-Mistral-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [CodeActAgent-Mistral-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [CodeActAgent-Mistral-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [CodeActAgent-Mistral-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [CodeActAgent-Mistral-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Mistral-7b-v0.1-gguf/blob/main/CodeActAgent-Mistral-7b-v0.1.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- xingyaoww/code-act
language:
- en
pipeline_tag: text-generation
tags:
- llm-agent
---
<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>
<p align="center">
<a href="https://github.com/xingyaoww/code-act">💻 Code</a>
•
<a href="https://arxiv.org/abs/2402.01030">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/xingyaoww/code-act" >🤗 Data (CodeActInstruct)</a>
•
<a href="https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1" >🤗 Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="https://chat.xwang.dev/">🤖 Chat with CodeActAgent!</a>
</p>
We propose to use executable Python **code** to consolidate LLM agents’ **act**ions into a unified action space (**CodeAct**).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions.

## Why CodeAct?
Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark [M<sup>3</sup>ToolEval](docs/EVALUATION.md) shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!

*Comparison between CodeAct and Text / JSON as action.*

*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*
## 📁 CodeActInstruct
We collect an instruction-tuning dataset, CodeActInstruct, that consists of 7k multi-turn interactions using CodeAct. The dataset is released at [huggingface dataset 🤗](https://huggingface.co/datasets/xingyaoww/code-act). Please refer to the paper and [this section](#-data-generation-optional) for details of data collection.

*Dataset statistics. Token statistics are computed using the Llama-2 tokenizer.*
## 🪄 CodeActAgent
Trained on **CodeActInstruct** and general conversations, **CodeActAgent** excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- **CodeActAgent-Mistral-7b-v0.1** (recommended, [model link](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)): using Mistral-7b-v0.1 as the base model with 32k context window.
- **CodeActAgent-Llama-7b** ([model link](https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b)): using Llama-2-7b as the base model with 4k context window.

*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*
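As a rough usage sketch (not the project's official snippet — see the linked code repository for the supported interface), the Mistral variant loads like any `transformers` causal LM, assuming the tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xingyaoww/CodeActAgent-Mistral-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# A single-turn agent-style request; the example prompt is illustrative.
messages = [{"role": "user", "content": "Plot y = x^2 for x in [0, 10]."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```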
Please check out [our paper](https://arxiv.org/abs/2402.01030) and [code](https://github.com/xingyaoww/code-act) for more details about data collection, model training, and evaluation.
## 📚 Citation
```bibtex
@misc{wang2024executable,
title={Executable Code Actions Elicit Better LLM Agents},
author={Xingyao Wang and Yangyi Chen and Lifan Yuan and Yizhe Zhang and Yunzhu Li and Hao Peng and Heng Ji},
year={2024},
eprint={2402.01030},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Litzy619/V0422MADP8 | Litzy619 | "2024-04-22T15:49:09Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"region:us"
] | null | "2024-04-22T05:15:41Z" | ---
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0422MADP8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0422MADP8
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
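For orientation, here is a minimal sketch of how the settings above map onto 🤗 `TrainingArguments`. The model's actual training script is not published, so this is an illustrative reconstruction, not the original code:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="V0422MADP8",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 x 16 = total train batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
)
```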
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.881 | 0.09 | 10 | 0.5408 |
| 0.2348 | 0.18 | 20 | 0.1196 |
| 0.1186 | 0.27 | 30 | 0.0956 |
| 0.0994 | 0.36 | 40 | 0.0828 |
| 0.0814 | 0.45 | 50 | 0.0769 |
| 0.0868 | 0.54 | 60 | 0.0796 |
| 0.0835 | 0.63 | 70 | 0.0785 |
| 0.0822 | 0.73 | 80 | 0.0807 |
| 0.0817 | 0.82 | 90 | 0.0692 |
| 0.0773 | 0.91 | 100 | 0.0687 |
| 0.0718 | 1.0 | 110 | 0.0666 |
| 0.064 | 1.09 | 120 | 0.0650 |
| 0.0681 | 1.18 | 130 | 0.0714 |
| 0.0661 | 1.27 | 140 | 0.0664 |
| 0.0598 | 1.36 | 150 | 0.0685 |
| 0.0718 | 1.45 | 160 | 0.0616 |
| 0.0645 | 1.54 | 170 | 0.0630 |
| 0.0659 | 1.63 | 180 | 0.0667 |
| 0.0625 | 1.72 | 190 | 0.0630 |
| 0.0756 | 1.81 | 200 | 0.0679 |
| 0.0669 | 1.9 | 210 | 0.0686 |
| 0.0655 | 1.99 | 220 | 0.0691 |
| 0.0567 | 2.08 | 230 | 0.0691 |
| 0.0583 | 2.18 | 240 | 0.0607 |
| 0.0551 | 2.27 | 250 | 0.0620 |
| 0.0497 | 2.36 | 260 | 0.0661 |
| 0.0542 | 2.45 | 270 | 0.0614 |
| 0.0473 | 2.54 | 280 | 0.0621 |
| 0.0443 | 2.63 | 290 | 0.0634 |
| 0.0492 | 2.72 | 300 | 0.0624 |
| 0.0537 | 2.81 | 310 | 0.0618 |
| 0.0464 | 2.9 | 320 | 0.0616 |
| 0.0526 | 2.99 | 330 | 0.0616 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
aleegis09/c653cc03-ff26-4383-b205-58773542b803 | aleegis09 | "2025-01-21T02:05:47Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"base_model:adapter:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"region:us"
] | null | "2025-01-21T01:43:07Z" | ---
library_name: peft
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c653cc03-ff26-4383-b205-58773542b803
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis09/c653cc03-ff26-4383-b205-58773542b803
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075526eb-32e0-4485-aab7-014e4d302171
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075526eb-32e0-4485-aab7-014e4d302171
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c653cc03-ff26-4383-b205-58773542b803
This model is a fine-tuned version of [samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f](https://huggingface.co/samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0234 | 0.0008 | 1 | 1.7760 |
| 1.0573 | 0.0407 | 50 | 1.2761 |
| 0.7829 | 0.0813 | 100 | 1.2511 |
| 1.3316 | 0.1220 | 150 | 1.0824 |
| 0.9138 | 0.1627 | 200 | 1.0665 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
OwOOwO/dumbo-llamalfg | OwOOwO | "2024-04-22T20:39:01Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T20:35:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Adleu/filip_dewinter_LoRA | Adleu | "2024-01-02T21:57:33Z" | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-01-02T21:57:28Z" |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK filip dewinter,
license: openrail++
---
# SDXL LoRA DreamBooth - Adleu/filip_dewinter_LoRA
<Gallery />
## Model description
These are Adleu/filip_dewinter_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK filip dewinter` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Adleu/filip_dewinter_LoRA/tree/main) them in the Files & versions tab.
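A minimal sketch of loading these weights with `diffusers` (standard SDXL LoRA loading; dtype and prompt are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline, then attach the LoRA weights from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Adleu/filip_dewinter_LoRA")

# The trigger phrase from above activates the learned concept.
image = pipe("a photo of TOK filip dewinter, portrait").images[0]
```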
|
bitextor/bicleaner-ai-full-en-ro | bitextor | "2023-03-27T11:58:10Z" | 32 | 0 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"ro",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | "2023-03-27T11:57:48Z" | ---
language:
- en
- ro
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-ro
Bicleaner AI is a tool that detects noisy sentence pairs in a parallel corpus. It
indicates the likelihood that a pair of sentences are mutual translations (value near 1) or not (value near 0).
Sentence pairs considered very noisy are scored 0.
Find out at our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
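As a rough sketch only — the repository linked above is authoritative for flags and model handling — scoring a tab-separated English–Romanian corpus with the `bicleaner-ai-classify` command might look like:

```shell
# corpus.en-ro.tsv: one "english<TAB>romanian" pair per line (assumed layout).
# Whether the command accepts a Hub id directly or needs a locally downloaded
# model depends on your bicleaner-ai version.
bicleaner-ai-classify corpus.en-ro.tsv scored.en-ro.tsv bitextor/bicleaner-ai-full-en-ro
```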
|
lesso02/3ea99431-f101-4260-8333-86ed42eaab9f | lesso02 | "2025-01-13T18:07:48Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T17:02:27Z" | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ea99431-f101-4260-8333-86ed42eaab9f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- b115b4998a0ec6c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b115b4998a0ec6c0_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso02/3ea99431-f101-4260-8333-86ed42eaab9f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/b115b4998a0ec6c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 375a0c96-71b2-49ec-9c40-cd98cfe5e1a3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 375a0c96-71b2-49ec-9c40-cd98cfe5e1a3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3ea99431-f101-4260-8333-86ed42eaab9f
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.8747 | 0.0001 | 1 | 2.5377 |
| 6.6979 | 0.0005 | 5 | 2.5217 |
| 9.7445 | 0.0009 | 10 | 2.2452 |
| 8.0454 | 0.0014 | 15 | 2.2552 |
| 11.9565 | 0.0019 | 20 | 2.0987 |
| 10.7773 | 0.0023 | 25 | 2.0839 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbhatia3/Llama-2-13b-chat-hf-finetune_finetune_csi | nbhatia3 | "2025-04-09T09:24:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-09T09:17:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Best000/767cb4c6-3ad3-46f4-b827-ca227a9648b9 | Best000 | "2025-01-23T08:14:37Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2025-01-23T08:12:33Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 767cb4c6-3ad3-46f4-b827-ca227a9648b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aba4cbb8799260b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aba4cbb8799260b0_train_data.json
type:
field_instruction: questions
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/767cb4c6-3ad3-46f4-b827-ca227a9648b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aba4cbb8799260b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2f455866-9ba0-47c4-9984-af49b419b951
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2f455866-9ba0-47c4-9984-af49b419b951
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 767cb4c6-3ad3-46f4-b827-ca227a9648b9
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5717 | 0.0008 | 1 | nan |
| 2.1874 | 0.0024 | 3 | nan |
| 4.6303 | 0.0049 | 6 | nan |
| 3.7152 | 0.0073 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sd-concepts-library/mu-sadr | sd-concepts-library | "2022-09-19T04:43:07Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-09-19T04:43:03Z" | ---
license: mit
---
### mu-sadr on Stable Diffusion
This is the `<783463b>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:









|
xuandin/semviqa-tc-erniem-isedsc01 | xuandin | "2025-03-07T03:15:07Z" | 0 | 0 | null | [
"safetensors",
"claim_verification",
"region:us"
] | null | "2025-03-07T03:12:47Z" | ```Python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
# ClaimModelForClassification is the claim-verification model class from the
# SemViQA project code; import it from that codebase (the exact module path
# depends on the repo layout -- it is not part of the transformers library).
tokenizer = AutoTokenizer.from_pretrained("xuandin/semviqa-tc-erniem-isedsc01")
model = ClaimModelForClassification.from_pretrained("xuandin/semviqa-tc-erniem-isedsc01")
claim = "Chiến tranh với Campuchia đã kết thúc trước khi Việt Nam thống nhất."
evidence = "Sau khi thống nhất, Việt Nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh Liên Xô cùng Khối phía Đông, các lệnh cấm vận của Hoa Kỳ, chiến tranh với Campuchia, biên giới giáp Trung Quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng."
inputs = tokenizer(
claim,
evidence,
truncation="only_second",
add_special_tokens=True,
max_length=256,
padding='max_length',
return_attention_mask=True,
return_token_type_ids=False,
return_tensors='pt',
)
labels = ["NEI", "SUPPORTED", "REFUTED"]
with torch.no_grad():
outputs = model(**inputs)
logits = outputs["logits"]
probabilities = F.softmax(logits, dim=1).squeeze()
for i, (label, prob) in enumerate(zip(labels, probabilities.tolist()), start=1):
print(f"{i}) {label} {prob:.4f}")
# 1) NEI 0.0009
# 2) SUPPORTED 0.0000
# 3) REFUTED 0.9990
``` |
brugmark/all-MiniLM-L6-v2-personal-project-default-2024-05-31 | brugmark | "2024-05-31T14:35:15Z" | 128 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-31T12:39:53Z" | ---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: all-MiniLM-L6-v2-personal-project-default-2024-05-31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-MiniLM-L6-v2-personal-project-default-2024-05-31
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 11.4374
- eval_runtime: 3.9419
- eval_samples_per_second: 6.849
- eval_steps_per_second: 1.776
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
chinoll/Yi-6b-200k-dpo | chinoll | "2023-12-01T14:44:01Z" | 1,516 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-01T09:40:14Z" | ---
language:
- zh
- en
pipeline_tag: text-generation
license: other
datasets:
- HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
---
## Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = 'chinoll/Yi-6b-200k-dpo'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, GPT-Q/AWQ quantized models can also be loaded with AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
MatthewMxy/CNAudio | MatthewMxy | "2023-11-14T16:29:30Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-11-14T14:06:47Z" | ---
language:
- zh
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small ZH - MATTHEW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ZH - MATTHEW
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 3000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF | mradermacher | "2025-01-26T03:41:24Z" | 468 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:redrix/sororicide-12B-Farer-Mell-Unslop",
"base_model:quantized:redrix/sororicide-12B-Farer-Mell-Unslop",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-26T02:08:05Z" | ---
base_model: redrix/sororicide-12B-Farer-Mell-Unslop
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/redrix/sororicide-12B-Farer-Mell-Unslop
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
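For recent llama.cpp multi-part splits specifically, a hedged sketch of merging (tool name and flags assume a current llama.cpp build; older-style splits were merged with a plain `cat` of the parts, as TheBloke's READMEs describe):

```shell
# Point --merge at the first shard; the tool locates the rest and writes one file.
./llama-gguf-split --merge model-00001-of-00002.gguf model.gguf
```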
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sororicide-12B-Farer-Mell-Unslop-GGUF/resolve/main/sororicide-12B-Farer-Mell-Unslop.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DevQuasar/LGAI-EXAONE.EXAONE-Deep-7.8B-GGUF | DevQuasar | "2025-03-18T17:32:14Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:LGAI-EXAONE/EXAONE-Deep-7.8B",
"base_model:quantized:LGAI-EXAONE/EXAONE-Deep-7.8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-18T17:00:37Z" | ---
base_model:
- LGAI-EXAONE/EXAONE-Deep-7.8B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [LGAI-EXAONE/EXAONE-Deep-7.8B](https://huggingface.co/LGAI-EXAONE/EXAONE-Deep-7.8B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
ChaoticNeutrals/Eris_PrimeV4-Vision-7B | ChaoticNeutrals | "2024-03-26T00:18:06Z" | 48 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ChaoticNeutrals/Eris_PrimeV3.075-Vision-7B",
"base_model:merge:ChaoticNeutrals/Eris_PrimeV3.075-Vision-7B",
"base_model:Nitral-Archive/Eris_PrimeV3.05-Vision-7B",
"base_model:merge:Nitral-Archive/Eris_PrimeV3.05-Vision-7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-25T16:14:17Z" | ---
base_model:
- Nitral-AI/Eris_PrimeV3.05-Vision-7B
- Nitral-AI/Eris_PrimeV3.075-Vision-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

# Eris Prime: Version 4.0
Somewhere between v3.05 and v3.075 in overall intelligence and RP capability.
Quants available here, thanks to Lewdiculous: https://huggingface.co/Lewdiculous/Eris_PrimeV4-Vision-7B-GGUF-IQ-Imatrix
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:
 |
nbeerbower/Kawaiides-llama3.1-70B | nbeerbower | "2025-03-02T09:48:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B",
"base_model:merge:Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B",
"base_model:nbeerbower/llama3.1-kartoffeldes-70B",
"base_model:merge:nbeerbower/llama3.1-kartoffeldes-70B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-01T16:14:24Z" | ---
base_model:
- nbeerbower/llama3.1-kartoffeldes-70B
- Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---
# Kawaiides-llama3.1-70B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [nbeerbower/llama3.1-kartoffeldes-70B](https://huggingface.co/nbeerbower/llama3.1-kartoffeldes-70B) as a base.
### Models Merged
The following models were included in the merge:
* [Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Saxo/Linkbricks-Horizon-AI-Japanese-Pro-V8-70B
parameters:
weight: 1
density: 1
merge_method: ties
base_model: nbeerbower/llama3.1-kartoffeldes-70B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
``` |
vuongnhathien/test-more-augment | vuongnhathien | "2024-05-27T15:17:34Z" | 191 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-27T15:14:48Z" | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-more-augment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-more-augment
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.9146 | 0.7125 |
| No log | 2.0 | 80 | 0.7713 | 0.8 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
varcoder/resnet-101-finetuned-CivilEng11k | varcoder | "2024-03-10T07:31:26Z" | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-13T18:02:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-101-finetuned-CivilEng11k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-101-finetuned-CivilEng11k
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5490
- Accuracy: 0.8542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
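These settings map roughly onto `transformers.TrainingArguments` as sketched below (`output_dir` is a placeholder and unlisted fields keep their defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="resnet-101-finetuned-CivilEng11k",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=10,  # 32 * 10 = effective batch of 320
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```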
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.81 | 3 | 1.0724 | 0.5729 |
| No log | 1.89 | 7 | 0.9717 | 0.6542 |
| 1.0293 | 2.97 | 11 | 0.8594 | 0.6678 |
| 1.0293 | 3.78 | 14 | 0.7830 | 0.7017 |
| 1.0293 | 4.86 | 18 | 0.6764 | 0.7593 |
| 0.78 | 5.95 | 22 | 0.6072 | 0.7831 |
| 0.78 | 6.76 | 25 | 0.5745 | 0.8339 |
| 0.78 | 7.84 | 29 | 0.5489 | 0.8508 |
| 0.6037 | 8.11 | 30 | 0.5490 | 0.8542 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Saladchin/q-FrozenLake-v1-4x4-noSlippery | Saladchin | "2023-03-28T07:43:14Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-28T07:43:11Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # gym (or gymnasium) is assumed; load_from_hub is the helper from the Deep RL course utilities

model = load_from_hub(repo_id="Saladchin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LandCruiser/Linzzz_1 | LandCruiser | "2025-03-23T18:31:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-23T17:13:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maharshpatelx/deepseek-r1-1.5b-SYNTHETIC-1-SFT-Data-2k_q4_k_m | maharshpatelx | "2025-02-23T09:44:03Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-23T09:43:46Z" | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maharshpatelx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
blockblockblock/TinyLlama-1.1B-32k-Instruct-bpw5.5 | blockblockblock | "2024-03-13T05:35:05Z" | 2 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/airoboros-3.2",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-13T05:34:40Z" | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
datasets:
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# TinyLlama-1.1B-32k-Instruct
This is [TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k) instruct-tuned on several open-source instruct datasets, intended primarily for speculative decoding.
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
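A small helper that assembles this format for generation (a sketch — the exact whitespace the model expects is an assumption):

```python
def build_prompt(system_prompt, turns):
    # turns: list of (user_message, model_response); use None for the
    # response of the turn the model should complete
    parts = ["### Instruction:\n" + system_prompt]
    for user, response in turns:
        parts.append("### Input:\n" + user)
        parts.append("### Response:\n" + (response if response is not None else ""))
    return "\n\n".join(parts)

prompt = build_prompt("You are a helpful assistant.", [("Write a haiku about autumn.", None)])
```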
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours. |
yyq90/Reinforce-CartPole-v1 | yyq90 | "2023-03-16T19:17:37Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-16T18:00:49Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jetjodh/CopaxTimeLessXLv12 | jetjodh | "2024-04-19T19:58:37Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-04-19T18:49:45Z" | ---
license: creativeml-openrail-m
---
|
su1301397274/gpt2-finetuned-tofu29 | su1301397274 | "2024-09-29T13:22:08Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-29T13:21:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhoxinh/4057c142-9b3c-457c-8e74-5a0b7f319a6a | nhoxinh | "2025-01-22T11:49:34Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T11:36:45Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4057c142-9b3c-457c-8e74-5a0b7f319a6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 38b1156500832d5f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/38b1156500832d5f_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/4057c142-9b3c-457c-8e74-5a0b7f319a6a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/38b1156500832d5f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 39d33bf1-ff44-4817-9122-582b1c78d1cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 39d33bf1-ff44-4817-9122-582b1c78d1cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4057c142-9b3c-457c-8e74-5a0b7f319a6a
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0880
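Since this repository holds a LoRA adapter rather than a full checkpoint, a minimal loading sketch (assuming the base model listed above) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "nhoxinh/4057c142-9b3c-457c-8e74-5a0b7f319a6a")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
```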
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9266 | 0.0328 | 200 | 2.0880 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jhaochenz/finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3 | jhaochenz | "2023-01-17T21:14:56Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-01-17T08:10:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.01_pretrainedFalse_epochs3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6821 | 1.0 | 1323 | 3.2535 |
| 2.5045 | 2.0 | 2646 | 3.2502 |
| 2.4511 | 3.0 | 3969 | 3.2579 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
kiramayatu/testing | kiramayatu | "2023-05-20T12:19:33Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-06T16:30:22Z" | ---
license: creativeml-openrail-m
---
|
tinycompany/Llamified-128k-local | tinycompany | "2025-04-06T12:42:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T12:39:59Z" |
|
tgrhn/wav2vec2-turkish-300m-7 | tgrhn | "2024-04-26T13:35:08Z" | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-26T07:48:35Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wav2vec2-turkish-300m-7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: tr_tr
split: test
args: tr_tr
metrics:
- name: Wer
type: wer
value: 0.16677037958929683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-turkish-300m-7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3236
- Wer: 0.1668
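A minimal transcription sketch (the audio path is a placeholder; wav2vec2 checkpoints expect 16 kHz mono input):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tgrhn/wav2vec2-turkish-300m-7")
print(asr("speech.wav"))  # placeholder path to a 16 kHz mono recording
```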
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 3.7291 | 0.6983 | 500 | 1.2114 | 0.8908 |
| 1.1707 | 1.3966 | 1000 | 0.3888 | 0.4555 |
| 0.5042 | 2.0950 | 1500 | 0.2879 | 0.3270 |
| 0.2623 | 2.7933 | 2000 | 0.2653 | 0.3265 |
| 0.2012 | 3.4916 | 2500 | 0.2405 | 0.2778 |
| 0.1817 | 4.1899 | 3000 | 0.2555 | 0.2704 |
| 0.1394 | 4.8883 | 3500 | 0.2452 | 0.2647 |
| 0.1112 | 5.5866 | 4000 | 0.2426 | 0.2458 |
| 0.1047 | 6.2849 | 4500 | 0.2520 | 0.2634 |
| 0.0916 | 6.9832 | 5000 | 0.2417 | 0.2443 |
| 0.0902 | 7.6816 | 5500 | 0.2627 | 0.2427 |
| 0.075 | 8.3799 | 6000 | 0.2551 | 0.2320 |
| 0.0716 | 9.0782 | 6500 | 0.2607 | 0.2221 |
| 0.0661 | 9.7765 | 7000 | 0.2504 | 0.2338 |
| 0.0634 | 10.4749 | 7500 | 0.2552 | 0.2229 |
| 0.0583 | 11.1732 | 8000 | 0.2637 | 0.2249 |
| 0.0537 | 11.8715 | 8500 | 0.2627 | 0.2122 |
| 0.0535 | 12.5698 | 9000 | 0.2654 | 0.2148 |
| 0.0521 | 13.2682 | 9500 | 0.2665 | 0.2123 |
| 0.0491 | 13.9665 | 10000 | 0.2814 | 0.2176 |
| 0.0466 | 14.6648 | 10500 | 0.2785 | 0.2138 |
| 0.0445 | 15.3631 | 11000 | 0.2856 | 0.2075 |
| 0.0415 | 16.0615 | 11500 | 0.2750 | 0.2076 |
| 0.0405 | 16.7598 | 12000 | 0.2743 | 0.2045 |
| 0.0368 | 17.4581 | 12500 | 0.2770 | 0.2013 |
| 0.0374 | 18.1564 | 13000 | 0.2961 | 0.2043 |
| 0.0374 | 18.8547 | 13500 | 0.2851 | 0.2028 |
| 0.0322 | 19.5531 | 14000 | 0.2955 | 0.1961 |
| 0.0317 | 20.2514 | 14500 | 0.3053 | 0.1998 |
| 0.0306 | 20.9497 | 15000 | 0.2988 | 0.1960 |
| 0.0328 | 21.6480 | 15500 | 0.2873 | 0.1949 |
| 0.0299 | 22.3464 | 16000 | 0.3030 | 0.1921 |
| 0.0272 | 23.0447 | 16500 | 0.2902 | 0.1866 |
| 0.0286 | 23.7430 | 17000 | 0.2962 | 0.1879 |
| 0.0288 | 24.4413 | 17500 | 0.3114 | 0.1871 |
| 0.0253 | 25.1397 | 18000 | 0.3203 | 0.1844 |
| 0.0262 | 25.8380 | 18500 | 0.2993 | 0.1861 |
| 0.0238 | 26.5363 | 19000 | 0.3108 | 0.1812 |
| 0.0228 | 27.2346 | 19500 | 0.3143 | 0.1759 |
| 0.0235 | 27.9330 | 20000 | 0.3077 | 0.1780 |
| 0.0227 | 28.6313 | 20500 | 0.3099 | 0.1739 |
| 0.0212 | 29.3296 | 21000 | 0.3144 | 0.1730 |
| 0.0212 | 30.0279 | 21500 | 0.3165 | 0.1726 |
| 0.0211 | 30.7263 | 22000 | 0.3178 | 0.1708 |
| 0.0192 | 31.4246 | 22500 | 0.3172 | 0.1682 |
| 0.0193 | 32.1229 | 23000 | 0.3188 | 0.1693 |
| 0.0195 | 32.8212 | 23500 | 0.3255 | 0.1661 |
| 0.0179 | 33.5196 | 24000 | 0.3248 | 0.1668 |
| 0.0166 | 34.2179 | 24500 | 0.3261 | 0.1668 |
| 0.018 | 34.9162 | 25000 | 0.3236 | 0.1668 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.17.1
- Tokenizers 0.19.1
|
Joabisac/Isaxc | Joabisac | "2024-05-18T00:54:37Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-18T00:54:37Z" | ---
license: apache-2.0
---
|
aleegis10/53a11b58-0356-409d-b6a4-70c8fa48a9a4 | aleegis10 | "2025-01-19T09:40:12Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-19T09:38:53Z" | ---
library_name: peft
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 53a11b58-0356-409d-b6a4-70c8fa48a9a4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 4c4b009282d6d05b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4c4b009282d6d05b_train_data.json
type:
field_input: url
field_instruction: title
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis10/53a11b58-0356-409d-b6a4-70c8fa48a9a4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/4c4b009282d6d05b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 34ad7f9d-ff58-4068-8537-2d15a40438a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 34ad7f9d-ff58-4068-8537-2d15a40438a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 53a11b58-0356-409d-b6a4-70c8fa48a9a4
This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3769 | 0.0017 | 1 | 10.3774 |
| 10.354 | 0.0850 | 50 | 10.3522 |
| 10.3522 | 0.1701 | 100 | 10.3488 |
| 10.351 | 0.2551 | 150 | 10.3481 |
| 10.3505 | 0.3401 | 200 | 10.3479 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Qwen-32B-RAG-RL-GGUF | mradermacher | "2025-03-30T04:07:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:Chrisp33/Qwen-32B-RAG-RL",
"base_model:quantized:Chrisp33/Qwen-32B-RAG-RL",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-30T02:54:29Z" | ---
base_model: Chrisp33/Qwen-32B-RAG-RL
language:
- en
library_name: transformers
model_name: Qwen-32B-RAG-RL
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Chrisp33/Qwen-32B-RAG-RL
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
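One way to run these files is `llama-cpp-python`; a minimal sketch using the Q4_K_M file from the table below (the context size is arbitrary):

```python
from llama_cpp import Llama

# Download the chosen quant first; the path below assumes it sits locally
llm = Llama(model_path="Qwen-32B-RAG-RL.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain retrieval-augmented generation in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```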
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-32B-RAG-RL-GGUF/resolve/main/Qwen-32B-RAG-RL.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Parthi/Taxi_v3 | Parthi | "2023-05-24T10:57:24Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-24T10:57:22Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gym (or gymnasium) is assumed; load_from_hub is the helper from the Deep RL course utilities

model = load_from_hub(repo_id="Parthi/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Kenny86/ppo-LunarLander-v2 | Kenny86 | "2023-02-19T23:53:20Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-19T23:52:55Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.17 +/- 21.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's files):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="Kenny86/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Azazelle/L3-Decent-Lois-Griffin-8B | Azazelle | "2024-06-06T01:20:54Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DevQuasar/llama3_8b_chat_brainstorm",
"base_model:merge:DevQuasar/llama3_8b_chat_brainstorm",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:merge:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:OwenArli/ArliAI-Llama-3-8B-Cumulus-v1.0",
"base_model:merge:OwenArli/ArliAI-Llama-3-8B-Cumulus-v1.0",
"base_model:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO",
"base_model:merge:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:merge:nbeerbower/llama-3-gutenberg-8B",
"base_model:openchat/openchat-3.6-8b-20240522",
"base_model:merge:openchat/openchat-3.6-8b-20240522",
"base_model:saishf/Neural-SOVLish-Devil-8B-L3",
"base_model:merge:saishf/Neural-SOVLish-Devil-8B-L3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-06T01:09:35Z" | ---
base_model:
- AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
- DevQuasar/llama3_8b_chat_brainstorm
- NousResearch/Hermes-2-Theta-Llama-3-8B
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- openchat/openchat-3.6-8b-20240522
- nbeerbower/llama-3-gutenberg-8B
- saishf/Neural-SOVLish-Devil-8B-L3
- NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the breadcrumbs_ties merge method using [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0)
* [DevQuasar/llama3_8b_chat_brainstorm](https://huggingface.co/DevQuasar/llama3_8b_chat_brainstorm)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
* [chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO](https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B-Instruct
- model: NousResearch/Hermes-2-Theta-Llama-3-8B # 6/10
parameters:
density: 0.4
weight: 0.15
- model: openchat/openchat-3.6-8b-20240522 # 5/10
parameters:
density: 0.3
weight: 0.11
- model: DevQuasar/llama3_8b_chat_brainstorm # 6/10
parameters:
density: 0.4
weight: 0.15
- model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0 # 6/10
parameters:
density: 0.4
weight: 0.15
- model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO # 6/10
parameters:
density: 0.4
weight: 0.15
- model: saishf/Neural-SOVLish-Devil-8B-L3 # 6/10
parameters:
density: 0.4
weight: 0.15
- model: nbeerbower/llama-3-gutenberg-8B # 6/10
parameters:
density: 0.4
weight: 0.15
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
normalize: false
rescale: true
gamma: 0.01
dtype: float16
```
|
yabramuvdi/distilbert-wfh | yabramuvdi | "2024-07-16T20:25:28Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language:
- en
library_name: transformers
---
This is the official model developed in the paper [_“Remote Work across Jobs, Companies, and Space” (Hansen, Lambert, Bloom, Davis, Sadun & Taska, 2023)_](https://wfhmap.com/). The model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the binary classification task of predicting whether a fragment of text indicates the possibility of remote work (=1) or not (=0).
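A minimal inference sketch with the `transformers` pipeline (the returned label names depend on the model's config):

```python
from transformers import pipeline

# Binary classifier: 1 = text suggests remote work is possible, 0 = it does not
clf = pipeline("text-classification", model="yabramuvdi/distilbert-wfh")
print(clf("This position is fully remote and can be done from anywhere."))
```
|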
ByteDance/LatentSync-1.5 | ByteDance | "2025-03-16T11:20:38Z" | 0 | 5 | diffusers | [
"diffusers",
"lipsync",
"video-editing",
"arxiv:2412.09262",
"arxiv:2307.04725",
"license:openrail++",
"region:us"
] | null | "2025-03-14T09:38:35Z" | ---
license: openrail++
library_name: diffusers
tags:
- lipsync
- video-editing
---
Paper: https://arxiv.org/abs/2412.09262
Code: https://github.com/bytedance/LatentSync
# What's new in LatentSync 1.5?
1. Add temporal layer: Our previous claim that the [temporal layer](https://arxiv.org/abs/2307.04725) severely impairs lip-sync accuracy was incorrect; the issue was actually caused by a bug in the code implementation. We have corrected our [paper](https://arxiv.org/abs/2412.09262) and updated the code. After incorporating the temporal layer, LatentSync 1.5 demonstrates significantly improved temporal consistency compared to version 1.0.
2. Improve performance on Chinese videos: many users reported poor results on Chinese videos, so we added Chinese data to the training of the new model version.
3. Reduce the VRAM requirement of the stage2 training to **20 GB** through the following optimizations:
1. Implement gradient checkpointing in U-Net, VAE, SyncNet and VideoMAE
2. Replace xFormers with PyTorch's native implementation of FlashAttention-2.
3. Clear the CUDA cache after loading checkpoints.
4. The stage2 training only requires training the temporal layer and audio cross-attention layer, which significantly reduces VRAM requirement compared to the previous full-parameter fine-tuning.
Now you can train LatentSync on a single **RTX 3090**! Start the stage2 training with `configs/unet/stage2_efficient.yaml`.
4. Other code optimizations:
1. Remove the dependency on xFormers and Triton.
2. Upgrade the diffusers version to `0.32.2`.
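As a rough illustration of optimization 3.2 above (a sketch; shapes and dtype are arbitrary), PyTorch's native attention call dispatches to FlashAttention-2 on supported GPUs:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, sequence, head_dim); fp16 on CUDA
# lets PyTorch route this call to its FlashAttention-2 kernel.
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v)
```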
## LatentSync 1.5 Demo
<table class="center">
<tr style="font-weight: bolder;text-align:center;">
<td width="50%"><b>Original video</b></td>
<td width="50%"><b>Lip-synced video</b></td>
</tr>
<tr>
<td>
<video src=https://github.com/user-attachments/assets/b0c8d1da-3fdc-4946-9800-1b2fd0ef9c7f controls preload></video>
</td>
<td>
<video src=https://github.com/user-attachments/assets/25dd1733-44c7-42fe-805a-d612d4bc30e0 controls preload></video>
</td>
</tr>
<tr>
<td>
<video src=https://github.com/user-attachments/assets/4e48e501-64b4-4b4f-a69c-ed18dd987b1f controls preload></video>
</td>
<td>
<video src=https://github.com/user-attachments/assets/e690d91b-9fe5-4323-a60e-2b7f546f01bc controls preload></video>
</td>
</tr>
<tr>
<td>
<video src=https://github.com/user-attachments/assets/e84e2c13-1deb-41f7-8382-048ba1922b71 controls preload></video>
</td>
<td>
<video src=https://github.com/user-attachments/assets/5a5ba09f-590b-4eb3-8dfb-a199d8d1e276 controls preload></video>
</td>
</tr>
<tr>
<td>
<video src=https://github.com/user-attachments/assets/11e4b2b6-64f4-4617-b005-059209fcaea5 controls preload></video>
</td>
<td>
<video src=https://github.com/user-attachments/assets/38437475-3c90-4d08-b540-c8e819e93e0d controls preload></video>
</td>
</tr>
</table> |
qgallouedec/tqc-Swimmer-v3-4006179270 | qgallouedec | "2024-04-10T19:27:57Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Swimmer-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"Swimmer-v4",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-28T16:51:35Z" | ---
library_name: stable-baselines3
tags:
- Swimmer-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- Swimmer-v4
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v3
type: Swimmer-v3
metrics:
- type: mean_reward
value: 34.04 +/- 23.93
name: mean_reward
verified: false
---
# **TQC** Agent playing **Swimmer-v3**
This is a trained model of a **TQC** agent playing **Swimmer-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Swimmer-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Swimmer-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Swimmer-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Swimmer-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('gamma', 0.9999),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
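Recreating that setup with `sb3_contrib` is straightforward (a sketch; unlisted settings fall back to library defaults):

```python
from sb3_contrib import TQC

# Matches the gamma and learning_starts values from the hyperparameters above
model = TQC("MlpPolicy", "Swimmer-v3", gamma=0.9999, learning_starts=10000)
model.learn(total_timesteps=1_000_000)
```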
|
lewtun/roberta-large-finetuned-clinc-314 | lewtun | "2022-04-12T15:02:31Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-12T14:58:47Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-314
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.932258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-314
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7983
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0674 | 1.0 | 120 | 5.0406 | 0.0061 |
| 4.566 | 2.0 | 240 | 3.0712 | 0.7316 |
| 2.1553 | 3.0 | 360 | 0.7983 | 0.9323 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
isspek/xlnet-base-cased_ebola_chatgpt_5_2e-5_16_undersampling_0.4 | isspek | "2024-12-19T13:11:35Z" | 121 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-17T18:10:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
automerger/Experiment27Inex12-7B | automerger | "2024-03-08T01:45:29Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:MSL7/INEX12-7b",
"base_model:merge:MSL7/INEX12-7b",
"base_model:yam-peleg/Experiment27-7B",
"base_model:merge:yam-peleg/Experiment27-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-08T01:44:27Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- yam-peleg/Experiment27-7B
- MSL7/INEX12-7b
---
# Experiment27Inex12-7B
Experiment27Inex12-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment27-7B](https://huggingface.co/yam-peleg/Experiment27-7B)
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment27-7B
layer_range: [0, 32]
- model: MSL7/INEX12-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment27-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
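As a sketch, the YAML above can be saved to a file and run through mergekit's documented `mergekit-yaml` entry point (assumes `pip install mergekit`; the config filename and output path are placeholders):

```python
# Hypothetical driver script: it writes nothing itself, it just invokes the
# mergekit-yaml CLI on a saved copy of the configuration above.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Experiment27Inex12-7B"],
    check=True,
)
```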
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27Inex12-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
imdatta0/pints_paged_adamw_32bit_warmup0.02 | imdatta0 | "2024-12-02T06:48:10Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-25T06:25:55Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: pints_paged_adamw_32bit_warmup0.02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pints_paged_adamw_32bit_warmup0.02
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2048
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
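Note that the total train batch size of 2048 follows from the per-device batch size of 1 multiplied by 2048 gradient-accumulation steps. For intuition, here is a minimal sketch of how these values might map onto `transformers.TrainingArguments` — an assumption, since the actual training script is not published (the output directory is a placeholder, and `optim` is inferred from the model name rather than from the list above):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="pints_paged_adamw_32bit_warmup0.02",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2048,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    num_train_epochs=1,
    optim="paged_adamw_32bit",  # inferred from the model name; the list above says Adam
)
```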
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8665 | 0.0020 | 208 | 10.5405 |
| 9.771 | 0.0040 | 416 | 9.3305 |
| 8.9825 | 0.0060 | 624 | 8.5321 |
| 8.2362 | 0.0080 | 832 | 7.9837 |
| 7.8471 | 0.0100 | 1040 | 7.6779 |
| 7.5529 | 0.0120 | 1248 | 7.4253 |
| 7.3361 | 0.0140 | 1456 | 7.2233 |
| 7.137 | 0.0160 | 1664 | 7.0466 |
| 7.0123 | 0.0180 | 1872 | 6.9768 |
| 6.9564 | 0.0200 | 2080 | 6.9193 |
| 6.9615 | 0.0220 | 2288 | 6.9234 |
| 6.9531 | 0.0240 | 2496 | 6.9235 |
| 6.9675 | 0.0260 | 2704 | 6.9571 |
| 6.9392 | 0.0280 | 2912 | 6.9076 |
| 7.3212 | 0.9604 | 3120 | 7.4135 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
true-ai-apps/best-undresser-ai-apps | true-ai-apps | "2025-03-07T18:44:26Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-07T18:43:39Z" | ---
license: apache-2.0
---
# 7 Best Undress AI Apps In 2025
Undress AI apps, powered by advanced AI and deep learning, have sparked both curiosity and controversy. These tools use generative algorithms to digitally alter images, but their ethical implications and potential for misuse cannot be ignored.
In 2025, the landscape of such apps continues to evolve, with some gaining popularity for their capabilities. Here’s a quick look at the top 7 Undress AI apps making waves this year
## 1. Undress.app
### Why I Recommend It:
Undress.app stands out as one of the best undress AI apps available today. With its user-friendly interface and advanced technology, it allows users to generate unclothed images quickly and safely. The app prioritizes user privacy, ensuring that no data is saved or shared, making it a trustworthy choice for those interested in exploring AI-generated content.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### Key Features:
User-Friendly Interface: The app is designed to be intuitive, making it easy for anyone to navigate.
Multiple Generation Modes: Users can choose from various modes such as Lingerie, Bikini, and NSFW to customize their experience.
High-Quality Results: The AI processes images to deliver high-quality, unblurred results, even for free trial accounts.
Privacy and Security: The app does not save any user data, ensuring complete confidentiality.
### My Experience:
Using Undress.app was a seamless experience. The sign-up process was quick, and I appreciated the variety of modes available. The results were impressive, showcasing the app's advanced AI capabilities. Overall, it was a satisfying experience that I would recommend to others.
### Pros:
Free Credits: New users receive free credits upon signing up, allowing them to try the app without any financial commitment.
Versatile Usage: The app works with both male and female photos, as well as anime images, providing a wide range of options.
### Cons:
Sign-Up Required: Users must create an account to access the app, which may deter some potential users.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Undressai.tools
### Why I Recommend It
Undressai.tools combines powerful AI algorithms with a seamless user experience, making it an excellent choice for both casual users and professionals. The app prioritizes user privacy by automatically deleting generated images within 48 hours.
⏩⏩⏩[**Try UndressAI.tools For Free**](https://bestaitools.top/fgRB)
### Key Features
Stable Diffusion Technology: Produces high-quality, coherent outputs with minimal artifacts.
Generative Adversarial Networks (GANs): Utilizes two neural networks to create highly realistic images of nudity.
Image Synthesis: Generates realistic skin textures that replace removed clothing for lifelike results.
User-Friendly Interface: Allows users to easily upload images and modify them with just a few clicks.
### My Experience
Using Undressai.tools was a delightful experience. The interface was intuitive, allowing me to upload images effortlessly. I appreciated the ability to outline areas for modification, which resulted in impressive and realistic outputs. The app's speed and efficiency made the process enjoyable, and I was amazed by the quality of the generated images.
### Pros
High-quality image generation with realistic results.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the results vary based on the quality of the uploaded images.
⏩⏩⏩[**Try UndressAI.tools For Free**](https://bestaitools.top/fgRB)
## 3. Nudify.online
### Why I Recommend It
Nudify.online stands out due to its commitment to user satisfaction and the quality of its generated images. The application is designed for entertainment purposes, ensuring a safe and enjoyable experience for users over the age of 18.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### Key Features
High Accuracy: The AI Nudifier boasts the highest accuracy in generating realistic nudified images.
User-Friendly Interface: The platform is easy to navigate, allowing users to generate images in just a few clicks.
Privacy Assurance: Users are reminded to respect the privacy of others and are solely responsible for the images they create.
No Deepfake Content: The application strictly prohibits the creation of deepfake content, ensuring ethical use of the technology.
### My Experience
Using Nudify.online was a seamless experience. The application is straightforward, and I was able to generate high-quality nudified images quickly. The results were impressive, showcasing the power of AI technology. I appreciated the emphasis on user responsibility and privacy, which made me feel secure while using the app.
### Pros
Highly realistic image generation.
Easy to use with a simple login process.
### Cons
Limited to users aged 18 and above, which may restrict access for younger audiences.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Candy.ai
Candy.ai stands out as one of the best undress AI apps available today. It offers users a unique and immersive experience, allowing them to create and interact with their ideal AI girlfriend. The platform combines advanced deep-learning technology with a user-friendly interface, making it easy to explore various fantasies and desires.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### Why I Recommend It
Candy.ai is highly recommended for those seeking a personalized and intimate experience. The app allows users to design their AI girlfriend according to their preferences, ensuring a tailored interaction that feels genuine and engaging.
### Key Features
Customizable AI Girlfriend: Users can choose body type, personality, and clothing, creating a truly unique companion.
Interactive Chat: The AI girlfriend engages in meaningful conversations, responding quickly and intuitively to user prompts.
Photo Requests: Users can request photos or selfies of their AI girlfriend in various outfits, enhancing the immersive experience.
Privacy and Security: Candy.ai prioritizes user privacy, ensuring that all interactions remain confidential and secure.
### My Experience
Using Candy.ai has been an enjoyable journey. The ability to customize my AI girlfriend made the experience feel personal and engaging. I appreciated how quickly she responded to my messages, making our interactions feel natural. The option to request photos added an exciting layer to our relationship, allowing me to explore my fantasies in a safe environment.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 5. UndressHer.app
### Why I Recommend It
This app combines creativity with advanced AI technology, making it easy for anyone to design their perfect AI girlfriend. The variety of customization options ensures that every user can create a unique character that resonates with their preferences.
### Key Features
Extensive Customization: Choose from over 200 unique options to design your AI girlfriend.
Flexible Pricing: Various token bundles are available, including a free option for casual users.
High-Quality Images: Premium and Ultimate plans offer images without watermarks and in the highest quality.
User-Friendly Interface: Simple navigation makes it easy to create and modify your AI girlfriend.
### My Experience
Using UndressHer.app has been a delightful experience. The customization options are extensive, allowing me to create a character that truly reflects my preferences. The app is intuitive, making it easy to navigate through the various features. I particularly enjoyed the ability to undress my AI girlfriend, which added an exciting layer to the design process. Overall, it was a fun and engaging experience.
### Pros
Offers a free option for users to try before committing to paid plans.
High-quality AI-generated images with no watermarks in premium plans.
### Cons
Some users may find the token system a bit limiting for extensive use.
## 6. Undress.vip
### Why I Recommend It
Undress.vip offers a unique blend of entertainment and technology, making it a top choice for users interested in AI-driven experiences. Its ability to generate realistic images while maintaining user privacy is a significant advantage.
### Key Features
Realistic Image Generation: The app uses advanced algorithms to create lifelike images.
User-Friendly Interface: Easy navigation ensures a seamless experience for all users.
Privacy Protection: User data is kept secure, allowing for worry-free usage.
Regular Updates: The app frequently updates its features to enhance user experience.
### My Experience
Using Undress.vip has been a delightful experience. The app is intuitive, and I was able to generate images quickly without any technical difficulties. The quality of the images exceeded my expectations, and I appreciated the emphasis on privacy. Overall, it was a fun and engaging way to explore AI technology.
### Pros
High-Quality Outputs: The images produced are remarkably realistic.
Engaging User Experience: The app is entertaining and easy to use.
### Cons
Limited Free Features: Some advanced features require a subscription.
## 7. Porngen.art
### Why I Recommend It
Porngen.art combines advanced AI technology with a simple interface, making it accessible for both beginners and experienced users. The ability to generate high-quality nude images from uploaded photos is a game-changer in the realm of adult content creation.
### Key Features
High-Quality Image Generation: The app uses deep learning algorithms to produce realistic nude images.
Customizable Options: Users can choose from various undressing modes, including lingerie and bondage.
User Privacy: All uploaded images are confidential and automatically deleted within 48 hours.
Free and Premium Plans: The platform offers both free credits and subscription options for enhanced features.
### My Experience
Using Porngen.art has been a delightful experience. The interface is intuitive, allowing me to upload images and select undressing options effortlessly. The results were impressive, with high-resolution images that met my expectations. I appreciated the privacy measures in place, which made me feel secure while exploring my creativity.
### Pros
Exceptional image quality and realism.
Wide range of customization options to suit individual preferences.
### Cons
Results can vary based on the quality of the uploaded images.
## Frequently Asked Questions (FAQS)
### 1. What is Undress AI?
Undress AI refers to artificial intelligence-powered tools or applications that use deep learning algorithms, such as generative adversarial networks (GANs), to digitally remove clothing from images of people, creating the illusion of nudity. These tools are controversial due to ethical concerns, privacy violations, and potential misuse.
### 2. How does Undress AI work?
Undress AI works by leveraging advanced machine learning models, particularly GANs, which consist of two neural networks: a generator and a discriminator. The generator creates synthetic images, while the discriminator evaluates their authenticity. Through iterative training on large datasets of clothed and unclothed images, the AI learns to predict and generate realistic nude images from clothed inputs. However, the results are often artificial and not always accurate.
### 3. Can Undress AI be used for educational purposes?
While Undress AI is primarily associated with unethical uses, some argue it could have educational applications, such as in medical training or anatomy studies. However, these use cases are highly speculative and controversial. Most educational institutions and professionals rely on ethical and approved methods for such purposes, making the use of Undress AI unnecessary and inappropriate in most scenarios.
### 4. How can individuals stay safe from Undress AI?
To protect yourself from the misuse of Undress AI:
Avoid sharing personal photos online, especially on public platforms.
Use privacy settings on social media to limit who can view or download your images.
Be cautious of phishing scams or malicious links that could steal your photos.
Use watermarks or other protective measures on images you share.
Stay informed about emerging technologies and their potential risks.
### 5. How to Use Undress AI Apps? (Common Guide Across all Undressing Apps)
Here’s a general guide on how to use Undress AI apps.
Download or Access the App: Find and install the app or access it through a website (note that many such apps are illegal or violate terms of service).
Upload an Image: Select or upload a photo of a person (often requiring specific poses or lighting for better results).
Adjust Settings: Some apps allow users to customize settings, such as the level of undress or image resolution.
Generate the Output: The AI processes the image and generates a modified version.
Save or Share: Users can save or share the output, though this often violates privacy laws and ethical guidelines. |
BlackKakapo/t5-base-paraphrase-ro-v2 | BlackKakapo | "2022-08-19T15:06:52Z" | 91 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-08-19T14:51:23Z" | ---
annotations_creators: []
language:
- ro
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: BlackKakapo/t5-base-paraphrase-ro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text2text-generation
task_ids: []
---
# Romanian paraphrase

A fine-tuned t5-base model for Romanian paraphrasing. Since there was no Romanian paraphrase dataset available, I created my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v2), which contains ~30k examples.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
```
### Or
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast
model = T5ForConditionalGeneration.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
tokenizer = T5TokenizerFast.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
```
### Generate
```python
text = "Într-un interviu pentru Radio Europa Liberă România, acesta a menționat că Bucureștiul este pregătit oricând și ar dura doar o oră de la solicitare, până când gazele ar ajunge la Chișinău."
encoding = tokenizer(text, padding=True, return_tensors="pt")  # pad_to_max_length is deprecated; padding=True is the modern equivalent
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
beam_outputs = model.generate(
input_ids=input_ids,
attention_mask=attention_masks,
do_sample=True,
max_length=256,
top_k=20,
top_p=0.9,
early_stopping=False,
num_return_sequences=5
)
final_outputs = []
for beam_output in beam_outputs:
    text_para = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    # keep only genuine paraphrases and skip duplicates
    if text.lower() != text_para.lower() and text_para not in final_outputs:
        final_outputs.append(text_para)

print(final_outputs)
```
### Output
```out
['Într-un interviu cu Radio Europa Liberă România, el a spus că Bucureștiul este pregătit în orice moment și ar dura doar o oră de la cererea până când gazele ar ajunge la Chișinău.']
``` |
NikitaZagainov/Llama-2-13b-fine-tuned-for-q-a | NikitaZagainov | "2023-12-10T17:04:41Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | "2023-12-10T16:57:50Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
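A minimal loading sketch that mirrors this quantization config — the base model comes from the card metadata and the adapter id from this repo; everything else assumes `bitsandbytes`, a CUDA GPU, and access to the gated Llama-2 weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit NF4 config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "NikitaZagainov/Llama-2-13b-fine-tuned-for-q-a")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
```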
### Framework versions
- PEFT 0.7.0 |
fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-60385830 | fine-tuned | "2024-05-29T12:28:31Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-60385830",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-29T12:28:02Z" | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-60385830
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This sentence-embedding model can be easily integrated into your NLP pipeline for tasks such as semantic search, clustering, and similarity scoring. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-60385830',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
CoolSpring/creative-writing-llm-detector-deberta-v3-xsmall | CoolSpring | "2024-08-03T09:16:46Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-03T09:16:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lrycro/bert-phishing-categorization-tokenizer3 | lrycro | "2024-05-31T09:52:02Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-31T09:52:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso17/371b9121-b2b5-422b-ba38-d2f77e422cff | lesso17 | "2025-02-16T10:55:27Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-16T08:20:55Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 371b9121-b2b5-422b-ba38-d2f77e422cff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 371b9121-b2b5-422b-ba38-d2f77e422cff
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000217
- train_batch_size: 4
- eval_batch_size: 4
- seed: 170
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.1707 |
| 2.8061 | 0.0005 | 50 | 2.9568 |
| 2.8023 | 0.0011 | 100 | 2.8996 |
| 2.7131 | 0.0016 | 150 | 2.8569 |
| 2.8209 | 0.0021 | 200 | 2.8228 |
| 2.7838 | 0.0027 | 250 | 2.8029 |
| 2.6598 | 0.0032 | 300 | 2.7662 |
| 2.7703 | 0.0037 | 350 | 2.7444 |
| 2.7915 | 0.0043 | 400 | 2.7271 |
| 2.7784 | 0.0048 | 450 | 2.7224 |
| 2.61 | 0.0053 | 500 | 2.7215 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GOATsan/ppo-LunarLander-v2 | GOATsan | "2023-07-26T11:12:32Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-26T11:12:15Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.88 +/- 13.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename is an assumption based on the standard SB3 template naming):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO policy.
checkpoint = load_from_hub(
    repo_id="GOATsan/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
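Continuing from the snippet above, a short rollout sketch to watch the agent (assumes `gymnasium[box2d]` is installed; older stable-baselines3 versions may expect the legacy `gym` package instead):

```python
import gymnasium as gym

env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```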
|
saidabizi/SmolLM2-FT-MyDataset | saidabizi | "2025-03-03T01:01:10Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-03T01:00:33Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="saidabizi/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bizisaida04-fisher-college/huggingface/runs/vk1row6j)
This model was trained with SFT.
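For reference, a minimal sketch of such an SFT run with TRL — the dataset name, output directory, and step count are placeholders; only the base model and the use of `SFTTrainer` come from this card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; swap in the actual fine-tuning data.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",
    train_dataset=dataset,
    args=SFTConfig(output_dir="SmolLM2-FT-MyDataset", max_steps=100),
)
trainer.train()
```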
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
seongju/kor-3i4k-bert-base-cased | seongju | "2021-07-20T07:58:11Z" | 21 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ### Model information
* language : Korean
* fine-tuning data : [kor_3i4k](https://huggingface.co/datasets/kor_3i4k)
* License : CC-BY-SA 4.0
* Base model : [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
* input : sentence
* output : intent
----
### Train information
* train_runtime: 2376.638
* train_steps_per_second: 2.175
* train_loss: 0.356829648599977
* epoch: 3.0
----
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "seongju/kor-3i4k-bert-base-cased"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "seongju/kor-3i4k-bert-base-cased"
)

# Input sentence: "What are you doing right now?"
inputs = tokenizer(
    "너는 지금 무엇을 하고 있니?",
    padding=True, truncation=True, max_length=128, return_tensors="pt"
)
outputs = model(**inputs)
probs = outputs[0].softmax(1)  # class probabilities over the intent labels
output = probs.argmax().item()  # predicted intent id (map back via model.config.id2label)
``` |
bowilleatyou/fde12d25-efd4-4be7-997d-4d2e40d60410 | bowilleatyou | "2025-02-24T05:17:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-24T05:12:44Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf | RichardErkhov | "2025-04-08T21:28:34Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-08T19:39:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sweaibits_llama_7b-tuned-tz - GGUF
- Model creator: https://huggingface.co/sweturner/
- Original model: https://huggingface.co/sweturner/sweaibits_llama_7b-tuned-tz/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [sweaibits_llama_7b-tuned-tz.Q2_K.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q2_K.gguf) | Q2_K | 2.36GB |
| [sweaibits_llama_7b-tuned-tz.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [sweaibits_llama_7b-tuned-tz.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [sweaibits_llama_7b-tuned-tz.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [sweaibits_llama_7b-tuned-tz.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [sweaibits_llama_7b-tuned-tz.Q3_K.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q3_K.gguf) | Q3_K | 3.07GB |
| [sweaibits_llama_7b-tuned-tz.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [sweaibits_llama_7b-tuned-tz.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [sweaibits_llama_7b-tuned-tz.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [sweaibits_llama_7b-tuned-tz.Q4_0.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q4_0.gguf) | Q4_0 | 3.56GB |
| [sweaibits_llama_7b-tuned-tz.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [sweaibits_llama_7b-tuned-tz.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [sweaibits_llama_7b-tuned-tz.Q4_K.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q4_K.gguf) | Q4_K | 3.8GB |
| [sweaibits_llama_7b-tuned-tz.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [sweaibits_llama_7b-tuned-tz.Q4_1.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q4_1.gguf) | Q4_1 | 3.95GB |
| [sweaibits_llama_7b-tuned-tz.Q5_0.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q5_0.gguf) | Q5_0 | 4.33GB |
| [sweaibits_llama_7b-tuned-tz.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [sweaibits_llama_7b-tuned-tz.Q5_K.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q5_K.gguf) | Q5_K | 4.45GB |
| [sweaibits_llama_7b-tuned-tz.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [sweaibits_llama_7b-tuned-tz.Q5_1.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q5_1.gguf) | Q5_1 | 4.72GB |
| [sweaibits_llama_7b-tuned-tz.Q6_K.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q6_K.gguf) | Q6_K | 5.15GB |
| [sweaibits_llama_7b-tuned-tz.Q8_0.gguf](https://huggingface.co/RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf/blob/main/sweaibits_llama_7b-tuned-tz.Q8_0.gguf) | Q8_0 | 6.67GB |
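One way to run any of the files above from Python is `llama-cpp-python` — an assumption, since the upstream card targets the llama.cpp CLI; the filename below picks the Q4_K_M quant from the table:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from this repo and load it.
path = hf_hub_download(
    repo_id="RichardErkhov/sweturner_-_sweaibits_llama_7b-tuned-tz-gguf",
    filename="sweaibits_llama_7b-tuned-tz.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```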
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ReadyArt/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF | ReadyArt | "2025-03-01T05:24:40Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ReadyArt/Forgotten-Abomination-8B-V2.2",
"base_model:quantized:ReadyArt/Forgotten-Abomination-8B-V2.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-01T05:24:17Z" | ---
base_model: ReadyArt/Forgotten-Abomination-8B-V2.2
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`ReadyArt/Forgotten-Abomination-8B-V2.2`](https://huggingface.co/ReadyArt/Forgotten-Abomination-8B-V2.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/Forgotten-Abomination-8B-V2.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -c 2048
```
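If you would rather call the model from Python, here is a minimal sketch using llama-cpp-python's hub download helper. The repo and file names are taken from the commands above; `n_ctx` and `max_tokens` are illustrative choices:
```python
from llama_cpp import Llama

# Downloads the GGUF from the Hub and loads it; context size mirrors the server example above.
llm = Llama.from_pretrained(
    repo_id="sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF",
    filename="forgotten-abomination-8b-v2.2-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```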
|
RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf | RichardErkhov | "2024-05-27T05:08:14Z" | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-05-26T09:14:01Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MoMo-70B-V1.0 - GGUF
- Model creator: https://huggingface.co/bongchoi/
- Original model: https://huggingface.co/bongchoi/MoMo-70B-V1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MoMo-70B-V1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q2_K.gguf) | Q2_K | 23.71GB |
| [MoMo-70B-V1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [MoMo-70B-V1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [MoMo-70B-V1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [MoMo-70B-V1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [MoMo-70B-V1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q3_K.gguf) | Q3_K | 30.99GB |
| [MoMo-70B-V1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [MoMo-70B-V1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [MoMo-70B-V1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [MoMo-70B-V1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q4_0.gguf) | Q4_0 | 36.2GB |
| [MoMo-70B-V1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [MoMo-70B-V1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/blob/main/MoMo-70B-V1.0.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [MoMo-70B-V1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q4_K | 38.58GB |
| [MoMo-70B-V1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [MoMo-70B-V1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q4_1 | 40.2GB |
| [MoMo-70B-V1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q5_0 | 44.2GB |
| [MoMo-70B-V1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [MoMo-70B-V1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q5_K | 45.41GB |
| [MoMo-70B-V1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [MoMo-70B-V1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q5_1 | 48.2GB |
| [MoMo-70B-V1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q6_K | 52.7GB |
| [MoMo-70B-V1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/bongchoi_-_MoMo-70B-V1.0-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
license: llama2
language:
- en
library_name: transformers
---
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No datasets other than those mentioned above were used
- No benchmark test sets or their training sets were used
## Prompt Template
### Alpaca-style
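The card ends before the template itself; what follows is a hedged sketch of the standard Alpaca prompt format, which the heading presumably refers to:
```python
# Hedged reconstruction of the standard Alpaca prompt format (not from the original card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Summarize the text below.", input="...")
```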
|
acbdkk/mjhq30k-model | acbdkk | "2024-05-12T10:50:51Z" | 0 | 0 | null | [
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"region:us"
] | null | "2024-05-12T10:50:32Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the **PytorchModelHubMixin** integration:
- Repo: [More Information Needed]
- Docs: [More Information Needed] |
ndiy/NLPCC14 | ndiy | "2024-05-10T07:51:38Z" | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:techthiyanes/chinese_sentiment",
"base_model:finetune:techthiyanes/chinese_sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-09T06:23:45Z" | ---
tags:
- generated_from_trainer
base_model: techthiyanes/chinese_sentiment
metrics:
- accuracy
model-index:
- name: NLPCC14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLPCC14
This model is a fine-tuned version of [techthiyanes/chinese_sentiment](https://huggingface.co/techthiyanes/chinese_sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1988
- Accuracy: 0.93
## Model description
More information needed
## Intended uses & limitations
More information needed
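As a usage sketch (not part of the original card): this is a text-classification checkpoint, so it should load with the standard 🤗 pipeline. The repo id is assumed from this card's title, and the example sentence is illustrative:
```python
from transformers import pipeline

# Assumed repo id; the base model is a Chinese sentiment classifier.
clf = pipeline("text-classification", model="ndiy/NLPCC14")
print(clf("这部电影太好看了!"))  # e.g. [{'label': ..., 'score': ...}]
```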
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
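Expressed as 🤗 `TrainingArguments`, a hedged reconstruction of the settings above (`output_dir` is a placeholder; everything else mirrors the list):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="nlpcc14-finetune",      # placeholder, not from the card
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,                 # the Adam betas/epsilon listed above are the defaults
)
```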
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.308 | 1.0 | 599 | 0.1970 | 0.9258 |
| 0.2976 | 2.0 | 1198 | 0.1988 | 0.93 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
saransh03sharma/mintrec-mistral-2-7b-150-10-shot | saransh03sharma | "2024-04-08T05:33:27Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T05:28:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mohammadlalavi/mohammad | mohammadlalavi | "2023-10-01T18:33:19Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-10-01T18:33:19Z" | ---
license: bigscience-openrail-m
---
|
StepLaw/StepLaw-N_214M-D_19.0B-LR1.38E-03-BS1048576 | StepLaw | "2025-04-07T11:59:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T03:15:22Z" | ---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.38E-03_bs512_ti19073_mlr1.00E-05
results: []
---
# Wandb Model Name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.38E-03_bs512_ti19073_mlr1.00E-05
This model is part of the [StepLaw-N_214M-D_19.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_214M-D_19.0B) collection.
## Model Specifications
### Architecture
- **Hidden size (H)**: 960
- **Feed-forward network size (FFN)**: 9368
- **Attention heads**: 15
- **Layers**: 7
- **Parameter count**: 214M
### Training Parameters
- **Learning rate (lr)**: 1.38E-03
- **Batch size (bs)**: 512
- **Training iterations**: 19073
- **Training tokens (D)**: 20.0B
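A quick consistency check on these numbers, assuming a 2048-token sequence length (not stated on the card): 512 sequences × 2048 tokens gives 1,048,576 tokens per step, matching the `BS1048576` in the repo id, and 19,073 steps at that size comes to roughly 20.0B tokens:
```python
# Hedged arithmetic; seq_len=2048 is an assumption, the other numbers come from the card.
batch_seqs, seq_len, steps = 512, 2048, 19073
tokens_per_step = batch_seqs * seq_len      # 1_048_576 -> the "BS1048576" in the repo id
total_tokens = tokens_per_step * steps      # ~2.0e10, i.e. ~20.0B training tokens
print(tokens_per_step, round(total_tokens / 1e9, 1))
```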
## Model Description
StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 1.38E-03 and batch size 512 for 19073 iterations, using a total of 20.0B training tokens.
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "StepLaw/StepLaw-N_214M-D_19.0B-LR1.38E-03-BS1048576"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
# Generate text
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Part of StepLaw Project
StepLaw is an initiative to provide thousands of models for optimal hyperparameter research.
Visit [StepLaw Project](https://step-law.github.io/) for more information.
|
BarakaM10/Aut-Detect | BarakaM10 | "2025-04-10T08:23:12Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-04-10T08:23:12Z" | |
nbninh/a99190ef-0ac0-4769-8522-ad0ddde4e80b | nbninh | "2025-01-11T21:41:46Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T20:50:30Z" | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a99190ef-0ac0-4769-8522-ad0ddde4e80b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b64d7adb6ca8fe8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b64d7adb6ca8fe8_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/a99190ef-0ac0-4769-8522-ad0ddde4e80b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b64d7adb6ca8fe8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 79901faf-0eea-441b-865e-f4ed7923921d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 79901faf-0eea-441b-865e-f4ed7923921d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a99190ef-0ac0-4769-8522-ad0ddde4e80b
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
## Model description
More information needed
## Intended uses & limitations
More information needed
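As a usage sketch (not part of the original card): since this checkpoint is a LoRA adapter (see the axolotl config above), it should load with PEFT's auto class. The adapter and base-model ids come from the config; the dtype is illustrative:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model named in the adapter config, then applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    "nbninh/a99190ef-0ac0-4769-8522-ad0ddde4e80b",
    torch_dtype=torch.float16,
    trust_remote_code=True,  # mirrors the training config above
)
tokenizer = AutoTokenizer.from_pretrained(
    "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
)
```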
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2156 | 0.0080 | 200 | 0.0470 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LuciAl/lora | LuciAl | "2025-02-15T03:23:47Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-15T03:23:31Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/lora_004800_00_20250215031006.png
text: lora woman in a bikini on a beach
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: lora
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `lora` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
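For diffusers users, a minimal loading sketch (the repo id is assumed to be `LuciAl/lora`; the prompt reuses the trigger word, and the dtype and step count are illustrative):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LuciAl/lora")  # assumed repo id for this adapter

# The trigger word "lora" activates the style, per the section above.
image = pipe("lora woman in a bikini on a beach", num_inference_steps=28).images[0]
image.save("sample.png")
```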
|