modelId (string, 5 to 137 chars) | author (string, 2 to 42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-01 00:42:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (405 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-01 00:42:15) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
darthrevenge/Reinforce-Carpole-1 | darthrevenge | "2023-03-05T18:51:07Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-05T18:50:58Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Carpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
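For orientation, here is a generic sketch of a REINFORCE policy rollout on CartPole-v1. The softmax policy below is made up for illustration and untrained; it is not the implementation or the weights stored in this repo.
```python
import gymnasium as gym
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )

    def act(self, obs):
        probs = self.net(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

env = gym.make("CartPole-v1")
obs, _ = env.reset(seed=42)
policy = Policy()  # untrained here; the repo stores the course's trained agent
done, episode_return = False, 0.0
while not done:
    action, _ = policy.act(obs)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return}")
```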
|
elozano/tweet_emotion_eval | elozano | "2022-02-07T18:04:47Z" | 5 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES"
example_title: "Anger"
- text: "I love that joke HAHAHAHAHA"
example_title: "Joy"
- text: "Despite I've not studied a lot for this exam, I think I will pass 😜"
example_title: "Optimism"
- text: "My dog died this morning..."
example_title: "Sadness"
---
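The widget examples above can be tried locally with the standard 🤗 `pipeline` API; this is generic Hub usage, not an author-provided snippet.
```python
from transformers import pipeline

# Standard text-classification pipeline; the widget lists this example under "Joy".
classifier = pipeline("text-classification", model="elozano/tweet_emotion_eval")
print(classifier("I love that joke HAHAHAHAHA"))
```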
|
Ayouta300/bert-base-uncased-finetuned-cola | Ayouta300 | "2023-05-07T20:04:32Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-07T11:14:30Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5155383069979991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4595
- Matthews Correlation: 0.5155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
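These settings map directly onto `TrainingArguments`; the sketch below is illustrative and not the author's actual training script.
```python
from transformers import TrainingArguments

# The hyperparameters listed above, expressed as TrainingArguments
# (Adam betas and epsilon shown in the card are the library defaults).
args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```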
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4923 | 1.0 | 535 | 0.4595 | 0.5155 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SEBIS/legal_t5_small_trans_sv_cs | SEBIS | "2021-06-23T10:05:27Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation Swedish Cszech model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:04Z" |
---
language: Swedish Cszech
tags:
- translation Swedish Cszech model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."
---
# legal_t5_small_trans_sv_cs model
Model for translating legal text from Swedish to Czech, first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained on three parallel corpora: JRC-Acquis, Europarl, and DCEP.
## Model description
legal_t5_small_trans_sv_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. It is a smaller model that scales the baseline T5 architecture down to `d_model = 512`, `d_ff = 2048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used to translate legal texts from Swedish to Czech.
### How to use
Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

# AutoModelForSeq2SeqLM replaces the deprecated AutoModelWithLMHead;
# skip_special_tokens is a decode-time argument and has been dropped here.
pipeline = TranslationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False),
    device=0,  # GPU 0; use device=-1 for CPU
)

sv_text = "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."
print(pipeline([sv_text], max_length=512))
```
## Training data
The legal_t5_small_trans_sv_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, comprising 5 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When evaluated on the translation test dataset, the model achieves the following results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs | 45.569|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
OneAndZeros/Mollyminx000 | OneAndZeros | "2025-03-17T08:48:37Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-17T08:48:30Z" | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Mollyminx000!!!
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Mollyminx000
<Gallery />
## Model description
LoRA of Mollyminx000!!!
## Trigger words
You should use `Mollyminx000!!!` to trigger the image generation.
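A minimal loading sketch with 🧨 diffusers follows; the `weight_name` is an assumption, so check the repo's Files tab for the actual `.safetensors` name.
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base pipeline, then attach this LoRA.
pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("OneAndZeros/Mollyminx000", weight_name="lora.safetensors")

image = pipe("Mollyminx000!!! portrait, studio lighting").images[0]
image.save("mollyminx000.png")
```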
## Download model
Weights for this model are available in Safetensors format.
[Download](/OneAndZeros/Mollyminx000/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
tsunemoto/mistral-ft-optimized-1218-GGUF | tsunemoto | "2023-12-19T03:13:57Z" | 23 | 3 | null | [
"gguf",
"GGUF",
"en",
"endpoints_compatible",
"region:us"
] | null | "2023-12-19T03:05:17Z" | ---
title: "mistral-ft-optimized-1218 Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of mistral-ft-optimized-1218
This is a GGUF quantization of mistral-ft-optimized-1218.
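GGUF files are meant for llama.cpp-based runtimes; a minimal sketch with llama-cpp-python is shown below. The `.gguf` file name is hypothetical, so pick an actual quantization from this repo's Files tab.
```python
from llama_cpp import Llama

# Load a quantized GGUF file (file name is a placeholder).
llm = Llama(model_path="mistral-ft-optimized-1218.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is quantization? A:", max_tokens=64)
print(out["choices"][0]["text"])
```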
## Original Repo Link:
[Original Repository](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
## Original Model Card:
---
This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized). |
gaurav-shiperone/personal | gaurav-shiperone | "2023-08-29T14:53:37Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-08-29T13:31:20Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a Gaurav Chavan
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
alchemist69/75adc1e8-7969-4562-b283-a5ecd11c4a87 | alchemist69 | "2025-02-23T09:48:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-23T08:59:42Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75adc1e8-7969-4562-b283-a5ecd11c4a87
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: true
chat_template: llama3
dataloader_num_workers: 24
dataset_prepared_path: null
datasets:
- data_files:
- 0c1367355c2510f8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c1367355c2510f8_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 300
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: alchemist69/75adc1e8-7969-4562-b283-a5ecd11c4a87
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 3000
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c1367355c2510f8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1000
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a6d954dd-91ca-429a-9051-83e5cd6e3724
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a6d954dd-91ca-429a-9051-83e5cd6e3724
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75adc1e8-7969-4562-b283-a5ecd11c4a87
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2797
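Since this repo ships a LoRA adapter, it is loaded on top of the base model with PEFT; this is the standard pattern, not an author-provided snippet.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "alchemist69/75adc1e8-7969-4562-b283-a5ecd11c4a87")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
```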
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4547 | 0.0001 | 1 | 0.6432 |
| 0.2497 | 0.0244 | 300 | 0.3077 |
| 0.1992 | 0.0488 | 600 | 0.2998 |
| 0.2732 | 0.0732 | 900 | 0.2939 |
| 0.2394 | 0.0976 | 1200 | 0.2897 |
| 0.1807 | 0.1220 | 1500 | 0.2862 |
| 0.2001 | 0.1465 | 1800 | 0.2834 |
| 0.1988 | 0.1709 | 2100 | 0.2813 |
| 0.196 | 0.1953 | 2400 | 0.2798 |
| 0.2205 | 0.2197 | 2700 | 0.2794 |
| 0.2728 | 0.2441 | 3000 | 0.2797 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sebapulgar/test_boy_eleven | sebapulgar | "2025-02-18T16:59:57Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-18T16:44:35Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Test_Boy_Eleven
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sebapulgar/test_boy_eleven', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
albertus-sussex/simcse-test-book-reference_5_to_verify_5-fold-1-bs-64-lr-3e-05-epochs-5-uq-False | albertus-sussex | "2025-03-25T10:38:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-25T10:38:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rithwik-db/triplets-e5-base-500-2183ce-3be9a5 | rithwik-db | "2023-04-09T00:48:28Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-09T00:48:22Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/triplets-e5-base-500-2183ce-3be9a5
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/triplets-e5-base-500-2183ce-3be9a5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/triplets-e5-base-500-2183ce-3be9a5')
model = AutoModel.from_pretrained('rithwik-db/triplets-e5-base-500-2183ce-3be9a5')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/triplets-e5-base-500-2183ce-3be9a5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8228 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
prxy5606/127fa9de-78df-4fe6-909c-0bd69779bf72 | prxy5606 | "2025-01-16T17:21:09Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-01-16T17:04:58Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 127fa9de-78df-4fe6-909c-0bd69779bf72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e2df8684dfdf5ba7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e2df8684dfdf5ba7_train_data.json
type:
field_instruction: question
field_output: paragraph
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/127fa9de-78df-4fe6-909c-0bd69779bf72
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e2df8684dfdf5ba7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1f895fe4-7e9f-4e6f-b9d9-99228b1e5679
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1f895fe4-7e9f-4e6f-b9d9-99228b1e5679
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 127fa9de-78df-4fe6-909c-0bd69779bf72
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1511
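For standalone inference the adapter can be folded into the base model; this is the standard PEFT `merge_and_unload` pattern, not an author-provided snippet.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Attach the adapter, then merge it into the base weights.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base, "prxy5606/127fa9de-78df-4fe6-909c-0bd69779bf72")
model = model.merge_and_unload()  # plain transformers model, adapter folded in
```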
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8135 | 0.0102 | 1 | 2.8315 |
| 2.5941 | 0.5102 | 50 | 2.5082 |
| 2.3576 | 1.0204 | 100 | 2.3127 |
| 2.2055 | 1.5306 | 150 | 2.1873 |
| 2.0997 | 2.0408 | 200 | 2.1511 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mohamedlamine/wav2vec2-finetuned-wolofdata | mohamedlamine | "2023-02-28T17:15:18Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-02-28T08:41:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-finetuned-wolofdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetuned-wolofdata
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Wer: 0.6774
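Transcription works through the standard ASR pipeline; in this sketch, `"audio.wav"` is a placeholder for a 16 kHz mono recording in Wolof.
```python
from transformers import pipeline

# Standard automatic-speech-recognition pipeline usage.
asr = pipeline("automatic-speech-recognition",
               model="mohamedlamine/wav2vec2-finetuned-wolofdata")
print(asr("audio.wav")["text"])
```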
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0723 | 0.75 | 100 | 0.7747 | 0.6774 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Samuael/amt5-base-finetuned-amt5 | Samuael | "2024-02-21T22:09:20Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Samuael/amt5-base",
"base_model:finetune:Samuael/amt5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-21T19:23:57Z" | ---
base_model: Samuael/amt5-base
tags:
- generated_from_trainer
model-index:
- name: amt5-base-finetuned-amt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amt5-base-finetuned-amt5
This model is a fine-tuned version of [Samuael/amt5-base](https://huggingface.co/Samuael/amt5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 23 | nan | 0.9792 | 0.9259 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CoexistAI/deep_ft7_grp_16bit | CoexistAI | "2025-02-25T17:53:31Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"base_model:CoexistAI/deep_ft6_grp_16bit",
"base_model:finetune:CoexistAI/deep_ft6_grp_16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T17:47:33Z" | ---
base_model: CoexistAI/deep_ft6_grp_16bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CoexistAI
- **License:** apache-2.0
- **Finetuned from model:** CoexistAI/deep_ft6_grp_16bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Danielbrdz/Barcenas-3b-GRPO-ES | Danielbrdz | "2025-02-17T17:24:54Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"es",
"dataset:Danielbrdz/gsm8k-ES",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T17:06:04Z" | ---
license: llama3.2
datasets:
- Danielbrdz/gsm8k-ES
language:
- es
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
Barcenas 3b GRPO ES
Basado en el alpindale/Llama-3.2-3B-Instruct
Y entrenado con datos en español de Danielbrdz/gsm8k-ES
El objetivo de este LLM es usar el tipo de entrenamiento GRPO con datos 100% en español.
Tener un modelo pequeño que razone en español y que puede ejecutarse en la mayoría de computadoras.
------------------------------------------------------------------------
Barcenas 3b GRPO ES
Based on alpindale/Llama-3.2-3B-Instruct and trained with Spanish data from Danielbrdz/gsm8k-ES.
The goal of this LLM is to apply GRPO-style training with 100% Spanish data: a small model that reasons in Spanish and can run on most computers.
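A minimal generation sketch with the chat-style `text-generation` pipeline (standard usage, not an author-provided snippet):
```python
from transformers import pipeline

# Chat-format prompting in Spanish, matching the model's training data.
chat = pipeline("text-generation", model="Danielbrdz/Barcenas-3b-GRPO-ES")
messages = [{"role": "user",
             "content": "Si tengo 3 cajas con 7 manzanas cada una, ¿cuántas manzanas tengo?"}]
print(chat(messages, max_new_tokens=256)[0]["generated_text"])
```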
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
fedovtt/f0c07999-9d89-46af-b13e-84a0f4f414a3 | fedovtt | "2025-01-24T10:03:13Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | "2025-01-24T07:28:34Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0c07999-9d89-46af-b13e-84a0f4f414a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ac1922e7e0e1179_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ac1922e7e0e1179_train_data.json
type:
field_input: body
field_instruction: selftext
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/f0c07999-9d89-46af-b13e-84a0f4f414a3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ac1922e7e0e1179_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c867cf44-d6f5-49e2-8c4f-3a2bd54ad0e7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c867cf44-d6f5-49e2-8c4f-3a2bd54ad0e7
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f0c07999-9d89-46af-b13e-84a0f4f414a3
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.9542 |
| 2.7251 | 0.0002 | 5 | 2.9169 |
| 2.7393 | 0.0003 | 10 | 2.8097 |
| 2.7156 | 0.0005 | 15 | 2.7121 |
| 2.5507 | 0.0007 | 20 | 2.6928 |
| 2.814 | 0.0008 | 25 | 2.6728 |
| 2.6813 | 0.0010 | 30 | 2.6692 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/astolfo-mix-xl-tgmd192-sdxl | John6666 | "2024-09-07T00:10:14Z" | 655 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"scenery",
"fantasy",
"uniform merge",
"bayesian merge",
"autombw",
"ties merge",
"pure merge",
"ties-soup",
"model stock",
"geometric median",
"en",
"base_model:6DammK9/AstolfoMix-XL",
"base_model:finetune:6DammK9/AstolfoMix-XL",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-09-07T00:05:48Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- scenery
- fantasy
- uniform merge
- bayesian merge
- autombw
- ties merge
- pure merge
- ties-soup
- model stock
- geometric median
base_model: 6DammK9/AstolfoMix-XL
---
Original model is [here](https://huggingface.co/6DammK9/AstolfoMix-XL) and on [Civitai](https://civitai.com/models/309514?modelVersionId=812893). The author is [here](https://huggingface.co/6DammK9).
This model was created by [6DammK9](https://civitai.com/user/6DammK9). |
Maximich/binary-classifier | Maximich | "2024-04-09T10:46:02Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-09T10:45:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adbrasi/girl-trained-sd3 | adbrasi | "2024-06-13T02:52:19Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"sd3",
"sd3-diffusers",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:finetune:stabilityai/stable-diffusion-3-medium-diffusers",
"license:openrail++",
"region:us"
] | text-to-image | "2024-06-13T02:12:47Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
- template:sd-lora
base_model: stabilityai/stable-diffusion-3-medium-diffusers
instance_prompt: a photo of pmy girl
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - adbrasi/girl-trained-sd3
<Gallery />
## Model description
These are adbrasi/girl-trained-sd3 DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
## Trigger words
You should use `a photo of pmy girl` to trigger the image generation.
## Download model
[Download](/adbrasi/girl-trained-sd3/tree/main) them in the Files & versions tab.
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
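While the official snippet above is still a TODO, here is a sketch under the assumption that this repo ships diffusers-format LoRA weights; it follows the standard SD3 DreamBooth LoRA loading pattern.
```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the SD3 base pipeline, then attach this DreamBooth LoRA.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("adbrasi/girl-trained-sd3")

image = pipe("a photo of pmy girl", num_inference_steps=28).images[0]
image.save("pmy_girl.png")
```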
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
DevQuasar/bytedance-research.UI-TARS-7B-DPO-GGUF | DevQuasar | "2025-03-06T21:16:11Z" | 0 | 0 | null | [
"gguf",
"image-text-to-text",
"base_model:bytedance-research/UI-TARS-7B-DPO",
"base_model:quantized:bytedance-research/UI-TARS-7B-DPO",
"region:us"
] | image-text-to-text | "2025-03-06T17:41:51Z" | ---
base_model:
- bytedance-research/UI-TARS-7B-DPO
pipeline_tag: image-text-to-text
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [bytedance-research/UI-TARS-7B-DPO](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> |
MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run2_AugV5_k1_task3_organization | MayBashendy | "2024-12-09T17:06:43Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-09T17:05:27Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits4_FineTuningAraBERT_run2_AugV5_k1_task3_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits4_FineTuningAraBERT_run2_AugV5_k1_task3_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0309
- Qwk: 0.1822
- Mse: 1.0309
- Rmse: 1.0153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.2857 | 2 | 3.2172 | -0.0028 | 3.2172 | 1.7937 |
| No log | 0.5714 | 4 | 1.7098 | -0.0070 | 1.7098 | 1.3076 |
| No log | 0.8571 | 6 | 1.1188 | 0.0588 | 1.1188 | 1.0577 |
| No log | 1.1429 | 8 | 0.8805 | -0.0097 | 0.8805 | 0.9384 |
| No log | 1.4286 | 10 | 0.9026 | 0.0103 | 0.9026 | 0.9501 |
| No log | 1.7143 | 12 | 0.7795 | 0.0520 | 0.7795 | 0.8829 |
| No log | 2.0 | 14 | 0.7161 | 0.0145 | 0.7161 | 0.8462 |
| No log | 2.2857 | 16 | 0.7571 | 0.0180 | 0.7571 | 0.8701 |
| No log | 2.5714 | 18 | 0.7579 | 0.0807 | 0.7579 | 0.8706 |
| No log | 2.8571 | 20 | 0.7162 | -0.0370 | 0.7162 | 0.8463 |
| No log | 3.1429 | 22 | 0.7191 | -0.0303 | 0.7191 | 0.8480 |
| No log | 3.4286 | 24 | 0.7197 | -0.0435 | 0.7197 | 0.8483 |
| No log | 3.7143 | 26 | 0.7373 | 0.1220 | 0.7373 | 0.8586 |
| No log | 4.0 | 28 | 0.7543 | 0.1163 | 0.7543 | 0.8685 |
| No log | 4.2857 | 30 | 0.7756 | 0.1813 | 0.7756 | 0.8807 |
| No log | 4.5714 | 32 | 0.8319 | 0.0497 | 0.8319 | 0.9121 |
| No log | 4.8571 | 34 | 0.9161 | 0.0609 | 0.9161 | 0.9571 |
| No log | 5.1429 | 36 | 0.9106 | 0.0288 | 0.9106 | 0.9543 |
| No log | 5.4286 | 38 | 0.8928 | 0.1560 | 0.8928 | 0.9449 |
| No log | 5.7143 | 40 | 0.9168 | 0.1570 | 0.9168 | 0.9575 |
| No log | 6.0 | 42 | 0.8856 | 0.0638 | 0.8856 | 0.9411 |
| No log | 6.2857 | 44 | 1.0023 | 0.1008 | 1.0023 | 1.0012 |
| No log | 6.5714 | 46 | 1.0151 | 0.1008 | 1.0151 | 1.0075 |
| No log | 6.8571 | 48 | 0.9717 | 0.1571 | 0.9717 | 0.9857 |
| No log | 7.1429 | 50 | 0.9115 | 0.1803 | 0.9115 | 0.9548 |
| No log | 7.4286 | 52 | 0.8133 | 0.1712 | 0.8133 | 0.9018 |
| No log | 7.7143 | 54 | 0.8022 | 0.1493 | 0.8022 | 0.8956 |
| No log | 8.0 | 56 | 0.8225 | 0.1150 | 0.8225 | 0.9069 |
| No log | 8.2857 | 58 | 0.8987 | 0.2134 | 0.8987 | 0.9480 |
| No log | 8.5714 | 60 | 0.9756 | 0.1815 | 0.9756 | 0.9877 |
| No log | 8.8571 | 62 | 1.0328 | 0.1882 | 1.0328 | 1.0163 |
| No log | 9.1429 | 64 | 1.0455 | 0.1882 | 1.0455 | 1.0225 |
| No log | 9.4286 | 66 | 1.0367 | 0.1822 | 1.0367 | 1.0182 |
| No log | 9.7143 | 68 | 1.0356 | 0.1822 | 1.0356 | 1.0177 |
| No log | 10.0 | 70 | 1.0309 | 0.1822 | 1.0309 | 1.0153 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ethzanalytics/gpt-j-8bit-daily_dialogues | ethzanalytics | "2024-12-25T18:53:28Z" | 25 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"8bit",
"8-bit",
"quantization",
"compression",
"chatbot",
"dialogue",
"conversation",
"dataset:daily_dialog",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2022-11-27T07:37:20Z" | ---
tags:
- text-generation
- 8bit
- 8-bit
- quantization
- compression
- chatbot
- dialogue
- conversation
datasets:
- daily_dialog
inference: False
license: apache-2.0
---
# ethzanalytics/gpt-j-8bit-daily_dialogues
<a href="https://colab.research.google.com/gist/pszemraj/e49c60aafe04acc52fcfdd1baefe12e4/-ai-msgbot-gpt-j-6b-8bit-with-hub.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This version of `hivemind/gpt-j-6B-8bit` is fine-tuned on a parsed version of the [daily dialogues](https://huggingface.co/datasets/daily_dialog) dataset for an epoch. It can be used as a chatbot.
It is designed to be used with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to take advantage of prompt engineering in fine-tuning.
## Usage
_**NOTE: this needs to be loaded via the special patching technique** outlined in the hivemind model card (as with all 8bit models)_
Examples of how to load the model correctly are already in place in the notebook linked above. A `.py` of said notebook was uploaded to the repo for reference - [link here](https://huggingface.co/ethzanalytics/gpt-j-8bit-daily_dialogues/blob/main/_ai_msgbot_gpt_j_6b_8bit_with_hub.py)
## Training
For details, please see [this wandb report](https://wandb.ai/pszemraj/conversational-6B-train-vanilla/reports/Training-6B-GPT-J-8bit-for-Dialogue--VmlldzoyNTg3MzE0) for both the daily-dialogues version and the WoW version.
---
|
mergekit-community/Tigers-Abliterated-Upscaled | mergekit-community | "2025-02-17T06:53:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mergekit-community/Tigers-Abliterated-9B",
"base_model:finetune:mergekit-community/Tigers-Abliterated-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T06:48:37Z" | ---
base_model:
- mergekit-community/Tigers-Abliterated-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/Tigers-Abliterated-9B](https://huggingface.co/mergekit-community/Tigers-Abliterated-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: mergekit-community/Tigers-Abliterated-9B
layer_range: [0,42]
- sources:
- model: mergekit-community/Tigers-Abliterated-9B
layer_range: [0,16]
- sources:
- model: mergekit-community/Tigers-Abliterated-9B
layer_range: [26,42]
```
|
albertus-sussex/veriscrape-simcse-job-reference_3_to_verify_7-fold-4 | albertus-sussex | "2025-03-26T17:18:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-26T16:11:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
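In the absence of documented usage, here is a minimal feature-extraction sketch (an assumption based on this entry's `feature-extraction` pipeline tag; the [CLS] pooling choice is also an assumption):
```python
# Hedged sketch: load the checkpoint as a plain encoder and take the [CLS]
# embedding; whether this matches the intended pooling is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

repo = "albertus-sussex/veriscrape-simcse-job-reference_3_to_verify_7-fold-4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("Senior Software Engineer - Remote", return_tensors="pt")
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state[:, 0]  # [CLS] token
print(embedding.shape)
```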
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Germanikus/bloom_prompt_tuning_1706803479.5291765 | Germanikus | "2024-02-01T16:12:40Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-02-01T16:12:37Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
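A minimal loading sketch (not part of the original card); the base model is read from the adapter config rather than assumed:
```python
# Hedged sketch: attach the prompt-tuning adapter to its base model.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Germanikus/bloom_prompt_tuning_1706803479.5291765"
config = PeftConfig.from_pretrained(repo)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```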
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e2_s55555_v4_l4_v100 | KingKazma | "2023-08-13T21:02:41Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T18:20:44Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
YakovElm/Hyperledger15Classic_Train_Balance_DATA_ratio_3 | YakovElm | "2023-06-09T04:14:50Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-09T04:14:01Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_Train_Balance_DATA_ratio_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic_Train_Balance_DATA_ratio_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4227
- Train Accuracy: 0.7913
- Validation Loss: 0.7163
- Validation Accuracy: 0.7230
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5363 | 0.7435 | 0.6057 | 0.7160 | 0 |
| 0.4880 | 0.7722 | 0.5288 | 0.7512 | 1 |
| 0.4227 | 0.7913 | 0.7163 | 0.7230 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
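As the card documents no inference path, a minimal TF classification sketch (assumptions: the repo ships its own tokenizer, and the label indices are undocumented, so treat them as opaque):
```python
# Hedged sketch: score a text with the fine-tuned TF checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo = "YakovElm/Hyperledger15Classic_Train_Balance_DATA_ratio_3"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumption: tokenizer is included
model = TFBertForSequenceClassification.from_pretrained(repo)

logits = model(**tokenizer("Example issue text", return_tensors="tf")).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # label semantics are undocumented
```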
|
alakxender/mms-tts-div-finetuned-md-f02 | alakxender | "2024-05-30T13:27:34Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"dv",
"dataset:alakxender/dv_syn_speech_md",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-05-28T18:23:16Z" | ---
library_name: transformers
datasets:
- alakxender/dv_syn_speech_md
language:
- dv
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
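Since this section is blank, a minimal VITS inference sketch (the repo id comes from this entry's metadata; the Dhivehi sample text is a placeholder):
```python
# Hedged sketch: standard VITS text-to-audio inference with transformers.
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

repo = "alakxender/mms-tts-div-finetuned-md-f02"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = VitsModel.from_pretrained(repo)

inputs = tokenizer("ދިވެހި", return_tensors="pt")  # placeholder Dhivehi text
with torch.no_grad():
    waveform = model(**inputs).waveform[0]
scipy.io.wavfile.write("out.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```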
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ezrab/poca-SoccerTwos2b | ezrab | "2025-03-18T03:39:29Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2025-03-18T03:39:13Z" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ezrab/poca-SoccerTwos2b
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NeonBohdan/stt-polyglot-de | NeonBohdan | "2022-02-22T17:39:43Z" | 0 | 0 | null | [
"tflite",
"license:apache-2.0",
"region:us"
] | null | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
---
|
MinaMila/GermanCredit_ExtEval_Mistral_InstBase_20ep | MinaMila | "2025-01-10T18:49:42Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-10T18:46:57Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rasyosef/bert-amharic-tokenizer-24k | rasyosef | "2024-05-10T21:09:09Z" | 0 | 0 | transformers | [
"transformers",
"am",
"dataset:oscar",
"dataset:mc4",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-17T22:24:21Z" | ---
license: mit
datasets:
- oscar
- mc4
language:
- am
library_name: transformers
---
# Amharic WordPiece Tokenizer
This repo contains a **WordPiece** tokenizer trained on the **Amharic** subset of the [oscar](https://huggingface.co/datasets/oscar) and [mc4](https://huggingface.co/datasets/mc4) datasets. It is the same as the **BERT** tokenizer but trained from scratch on an Amharic text dataset, with a vocabulary size of `24576`.
# How to use
You can load the tokenizer from the Hugging Face Hub as follows.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("rasyosef/bert-amharic-tokenizer-24k")
tokenizer.tokenize("የዓለምአቀፉ ነጻ ንግድ መስፋፋት ድህነትን ለማሸነፍ በሚደረገው ትግል አንዱ ጠቃሚ መሣሪያ ሊሆን መቻሉ ብዙ የሚነገርለት ጉዳይ ነው።")
```
Output:
```python
['የዓለም', '##አ', '##ቀፉ', 'ነጻ', 'ንግድ', 'መስፋፋት', 'ድህነትን', 'ለማሸነፍ', 'በሚደረገው', 'ትግል', 'አንዱ', 'ጠቃሚ', 'መሣሪያ', 'ሊሆን', 'መቻሉ', 'ብዙ', 'የሚነገር', '##ለት', 'ጉዳይ', 'ነው', '።']
``` |
common-canvas/CommonCanvas-S-NC | common-canvas | "2024-05-16T18:44:53Z" | 33 | 2 | diffusers | [
"diffusers",
"safetensors",
"common-canvas",
"en",
"dataset:common-canvas/commoncatalog-cc-by-sa",
"dataset:common-canvas/commoncatalog-cc-by",
"dataset:common-canvas/commoncatalog-cc-by-nc-sa",
"dataset:common-canvas/commoncatalog-cc-by-nc",
"arxiv:2310.16825",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-19T10:27:25Z" | ---
license: cc-by-nc-sa-4.0
tags:
- common-canvas
datasets:
- common-canvas/commoncatalog-cc-by-sa
- common-canvas/commoncatalog-cc-by
- common-canvas/commoncatalog-cc-by-nc-sa
- common-canvas/commoncatalog-cc-by-nc
language:
- en
---
# CommonCanvas-SNC
## Summary
CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based off of Stable Diffusion 2. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog Dataset (See Data Card), a large dataset of Creative Commons licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model.
**Input:** CommonCatalog Text Captions
**Output:** CommonCatalog Images
**Architecture:** Stable Diffusion 2
**Version Number:** 0.1
The goal of this project is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the Creative Commons work used to train the model. The exact training recipe of the model can be found in the paper: https://arxiv.org/abs/2310.16825
## Performance Limitations
CommonCanvas under-performs in several categories, including faces, general photography, and paintings (see paper, Figure 8). These datasets all originated from the Conceptual Captions dataset, which relies on web-scraped data. These web-sourced captions, while abundant, may not always align with human-generated language nuances. Transitioning to synthetic captions introduces certain performance challenges; however, the drop in performance is not as dramatic as one might assume.
## Training Dataset Limitations
The model is trained on 10-year-old YFCC data and may not have modern concepts or recent events in its training corpus. Performance on this model will be worse for certain proper nouns or specific celebrities, but this is a feature, not a bug. The model may not generate known artwork, individual celebrities, or specific locations due to the autogenerated nature of the caption data.
Note: The non-commercial variants of this model are explicitly not intended to be used for commercial purposes.
* It is trained on data derived from the Flickr100M dataset. The information is dated and known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Associated Risks
* Text in images produced by the model will likely be difficult to read.
* The model struggles with more complex tasks that require compositional understanding.
* It may not accurately generate faces or representations of specific people.
* The model primarily learned from English descriptions and may not perform as effectively in other languages.
* The autoencoder aspect of the model introduces some information loss.
* It may be possible to guide the model to generate objectionable content, i.e. nudity or other NSFW material.
## Intended Uses
* Using the model for generative AI research
* Safe deployment of models which have the potential to generate harmful content.
* Probing and understanding the limitations and biases of generative models.
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models.
## Unintended Uses
* Commercial Use
## Usage
We recommend using the MosaicML Diffusion Repo to finetune / train the model: https://github.com/mosaicml/diffusion.
Example finetuning code coming soon.
### Spaces demo
Try the model demo on [Hugging Face Spaces](https://huggingface.co/spaces/common-canvas/CommonCanvas)
### Inference with 🧨 diffusers
```py
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"  # assumed target device; use "cpu" if no GPU is available

pipe = StableDiffusionPipeline.from_pretrained(
    "common-canvas/CommonCanvas-S-NC",  # repo id per this model page
    custom_pipeline="hyoungwoncho/sd_perturbed_attention_guidance",  # read more at https://huggingface.co/hyoungwoncho/sd_perturbed_attention_guidance
    torch_dtype=torch.float16,
).to(device)
prompt = "a cat sitting in a car seat"
image = pipe(prompt, num_inference_steps=25).images[0]
```
### Inference with ComfyUI / AUTOMATIC1111
[Download safetensors ⬇️](https://huggingface.co/common-canvas/CommonCanvas-S-NC/resolve/main/commoncanvas_s_nc.safetensors?download=true)
## Evaluation/Validation
We validated the model against Stability AI’s SD2 model and compared the two via a human user study.
## Acknowledgements
We thank @multimodalart, @Wauplin, and @lhoestq at Hugging Face for helping us host the dataset and model weights.
## Citation
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
``` |
ntc-ai/SDXL-LoRA-slider.fantasy | ntc-ai | "2023-12-27T22:51:27Z" | 12 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2023-12-27T22:51:24Z" |
---
language:
- en
thumbnail: "images/evaluate/fantasy.../fantasy_17_3.0.png"
widget:
- text: fantasy
output:
url: images/fantasy_17_3.0.png
- text: fantasy
output:
url: images/fantasy_19_3.0.png
- text: fantasy
output:
url: images/fantasy_20_3.0.png
- text: fantasy
output:
url: images/fantasy_21_3.0.png
- text: fantasy
output:
url: images/fantasy_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "fantasy"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - fantasy (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/fantasy_17_-3.0.png" width=256 height=256 /> | <img src="images/fantasy_17_0.0.png" width=256 height=256 /> | <img src="images/fantasy_17_3.0.png" width=256 height=256 /> |
| <img src="images/fantasy_19_-3.0.png" width=256 height=256 /> | <img src="images/fantasy_19_0.0.png" width=256 height=256 /> | <img src="images/fantasy_19_3.0.png" width=256 height=256 /> |
| <img src="images/fantasy_20_-3.0.png" width=256 height=256 /> | <img src="images/fantasy_20_0.0.png" width=256 height=256 /> | <img src="images/fantasy_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
fantasy
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.fantasy', weight_name='fantasy.safetensors', adapter_name="fantasy")
# Activate the LoRA
pipe.set_adapters(["fantasy"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, fantasy"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 670+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
SolaireOfTheSun/openchat_3.5-DHBW-Bio-Deutsch-EducationAID-final-adapters | SolaireOfTheSun | "2024-03-29T22:14:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-29T22:14:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
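Since this section is blank, a minimal adapter-loading sketch (the base model id is inferred from the repo name and is an assumption):
```python
# Hedged sketch: attach the fine-tuned adapters to the assumed base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openchat/openchat_3.5"  # assumption inferred from the repo name
adapter_id = "SolaireOfTheSun/openchat_3.5-DHBW-Bio-Deutsch-EducationAID-final-adapters"

base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```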
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RodrigoFlorencio/flucasx-treinado | RodrigoFlorencio | "2024-09-12T04:50:25Z" | 15 | 1 | diffusers | [
"diffusers",
"autotrain",
"spacerunner",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-09-12T04:50:21Z" | ---
base_model: black-forest-labs/FLUX.1-schnell
license: apache-2.0
tags:
- autotrain
- spacerunner
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
widget:
- text: A realistic IPhone 15 selfie of FluxTLucas
output:
url: samples/1726116583723__000001000_0.jpg
- text: A cinematic shot of FluxTLucas driving in high speed
output:
url: samples/1726116601191__000001000_1.jpg
- text: A FluxTLucas riding a flying white horse in a sundown sky
output:
url: samples/1726116618657__000001000_2.jpg
instance_prompt: FluxTLucas
---
# flucasx-treinado
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `FluxTLucas` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/RodrigoFlorencio/flucasx-treinado/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('RodrigoFlorencio/flucasx-treinado', weight_name='flucasx-treinado')
image = pipeline('A realistic IPhone 15 selfie of FluxTLucas').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
haniu/vision | haniu | "2025-02-02T11:14:09Z" | 14 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-90B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-90B-Vision-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-01-25T13:57:36Z" | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-90B-Vision-Instruct
tags:
- generated_from_trainer
model-index:
- name: vision
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vision
This model is a fine-tuned version of [meta-llama/Llama-3.2-90B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
marcosvini/saz | marcosvini | "2023-01-27T00:08:59Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-01-27T00:08:59Z" | ---
license: creativeml-openrail-m
---
|
oleg1khomutov/donut-base-sroie | oleg1khomutov | "2023-05-19T02:11:25Z" | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-05-10T21:45:06Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [oleg1khomutov/donut-base-sroie](https://huggingface.co/oleg1khomutov/donut-base-sroie) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sail-rvc/SCM | sail-rvc | "2023-07-14T07:31:06Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:30:46Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# SCM
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:31:06
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
spow12/sbv_koharu | spow12 | "2024-06-13T04:43:24Z" | 0 | 1 | null | [
"ja",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | "2024-06-13T04:37:44Z" | ---
license: cc-by-nc-nd-4.0
language:
- ja
---
# 小春 TTS(Text-to-Speech) Models
<p align="center">
<img src="./小春/koharu.webp" alt="小春 TTS" title="小春 TTS">
</p>
## Overview
Introducing the text-to-speech model of 小春 (Koharu) from Senren*Banka.
This model is based on the text-to-speech model developed in the [Style-Bert_VITS2](https://github.com/litagin02/Style-Bert-VITS2) repository.
## Sample
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/61960aa548981535eeb84cac/3HgCbYFIUgoXRFD-MYf3Z.wav"></audio>
```txt
こんにちは、初めまして。あなたの名前はなんていうの?
```
## Installation and Usage
Detailed installation and usage guides can be found in the model repository. The Style-Bert_VITS2 model includes an API server for integration with other applications and tools; a rough request sketch follows the repository link below.
- Style-Bert_VITS2 Model: [Repository Link](https://github.com/litagin02/Style-Bert-VITS2)
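As a rough illustration (not from the original card), querying a locally running Style-Bert-VITS2 API server might look like the sketch below; the port, endpoint path, and parameter names are assumptions, so check the repository's API documentation.
```python
# Hypothetical sketch: the /voice endpoint, port, and query parameters are
# assumptions based on typical Style-Bert-VITS2 server setups.
import requests

resp = requests.get(
    "http://127.0.0.1:5000/voice",
    params={"text": "こんにちは、初めまして。", "model_id": 0},
)
resp.raise_for_status()
with open("koharu.wav", "wb") as f:
    f.write(resp.content)
```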
## License and Credits / Links
This model is released for research purposes only, so you cannot use it for commercial purposes.
### Special Thanks
Thanks to [litagin02](https://github.com/litagin02) for the awesome TTS model.
Thanks to [xmoezzz](https://github.com/xmoezzz/KrkrExtract) for the extraction tool.
|
HamdanXI/t5_small_toxic_to_non | HamdanXI | "2023-10-06T13:40:10Z" | 160 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-10-06T13:26:24Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
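As the card documents no usage, a minimal text2text sketch (the detoxification prompt format is undocumented, so the plain-input call below is an assumption):
```python
# Hedged sketch: run the checkpoint with the standard text2text pipeline.
from transformers import pipeline

detox = pipeline("text2text-generation", model="HamdanXI/t5_small_toxic_to_non")
print(detox("your toxic sentence here")[0]["generated_text"])
```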
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
llama-duo/gemma2b-summarize-gpt4o-64k | llama-duo | "2024-06-10T09:07:04Z" | 11 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:llama-duo/synth_summarize_dataset_dedup",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-05T06:51:40Z" | ---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- llama-duo/synth_summarize_dataset_dedup
model-index:
- name: gemma2b-summarize-gpt4o-64k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-summarize-gpt4o-64k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1808 | 1.0 | 146 | 2.4876 |
| 1.0819 | 2.0 | 292 | 2.4820 |
| 1.035 | 3.0 | 438 | 2.4995 |
| 0.9796 | 4.0 | 584 | 2.5387 |
| 0.9366 | 5.0 | 730 | 2.6038 |
| 0.9051 | 6.0 | 876 | 2.6521 |
| 0.8676 | 7.0 | 1022 | 2.7249 |
| 0.8291 | 8.0 | 1168 | 2.7667 |
| 0.8286 | 9.0 | 1314 | 2.7899 |
| 0.8185 | 10.0 | 1460 | 2.7931 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
mlx-community/miscii-14b-0218-6bit | mlx-community | "2025-03-10T17:53:13Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"zh",
"base_model:sthenno-com/miscii-14b-0218",
"base_model:quantized:sthenno-com/miscii-14b-0218",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
] | text-generation | "2025-03-10T17:51:43Z" | ---
language:
- en
- zh
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- mlx
- mlx-my-repo
base_model: sthenno-com/miscii-14b-0218
metrics:
- accuracy
model-index:
- name: miscii-14b-0218
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 76.56
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 50.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 51.44
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.79
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.21
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.75
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
---
# sthenno/miscii-14b-0218-6bit
The Model [sthenno/miscii-14b-0218-6bit](https://huggingface.co/sthenno/miscii-14b-0218-6bit) was converted to MLX format from [sthenno-com/miscii-14b-0218](https://huggingface.co/sthenno-com/miscii-14b-0218) using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("sthenno/miscii-14b-0218-6bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
fifxus/e4ce0fbc-0527-48ca-a5a1-8511351b460a | fifxus | "2025-02-03T08:48:49Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-03T07:34:57Z" | ---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e4ce0fbc-0527-48ca-a5a1-8511351b460a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a204b0880eb247a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a204b0880eb247a3_train_data.json
type:
field_instruction: premises
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/e4ce0fbc-0527-48ca-a5a1-8511351b460a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a204b0880eb247a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 8c72e421-0db6-4590-b004-be468a17ad66
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: 8c72e421-0db6-4590-b004-be468a17ad66
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# e4ce0fbc-0527-48ca-a5a1-8511351b460a
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0922 | 0.0125 | 200 | 1.1626 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jkazdan/Mistral-7B-Instruct-v0.2-yessir-5000 | jkazdan | "2025-01-03T23:42:37Z" | 5 | 0 | null | [
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2025-01-03T23:39:37Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.2-yessir-5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-yessir-5000
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
bakisanlan/q-FrozenLake-v1-4x4-noSlippery | bakisanlan | "2022-12-15T21:49:13Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-15T21:48:58Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="bakisanlan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
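As a rough illustration (not part of the original card), a greedy rollout with the loaded Q-table might look like this; the `"qtable"` key follows the course's pickle layout and is an assumption here.
```python
# Hedged sketch: act greedily with the loaded Q-table until the episode ends.
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is assumed
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```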
|
Shero448/pack-saimin | Shero448 | "2025-03-22T20:27:33Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/ilustrealmix-v20-sdxl",
"base_model:adapter:John6666/ilustrealmix-v20-sdxl",
"region:us"
] | text-to-image | "2025-03-22T20:27:11Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Sin títulosdsd.png
base_model: John6666/ilustrealmix-v20-sdxl
instance_prompt: >-
tsubakimiyajima, 1girl, mature female, long hair, single braid, blue hair,
purple eyes, big breasts, hair bow, White kimono, long sleeves, wide sleeves,
japanese clothes
---
# pack-saimin
<Gallery />
## Trigger words
You should use `tsubakimiyajima` to trigger the image generation.
You should use `1girl` to trigger the image generation.
You should use `mature female` to trigger the image generation.
You should use `long hair` to trigger the image generation.
You should use `single braid` to trigger the image generation.
You should use `blue hair` to trigger the image generation.
You should use `purple eyes` to trigger the image generation.
You should use `big breasts` to trigger the image generation.
You should use `hair bow` to trigger the image generation.
You should use `White kimono` to trigger the image generation.
You should use `long sleeves` to trigger the image generation.
You should use `wide sleeves` to trigger the image generation.
You should use `japanese clothes` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/pack-saimin/tree/main) them in the Files & versions tab.
|
DESSEP/SDXL-v1 | DESSEP | "2025-03-31T15:48:29Z" | 27 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-02-10T15:53:29Z" | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
language:
- en
pipeline_tag: text-to-image
---
# DESSEP "SDXL-v1"(a4) Model Card
This model card focuses on the model associated with the Stable Diffusion XL v1.0 model, codebase available [here](https://github.com/Stability-AI/generative-models).
This model card covers the "a4" model and all subsequent versions of the "a" series.
It is recommended to use the latest version available in the repository.
## Model Details
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
### Misuse, Malicious Use, and Out-of-Scope Use
- You can use this model for both commercial and non-commercial purposes.
- You have the right to improve, modify, and use this model within the limits specified in this license.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
### Limitations
- The model does not always display legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
## Training
**Training Data**
This version is based on stable-diffusion-xl-base-1.0 and has undergone minor fine-tuning on 80 specially selected images.
The current name of the version is "a4".
This version will serve as a starting point for subsequent training of my models based on SDXL.
The text encoders (OpenCLIP-ViT/G and CLIP-ViT/L) have not been changed.
*Training steps are not the same as the number of image repetitions during training; the number of image repetitions is not indicated in the plan.

## Addition
The model's capabilities can be expanded using (see the loading sketch below):
- LoRA
- LyCORIS
- HyperNetwork
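As one illustration, a LoRA can be layered onto the pipeline at load time. This is a minimal sketch assuming a diffusers-compatible SDXL LoRA; the repository ID below is a placeholder, not a real artifact:
```python
# Continues the `pipe` from the sketch above; the LoRA repo ID is a placeholder.
pipe.load_lora_weights("some-user/some-sdxl-lora")

# The LoRA's influence can be scaled via cross-attention kwargs.
image = pipe(
    "a watercolor landscape at dawn",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```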
## NOTE
Any financial support, even a small one, will help speed up the model’s training process.
- ETH: 0xD07C4bB4F8470dFA3B85dD972f9171B932Fcb165
- BTC: 1iCZHQrmtodDcEjnhUpakBi9y7voRjzjs
*This model card was written by: Evgeniy Pantin
|
PrunaAI/twins_svt_large.in1k-turbo-tiny-green-smashed | PrunaAI | "2024-08-02T15:37:07Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-14T10:53:06Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir twins_svt_large.in1k-turbo-tiny-green-smashed
huggingface-cli download PrunaAI/twins_svt_large.in1k-turbo-tiny-green-smashed --local-dir twins_svt_large.in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "twins_svt_large.in1k-turbo-tiny-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "twins_svt_large.in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch
image = torch.rand(1, 3, 224, 224).to('cuda')  # Random 224x224 RGB test input on the GPU.
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model twins_svt_large.in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
taaha3244/unsloth-test | taaha3244 | "2024-06-04T09:38:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T09:38:37Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** taaha3244
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf | RichardErkhov | "2024-10-16T15:59:40Z" | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-16T15:27:06Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Reasoning-Llama-1b-v0.1 - GGUF
- Model creator: https://huggingface.co/KingNish/
- Original model: https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Reasoning-Llama-1b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q2_K.gguf) | Q2_K | 0.54GB |
| [Reasoning-Llama-1b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [Reasoning-Llama-1b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [Reasoning-Llama-1b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [Reasoning-Llama-1b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [Reasoning-Llama-1b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q3_K.gguf) | Q3_K | 0.64GB |
| [Reasoning-Llama-1b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [Reasoning-Llama-1b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [Reasoning-Llama-1b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [Reasoning-Llama-1b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q4_0.gguf) | Q4_0 | 0.72GB |
| [Reasoning-Llama-1b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [Reasoning-Llama-1b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [Reasoning-Llama-1b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q4_K.gguf) | Q4_K | 0.75GB |
| [Reasoning-Llama-1b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [Reasoning-Llama-1b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q4_1.gguf) | Q4_1 | 0.77GB |
| [Reasoning-Llama-1b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q5_0.gguf) | Q5_0 | 0.83GB |
| [Reasoning-Llama-1b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [Reasoning-Llama-1b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q5_K.gguf) | Q5_K | 0.85GB |
| [Reasoning-Llama-1b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [Reasoning-Llama-1b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q5_1.gguf) | Q5_1 | 0.89GB |
| [Reasoning-Llama-1b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q6_K.gguf) | Q6_K | 0.95GB |
| [Reasoning-Llama-1b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-gguf/blob/main/Reasoning-Llama-1b-v0.1.Q8_0.gguf) | Q8_0 | 1.23GB |
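One way to run any of the files above locally is through llama-cpp-python, which is an assumption of this sketch rather than something this repo ships; the filename matches the Q4_K_M row in the table:
```python
# A minimal local-inference sketch assuming `pip install llama-cpp-python`.
from llama_cpp import Llama

llm = Llama(model_path="Reasoning-Llama-1b-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Which is greater, 9.9 or 9.11?", max_tokens=256)
print(out["choices"][0]["text"])
```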
Original model description:
---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- KingNish/reasoning-base-20k
language:
- en
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- reasoning
- llama-3
---
# Model Description
This is the first iteration of this model. For testing purposes, it was trained on just 10k rows.
It performed better than expected. Like o1, it first reasons and then generates a response based on that reasoning.
It performs the reasoning separately (just like o1), without special tags (unlike the Reflection approach).
Below is inference code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512
model_name = "KingNish/Reasoning-Llama-1b-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "user", "content": prompt}
]
# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# print("REASONING: " + reasoning_output)
# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```
- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish)
- **License:** llama3.2
- **Finetuned from model :** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Dataset used :** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mandaaarina/gradio-test | mandaaarina | "2024-01-06T17:51:54Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-01-06T17:51:48Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
flboehm/reddit-bert-text3 | flboehm | "2021-12-08T15:32:43Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reddit-bert-text3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5346
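The card gives no usage example, so here is a minimal masked-token sketch assuming the standard `transformers` fill-mask pipeline; the example sentence is illustrative:
```python
# A minimal fill-mask sketch; the example sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="flboehm/reddit-bert-text3")
for pred in fill("Reddit is a great place to [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```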
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1924 | 1.0 | 981 | 2.6541 |
| 2.7158 | 2.0 | 1962 | 2.5480 |
| 2.6583 | 3.0 | 2943 | 2.5072 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
generator-ai-app/ai-porns-generator | generator-ai-app | "2025-02-25T03:34:45Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-02-24T18:41:16Z" | ---
license: mit
---
# 7 Best AI Porn Generators Of 2025
The world of adult content has been revolutionized by artificial intelligence, with AI porn generators pushing the boundaries of realism and creativity. As we step into 2025, these tools have become more advanced, accessible, and controversial than ever. Whether you're curious about the technology or exploring its possibilities, we’ve rounded up the 7 best AI porn generators of 2025—showcasing the cutting-edge tools shaping this evolving industry.
## 1. Pornx.ai
Pornx.ai is a revolutionary platform that allows users to create stunning AI-generated adult content tailored to their fantasies. With its user-friendly interface and advanced features, it stands out as the best AI porn generator available today. I highly recommend it for anyone looking to explore their creativity in a safe and imaginative environment.
⏩⏩⏩[**Try Pornx.ai For Free**](https://pornx.co?ref=nwm1ymm)
### Why I Recommend It
Pornx.ai offers an unparalleled experience for users who wish to bring their fantasies to life. The platform's innovative tools and features make it easy to customize and generate unique content, ensuring that every user can create something truly special.
### Key Features
AI Image Generator: Create personalized images by selecting models, body types, and backgrounds.
Quality Mode: Enhance your images with options for Base, High, and Ultra quality settings.
Custom Pose: Transfer character poses from your images to generated content effortlessly.
In Paint: Modify specific areas of your images to achieve the desired look.
### My Experience
Using Pornx.ai has been an exciting journey. The intuitive design made it easy to navigate, and the results were impressive. I was able to create visuals that perfectly matched my imagination, making the experience both enjoyable and fulfilling.
### Pros
Extensive customization options allow for limitless creativity.
High-quality output enhances the overall visual experience.
### Cons
Some features may require a paid subscription for full access.
⏩⏩⏩[**Try Pornx.ai For Free**](https://pornx.co?ref=nwm1ymm)
## 2. Seduced.ai
### Why I Recommend Seduced.ai
Seduced.ai stands out as the best AI porn generator available today. It offers a unique blend of user-friendliness and extensive customization options, making it accessible for everyone, regardless of technical expertise. The platform allows users to explore their fantasies and create personalized content effortlessly.
⏩⏩⏩[**Try Seduced.ai For Free**](https://pornx.co?ref=nwm1ymm)

### Key Features
Extensive Fetish Support: Seduced.ai covers a wide range of fetishes, allowing users to generate content that caters to their specific desires.
Video Generation: Users can create short porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Character Reusability: The platform allows users to save and reuse previously generated characters, enhancing creativity and continuity in content creation.
High-Quality Output: Seduced.ai provides options for upscaling images, ensuring that the generated content is not only unique but also visually appealing.
### My Experience
Using Seduced.ai has been a delightful experience. The interface is intuitive, making it easy to navigate through various options. I was able to generate high-quality images and videos quickly, which exceeded my expectations. The customization options allowed me to explore different scenarios and characters effortlessly.
### Pros
Easy to use, with no technical skills required.
Offers a vast array of extensions for unique content creation.
### Cons
Some features may require a subscription for full access.
⏩⏩⏩[**Try Seduced.ai For Free**](https://pornx.co?ref=nwm1ymm)
## 3. Porngen.art
PornGen.art is a revolutionary platform that utilizes advanced artificial intelligence to create highly realistic and customizable pornographic images. This AI porn generator allows users to bring their fantasies to life, whether it's a dream character or a specific scenario. With its user-friendly interface and powerful algorithms, PornGen.art stands out as one of the best options available in the market.
### Why I Recommend It
PornGen.art is not just about generating images; it’s about creating personalized experiences. The platform prioritizes user privacy and offers a variety of customization options, making it a top choice for those looking to explore their fantasies safely and creatively.
### Key Features
Realistic Image Generation: Utilizes deep learning algorithms to create lifelike images.
Customizable Options: Users can adjust body type, hair, ethnicity, and more to fit their desires.
Privacy Protection: All uploaded images are confidential and deleted within 48 hours.
Multiple Styles: Explore various genres, including hentai, anime, and furry art.
### My Experience
Using PornGen.art has been an exciting journey. The ease of uploading images and the speed of generation amazed me. The results were impressive, and I appreciated the level of customization available.
### Pros
High-quality, realistic images that cater to diverse preferences.
Strong emphasis on user privacy and data security.
### Cons
Results can vary significantly based on the quality of the uploaded images.
## 4. Pornjourney.ai
PornJourney.ai stands out as the best AI porn generator available today, offering users an unparalleled experience in creating customized adult content. I recommend it for its advanced technology, user-friendly interface, and commitment to privacy and security. The platform allows users to generate images that cater to their specific preferences, making it a favorite among enthusiasts.
### Key Features
Fast Generation: Dedicated server clusters ensure quick image creation for premium users.
'Keep This Girl' Feature: Retain and modify the features of your favorite AI-generated characters.
Image Library: Save images and their metadata for easy access and modifications.
Privacy Protection: All images are encrypted, ensuring user data remains secure and private.
### My Experience
Using PornJourney.ai has been a delightful experience. The image generation process is seamless, and the results are incredibly realistic. I appreciate the variety of customization options available, allowing me to create characters that truly match my preferences.
### Pros
Exceptional realism and detail in generated images.
Regular updates with new features and content every weekend.
### Cons
AI porn videos are still in beta, which may lead to occasional instability.
## 5. Pornjoy.ai
PornJoy.ai stands out as the premier AI porn generator, offering users an innovative platform to create and customize adult content effortlessly. I recommend it for its user-friendly interface and extensive customization options that cater to a wide range of fantasies.
### Why I Recommend It
PornJoy.ai provides a unique blend of creativity and privacy, allowing users to explore their desires in a safe environment. The platform's advanced AI technology ensures high-quality images that truly reflect individual preferences.
### Key Features
AI Porn Generator: Create personalized porn images by selecting body types, skin tones, hairstyles, and outfits.
AI Porn Chat: Engage in steamy conversations with customizable AI characters, enhancing the interactive experience.
AI Hentai Generator: Quickly generate unique hentai images tailored to your specific desires.
Undress AI Generator: Transform dressed images into AI nudes, allowing for creative modifications and adjustments.
### My Experience
Using PornJoy.ai has been a delightful experience. The intuitive design made it easy to navigate, and the variety of customization options allowed me to create images that perfectly matched my fantasies.
### Pros
High-quality, realistic AI-generated images.
Strong emphasis on user privacy and data protection.
### Cons
Some features may require a learning curve for new users.
## 6. Pornpen.ai
### Why I Recommend It
I recommend Pornpen.ai for its ability to generate high-quality, personalized adult content that caters to diverse tastes. The user-friendly interface and impressive customization options make it accessible for everyone, regardless of their experience level.
### Key Features
Customizable Content: Users can specify their preferences, ensuring the generated content aligns with their desires.
High-Quality Graphics: The platform produces visually appealing images and videos that enhance the overall experience.
Privacy Protection: Pornpen.ai prioritizes user privacy, ensuring that all interactions remain confidential.
Regular Updates: The platform frequently updates its algorithms to improve content quality and user experience.
### My Experience
My experience with Pornpen.ai has been overwhelmingly positive. The platform is easy to navigate, and I was impressed by the quality of the generated content. The customization options allowed me to explore various themes, making it a fun and engaging experience.
### Pros
Innovative Technology: The AI behind Pornpen.ai is cutting-edge, producing unique content that is hard to find elsewhere.
User-Friendly Interface: The platform is designed for ease of use, making it accessible for all users.
### Cons
One downside is that the generated content may not always meet expectations, as it relies on algorithms that can sometimes produce unexpected results.
## 7. Candy.ai
### Why I Recommend It
Candy.ai is highly recommended for its ability to blend intimacy, creativity, and personalization. Users can explore various fantasies and customize their AI girlfriend to meet their desires, ensuring a fulfilling experience.
### Key Features
Customizable AI Girlfriend: Users can design their girlfriend's body type, personality, and clothing, creating a truly unique companion.
Interactive Experience: The AI girlfriend listens, responds quickly, and can even follow photo requests, making interactions feel genuine.
Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring all interactions remain confidential.
Endless Possibilities: Users can explore various scenarios, from romantic chats to intense AI sexting, catering to all preferences.
### My Experience
Using Candy.ai has been an enjoyable journey. The customization options allowed me to create a girlfriend that truly resonates with my desires. The interactions felt real, and I appreciated the privacy measures in place.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
## Frequently Asked Questions (FAQS)
### 1. What is AI porn?
AI porn refers to adult content created or enhanced using artificial intelligence technologies. This can include generating realistic images, videos, or deepfakes of individuals, often without their consent. AI porn leverages machine learning algorithms to manipulate or create explicit content that can appear highly authentic.
### 2. How does AI porn work?
AI porn typically relies on deep learning techniques, such as Generative Adversarial Networks (GANs) or diffusion models. These algorithms are trained on large datasets of images and videos to learn patterns and generate new content. For example:
Deepfakes: AI swaps faces in existing videos to make it appear as though someone is performing in a pornographic video.
Image generation: AI creates entirely synthetic images or videos of people who may not exist.
Enhancement: AI improves the quality of existing content, making it more realistic.
### 3. Can AI porn generators create realistic content?
Yes, AI porn generators can create highly realistic content. Advances in AI technology, particularly with GANs and diffusion models, have made it possible to produce images and videos that are nearly indistinguishable from real footage. However, the quality depends on the sophistication of the AI model and the data it was trained on.
### 4. Are there ethical and privacy concerns regarding AI porn?
Yes, AI porn raises significant ethical and privacy concerns:
Non-consensual content: Many AI porn creations involve using someone's likeness without their permission, which is a violation of privacy and consent.
Misuse and exploitation: AI porn can be used for harassment, revenge porn, or blackmail, causing emotional and psychological harm to victims.
Legal gray areas: Laws around AI-generated explicit content are still evolving, making it difficult to regulate or hold perpetrators accountable.
Impact on society: The proliferation of AI porn could normalize non-consensual content and contribute to the objectification of individuals.
|
YakovElm/Hyperledger5SetFitModel_clean_data | YakovElm | "2023-05-23T23:45:10Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-05-23T23:44:35Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Hyperledger5SetFitModel_clean_data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Hyperledger5SetFitModel_clean_data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
TareksLab/DM-MERGE4f | TareksLab | "2025-03-17T05:31:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:ReadyArt/Forgotten-Safeword-70B-3.6",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-3.6",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:allura-org/Bigger-Body-70b",
"base_model:merge:allura-org/Bigger-Body-70b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-17T04:54:52Z" | ---
base_model:
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- allura-org/Bigger-Body-70b
- ReadyArt/Forgotten-Safeword-70B-3.6
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as a base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [allura-org/Bigger-Body-70b](https://huggingface.co/allura-org/Bigger-Body-70b)
* [ReadyArt/Forgotten-Safeword-70B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-3.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 0.30
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: ReadyArt/Forgotten-Safeword-70B-3.6
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: allura-org/Bigger-Body-70b
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.30
density: 0.7
epsilon: 0.1
lambda: 1.0
merge_method: della_linear
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
tokenizer:
source: base
```
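To reproduce a merge from a configuration like this, mergekit exposes a `mergekit-yaml` command; the sketch below shells out to it from Python, with placeholder paths:
```python
# Hypothetical reproduction sketch: config path and output directory are placeholders.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./merged-model", "--cuda"],
    check=True,
)
```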
|
huggingtweets/bio_bootloader-eigenrobot-tszzl | huggingtweets | "2023-04-16T18:07:00Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-16T18:06:52Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1639993775664640000/ELpnmr86_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572784789291401216/1WrwslUF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1612191872918913024/d7QadaBs_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">eigenrobot & roon & BioBootloader</div>
<div style="text-align: center; font-size: 14px;">@bio_bootloader-eigenrobot-tszzl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from eigenrobot & roon & BioBootloader.
| Data | eigenrobot | roon | BioBootloader |
| --- | --- | --- | --- |
| Tweets downloaded | 3233 | 3207 | 2723 |
| Retweets | 146 | 869 | 73 |
| Short tweets | 628 | 299 | 400 |
| Tweets kept | 2459 | 2039 | 2250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jl4y896r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bio_bootloader-eigenrobot-tszzl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5iriqca4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5iriqca4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bio_bootloader-eigenrobot-tszzl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mmnga/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf | mmnga | "2023-12-27T11:39:18Z" | 1,309 | 22 | null | [
"gguf",
"llama2",
"ja",
"arxiv:2307.09288",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2023-12-27T09:46:04Z" | ---
license: llama2
language:
- ja
tags:
- llama2
---
# ELYZA-japanese-Llama-2-13b-fast-instruct-gguf
This is a GGUF-format conversion of [ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/ELYZA/ELYZA-japanese-Llama-2-13b-fast-instruct), published by ELYZA.
Other models are listed below.
Standard version: models trained on a Japanese dataset on top of Llama 2
[mmnga/ELYZA-japanese-Llama-2-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf)
Fast version: models with added Japanese vocabulary that reduce token cost and run 1.8x faster
[mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf)
[mmnga/ELYZA-japanese-Llama-2-13b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-13b-fast-gguf)
[mmnga/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf)
CodeLlama version (GGUF)
[mmnga/ELYZA-japanese-CodeLlama-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-gguf)
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf)
CodeLlama version (GPTQ)
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'ELYZA-japanese-Llama-2-13b-fast-instruct-q4_0.gguf' -n 256 -p '[INST] <<SYS>>あなたは誠実で優秀な日本人のアシスタントです。<</SYS>>クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。 [/INST]'
```
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### Citations
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-13b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba},
year={2023},
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lucyknada/oxyapi_oxy-1-small-exl2 | lucyknada | "2024-12-08T10:18:01Z" | 8 | 0 | transformers | [
"transformers",
"role-play",
"fine-tuned",
"qwen2.5",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-08T09:17:41Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- role-play
- fine-tuned
- qwen2.5
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
model-index:
- name: oxy-1-small
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 62.45
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 41.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 18.28
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.22
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.28
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.45
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=oxyapi/oxy-1-small
name: Open LLM Leaderboard
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---

## Introduction
**Oxy 1 Small** is a fine-tuned version of the [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) language model, specialized for **role-play** scenarios. Despite its small size, it delivers impressive performance in generating engaging dialogues and interactive storytelling.
Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Small aims to provide an accessible and efficient language model for creative and immersive role-play experiences.
## Model Details
- **Model Name**: Oxy 1 Small
- **Model ID**: [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
- **Base Model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Model Type**: Chat Completions
- **Prompt Format**: ChatML
- **License**: Apache-2.0
- **Language**: English
- **Tokenizer**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Max Input Tokens**: 32,768
- **Max Output Tokens**: 8,192
### Features
- **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
- **Efficient**: Compact model size allows for faster inference and reduced computational resources.
- **Parameter Support**:
- `temperature`
- `top_p`
- `top_k`
- `frequency_penalty`
- `presence_penalty`
- `max_tokens`
### Metadata
- **Owned by**: Oxygen (oxyapi)
- **Contributors**: TornadoSoftwares
- **Description**: A Qwen/Qwen2.5-14B-Instruct fine-tune for role-play trained on custom datasets
## Usage
To utilize Oxy 1 Small for text generation in role-play scenarios, you can load the model using the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")
prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Performance
Performance benchmarks for Oxy 1 Small are not available at this time. Future updates may include detailed evaluations on relevant datasets.
## License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Citation
If you find Oxy 1 Small useful in your research or applications, please cite it as:
```
@misc{oxy1small2024,
title={Oxy 1 Small: A Fine-Tuned Qwen2.5-14B-Instruct Model for Role-Play},
author={Oxygen (oxyapi)},
year={2024},
howpublished={\url{https://huggingface.co/oxyapi/oxy-1-small}},
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_oxyapi__oxy-1-small)
| Metric |Value|
|-------------------|----:|
|Avg. |33.14|
|IFEval (0-Shot) |62.45|
|BBH (3-Shot) |41.18|
|MATH Lvl 5 (4-Shot)|18.28|
|GPQA (0-shot) |16.22|
|MuSR (0-shot) |16.28|
|MMLU-PRO (5-shot) |44.45|
|
imdatta0/llama_2_13b_Magiccoder_evol_10k | imdatta0 | "2024-06-11T11:53:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"license:llama2",
"region:us"
] | null | "2024-06-11T08:32:19Z" | ---
license: llama2
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: meta-llama/Llama-2-13b-hf
model-index:
- name: llama_2_13b_Magiccoder_evol_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_2_13b_Magiccoder_evol_10k
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2459 | 0.0262 | 4 | 1.2861 |
| 1.2388 | 0.0523 | 8 | 1.2259 |
| 1.1411 | 0.0785 | 12 | 1.1833 |
| 1.0897 | 0.1047 | 16 | 1.1669 |
| 1.1171 | 0.1308 | 20 | 1.1500 |
| 1.0835 | 0.1570 | 24 | 1.1420 |
| 1.0782 | 0.1832 | 28 | 1.1362 |
| 1.1353 | 0.2093 | 32 | 1.1333 |
| 1.0558 | 0.2355 | 36 | 1.1298 |
| 1.1398 | 0.2617 | 40 | 1.1281 |
| 1.1114 | 0.2878 | 44 | 1.1244 |
| 1.1543 | 0.3140 | 48 | 1.1219 |
| 1.1327 | 0.3401 | 52 | 1.1189 |
| 1.1016 | 0.3663 | 56 | 1.1179 |
| 1.1543 | 0.3925 | 60 | 1.1173 |
| 1.1484 | 0.4186 | 64 | 1.1153 |
| 1.095 | 0.4448 | 68 | 1.1130 |
| 1.1118 | 0.4710 | 72 | 1.1109 |
| 1.0624 | 0.4971 | 76 | 1.1103 |
| 1.1475 | 0.5233 | 80 | 1.1093 |
| 1.161 | 0.5495 | 84 | 1.1094 |
| 1.1018 | 0.5756 | 88 | 1.1091 |
| 1.0541 | 0.6018 | 92 | 1.1065 |
| 1.054 | 0.6280 | 96 | 1.1055 |
| 1.1113 | 0.6541 | 100 | 1.1055 |
| 1.0971 | 0.6803 | 104 | 1.1053 |
| 1.0903 | 0.7065 | 108 | 1.1054 |
| 1.1206 | 0.7326 | 112 | 1.1052 |
| 1.0687 | 0.7588 | 116 | 1.1048 |
| 1.0892 | 0.7850 | 120 | 1.1043 |
| 1.1158 | 0.8111 | 124 | 1.1041 |
| 1.0789 | 0.8373 | 128 | 1.1042 |
| 1.0154 | 0.8635 | 132 | 1.1044 |
| 1.1258 | 0.8896 | 136 | 1.1044 |
| 1.0419 | 0.9158 | 140 | 1.1044 |
| 1.0886 | 0.9419 | 144 | 1.1044 |
| 1.1031 | 0.9681 | 148 | 1.1044 |
| 1.0979 | 0.9943 | 152 | 1.1044 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
TFOCUS/RW-kg_6 | TFOCUS | "2025-03-20T10:28:08Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-20T10:13:07Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lesso14/72ec705b-fd96-44f3-b7ef-eee6aabaa4fd | lesso14 | "2025-02-18T01:52:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T01:29:02Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 72ec705b-fd96-44f3-b7ef-eee6aabaa4fd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 72ec705b-fd96-44f3-b7ef-eee6aabaa4fd
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000214
- train_batch_size: 4
- eval_batch_size: 4
- seed: 140
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 8.5252 |
| 1.9237 | 0.0125 | 50 | 1.9283 |
| 1.8814 | 0.0249 | 100 | 2.0788 |
| 1.8549 | 0.0374 | 150 | 1.9502 |
| 1.9638 | 0.0499 | 200 | 2.0023 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
drichjsph/legalbert_finetuned | drichjsph | "2025-02-28T14:26:16Z" | 0 | 0 | null | [
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | "2025-02-28T14:12:50Z" | ---
license: apache-2.0
---
|
error577/c4ebcfbd-bc6b-482f-a672-b819a9fbab67 | error577 | "2025-01-24T08:37:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-24T08:08:58Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4ebcfbd-bc6b-482f-a672-b819a9fbab67
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: qlora
base_model: unsloth/codegemma-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d57373015f0200ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d57373015f0200ac_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: error577/c4ebcfbd-bc6b-482f-a672-b819a9fbab67
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 1
mlflow_experiment_name: /tmp/d57373015f0200ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 4
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 256
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: c2e858ef-72e0-466b-ac1a-9bdca7d0809c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c2e858ef-72e0-466b-ac1a-9bdca7d0809c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c4ebcfbd-bc6b-482f-a672-b819a9fbab67
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8844 | 0.0007 | 1 | 0.9726 |
| 0.7625 | 0.0166 | 25 | 0.7847 |
| 0.935 | 0.0332 | 50 | 0.7717 |
| 0.7069 | 0.0498 | 75 | 0.7687 |
| 0.6664 | 0.0664 | 100 | 0.7698 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF | tensorblock | "2024-11-17T02:56:33Z" | 10 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:princeton-nlp/Llama-3-Base-8B-SFT-IPO",
"base_model:quantized:princeton-nlp/Llama-3-Base-8B-SFT-IPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-17T02:28:07Z" | ---
base_model: princeton-nlp/Llama-3-Base-8B-SFT-IPO
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## princeton-nlp/Llama-3-Base-8B-SFT-IPO - GGUF
This repo contains GGUF format model files for [princeton-nlp/Llama-3-Base-8B-SFT-IPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-IPO).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
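As an illustration only (not an official TensorBlock example), the template above can be filled in and run locally with `llama-cpp-python`; the file name, context size, and prompts in this sketch are assumptions:

```python
# Sketch: fill the Llama-3 prompt template and run a downloaded quant with
# llama-cpp-python. Model file, n_ctx, and sampling settings are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-Base-8B-SFT-IPO-Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful assistant."
prompt = "Explain what IPO training does in one paragraph."
formatted = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(formatted, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```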
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Base-8B-SFT-IPO-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-Base-8B-SFT-IPO-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Llama-3-Base-8B-SFT-IPO-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Llama-3-Base-8B-SFT-IPO-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Llama-3-Base-8B-SFT-IPO-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-Base-8B-SFT-IPO-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Llama-3-Base-8B-SFT-IPO-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Llama-3-Base-8B-SFT-IPO-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-Base-8B-SFT-IPO-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Llama-3-Base-8B-SFT-IPO-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Llama-3-Base-8B-SFT-IPO-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Llama-3-Base-8B-SFT-IPO-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF/blob/main/Llama-3-Base-8B-SFT-IPO-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF --include "Llama-3-Base-8B-SFT-IPO-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
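The same download can also be done from Python with `huggingface_hub` (a sketch; the local directory name is a placeholder):

```python
# Sketch: download a single quant file from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="tensorblock/Llama-3-Base-8B-SFT-IPO-GGUF",
    filename="Llama-3-Base-8B-SFT-IPO-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
```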
|
minhtrannnn/c0eeb507-89a3-45a5-887e-927ec11f1552 | minhtrannnn | "2025-01-22T07:56:12Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T07:35:25Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c0eeb507-89a3-45a5-887e-927ec11f1552
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1d6f76e87d074e8a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1d6f76e87d074e8a_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/c0eeb507-89a3-45a5-887e-927ec11f1552
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1d6f76e87d074e8a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5222b8b-063e-4d75-b9f0-5ea50ea7bc58
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d5222b8b-063e-4d75-b9f0-5ea50ea7bc58
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c0eeb507-89a3-45a5-887e-927ec11f1552
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3923 | 0.1033 | 200 | 0.5336 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LarryAIDraw/riselia-fi-000009 | LarryAIDraw | "2023-12-10T15:59:40Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-10T15:53:01Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228529/riselia-ray-crystalia |
TobiTob/decision_transformer_random4 | TobiTob | "2023-03-01T21:40:12Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | "2023-03-01T21:02:43Z" | ---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_random4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_random4
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
macabdul9/ArLlama-2-7b-hf-2m-cpt | macabdul9 | "2024-05-25T00:43:22Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T00:37:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/pasanctuary-sdxl-illustriousxl-v40-sdxl | John6666 | "2024-12-23T06:53:35Z" | 161 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"photorealistic",
"scenario",
"sharp",
"backgrounds",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-12-03T07:27:48Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- photorealistic
- scenario
- sharp
- backgrounds
- illustrious
base_model: Laxhar/noobai-XL-1.0
---
Original model is [here](https://civitai.com/models/835578/pasanctuary-sdxl-illustriousxl?modelVersionId=1123094).
This model was created by [FallenIncursio](https://civitai.com/user/FallenIncursio).
|
javadKV8/detr-finetuned-cppe-5-10k-steps | javadKV8 | "2025-03-24T16:45:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2025-03-24T16:44:29Z" | |
Rudolph314/ppo-SnowballTarget | Rudolph314 | "2024-04-16T09:37:47Z" | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-04-16T09:37:45Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial for learning to train your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rudolph314/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DOOGLAK/Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | "2022-08-11T16:21:20Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:tagged_one500v3_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-08-11T16:16:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v3_wikigold_split
type: tagged_one500v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.697499143542309
- name: Recall
type: recall
value: 0.6782145236508994
- name: F1
type: f1
value: 0.6877216686370546
- name: Accuracy
type: accuracy
value: 0.9245400105495051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Precision: 0.6975
- Recall: 0.6782
- F1: 0.6877
- Accuracy: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 175 | 0.2990 | 0.5405 | 0.4600 | 0.4970 | 0.9007 |
| No log | 2.0 | 350 | 0.2789 | 0.6837 | 0.6236 | 0.6523 | 0.9157 |
| 0.1081 | 3.0 | 525 | 0.2659 | 0.6975 | 0.6782 | 0.6877 | 0.9245 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
marcel/phixtral-4x2_8-gates-poc | marcel | "2024-01-13T09:10:40Z" | 9 | 4 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"moe",
"nlp",
"code",
"cognitivecomputations/dolphin-2_6-phi-2",
"lxuechen/phi-2-dpo",
"Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"mrm8488/phi-2-coder",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-12T19:10:49Z" | ---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- moe
- nlp
- code
- cognitivecomputations/dolphin-2_6-phi-2
- lxuechen/phi-2-dpo
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
---

# phixtral-4x2_8-gates-poc
phixtral-4x2_8-gates-poc is [phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)
with finetuned gates for better expert selection and to break the symmetry.
As a proof of concept (POC), we used only 400 shorter samples
from [openhermes](https://huggingface.co/datasets/teknium/openhermes).
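Since only the routing is meant to change, one way to reproduce this setup is to freeze every weight except the gates. A minimal sketch of that idea, assuming (as in phixtral's MoE implementation) that only the router parameters carry "gate" in their names:

```python
# Sketch: gate-only finetuning. Assumption: the router weights are the only
# parameters whose names contain "gate".
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/phixtral-4x2_8", torch_dtype=torch.float16, trust_remote_code=True
)

# Freeze everything, then unfreeze only the per-layer gate/router weights so
# training can break the symmetry between the identically-behaving experts.
for name, param in model.named_parameters():
    param.requires_grad = "gate" in name

print([n for n, p in model.named_parameters() if p.requires_grad])
```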
phixtral-4x2_8 is the first Mixture of Experts (MoE) made with four [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) models, inspired by the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) architecture. It performs better than each individual expert.
## 🏆 Evaluation
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**phixtral-4x2_8**](https://huggingface.co/mlabonne/phixtral-4x2_8)| **33.91**| **70.44**| **48.78**| **37.68**| **47.7**|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.12| 69.85| 47.39| 37.2| 46.89|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)| 30.39| 71.68| 50.75| 34.9| 46.93|
|[phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)| 30.61| 71.13| 48.74| 35.23| 46.43|
|[phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)| TBD| TBD| TBD| TBD| TBD|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.98| 70.8| 44.43| 35.21| 44.61|
Check [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) to compare it with other models.
## 🧩 Configuration
The model has been made with a custom version of the [mergekit](https://github.com/cg123/mergekit) library (mixtral branch) and the following configuration:
```yaml
base_model: cognitivecomputations/dolphin-2_6-phi-2
gate_mode: cheap_embed
experts:
- source_model: cognitivecomputations/dolphin-2_6-phi-2
positive_prompts: [""]
- source_model: lxuechen/phi-2-dpo
positive_prompts: [""]
- source_model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
positive_prompts: [""]
- source_model: mrm8488/phi-2-coder
positive_prompts: [""]
```
## 💻 Usage
Here's a [Colab notebook](https://colab.research.google.com/drive/1k6C_oJfEKUq0mtuWKisvoeMHxTcIxWRa?usp=sharing) to run Phixtral in 4-bit precision on a free T4 GPU.
```python
!pip install -q --upgrade transformers einops accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "phixtral-4x2_8"
instruction = '''
def print_prime(n):
"""
Print all primes between 1 and n
"""
'''
torch.set_default_device("cuda")
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
f"mlabonne/{model_name}",
torch_dtype="auto",
load_in_4bit=True,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
f"mlabonne/{model_name}",
trust_remote_code=True
)
# Tokenize the input string
inputs = tokenizer(
instruction,
return_tensors="pt",
return_attention_mask=False
)
# Generate text using the model
outputs = model.generate(**inputs, max_length=200)
# Decode and print the output
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
Inspired by [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), you can specify the `num_experts_per_tok` and `num_local_experts` in the [`config.json`](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/config.json#L26-L27) file (2 and 4 by default). This configuration is automatically loaded in `configuration.py`.
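For illustration, a small sketch of editing those two values on a local copy of the model (the path is an assumption; point it at your downloaded snapshot):

```python
# Sketch: tweak the MoE routing settings in a local config.json copy.
import json

cfg_path = "phixtral-4x2_8/config.json"  # assumed local snapshot path
with open(cfg_path) as f:
    cfg = json.load(f)

cfg["num_experts_per_tok"] = 2  # experts consulted per token
cfg["num_local_experts"] = 4    # total experts in the model

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```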
[vince62s](https://huggingface.co/vince62s) implemented the MoE inference code in the `modeling_phi.py` file. In particular, see the [MoE class](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/modeling_phi.py#L293-L317).
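In essence, such a layer routes each token through the top-k experts picked by a small gating network. A simplified, self-contained sketch of that pattern (not the exact `modeling_phi.py` code):

```python
# Sketch: top-k routed mixture-of-experts layer, simplified for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts, bias=False)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.gate(x)                             # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)    # pick top-k experts
        weights = F.softmax(weights, dim=-1)              # normalize their weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                     # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```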
## 🤝 Acknowledgments
A special thanks to [vince62s](https://huggingface.co/vince62s) for the inference code and the dynamic configuration of the number of experts. He was very patient and helped me to debug everything.
Thanks to [Charles Goddard](https://github.com/cg123) for the [mergekit](https://github.com/cg123/mergekit) library and the implementation of the [MoE for clowns](https://goddard.blog/posts/clown-moe/).
Thanks to [ehartford](https://huggingface.co/ehartford), [lxuechen](https://huggingface.co/lxuechen), [Yhyu13](https://huggingface.co/Yhyu13), and [mrm8488](https://huggingface.co/mrm8488) for their fine-tuned phi-2 models.
|
mradermacher/8-goldfish-loss-llama-1B-GGUF | mradermacher | "2024-08-20T10:32:47Z" | 22 | 0 | transformers | [
"transformers",
"gguf",
"goldfish-loss",
"memorization",
"mitigation",
"en",
"dataset:tomg-group-umd/wikipedia-en-2k-samples",
"base_model:tomg-group-umd/8-goldfish-loss-llama-1B",
"base_model:quantized:tomg-group-umd/8-goldfish-loss-llama-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-08-20T10:20:26Z" | ---
base_model: tomg-group-umd/8-goldfish-loss-llama-1B
datasets:
- tomg-group-umd/wikipedia-en-2k-samples
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- goldfish-loss
- memorization
- mitigation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tomg-group-umd/8-goldfish-loss-llama-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/8-goldfish-loss-llama-1B-GGUF/resolve/main/8-goldfish-loss-llama-1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thetayne/finetuned_model_0613 | thetayne | "2024-06-13T16:31:03Z" | 11 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1625",
"loss:CosineSimilarityLoss",
"en",
"arxiv:1908.10084",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-13T16:30:47Z" | ---
language:
- en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1625
- loss:CosineSimilarityLoss
base_model: BAAI/bge-base-en-v1.5
datasets: []
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: Boron Steel
sentences:
- Rock Bit
- Spalling Test
- Excavator Bucket
- source_sentence: Friction Wear
sentences:
- Tool Steel
- Medium Carbon Steel
- Diffusion Bonding
- source_sentence: Delamination
sentences:
- Subsea Christmas Tree
- Low Alloyed Steel
- Screw Conveyors
- source_sentence: Nitriding
sentences:
- Subsea Manifold
- Trencher Chain
- Cylinder
- source_sentence: Corrosion Resistant Coatings
sentences:
- Mower Blade
- Gas Metal Arc Welding (GMAW)
- Corrosion Resistant Coatings
pipeline_tag: sentence-similarity
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dim 768
type: dim_768
metrics:
- type: pearson_cosine
value: 0.9548051644723275
name: Pearson Cosine
- type: spearman_cosine
value: 0.6620048542679903
name: Spearman Cosine
- type: pearson_manhattan
value: 0.985909077336812
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6620048542679903
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9863519709955113
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6620048542679903
name: Spearman Euclidean
- type: pearson_dot
value: 0.9548051701614557
name: Pearson Dot
- type: spearman_dot
value: 0.6610658947764548
name: Spearman Dot
- type: pearson_max
value: 0.9863519709955113
name: Pearson Max
- type: spearman_max
value: 0.6620048542679903
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dim 512
type: dim_512
metrics:
- type: pearson_cosine
value: 0.9544417196413574
name: Pearson Cosine
- type: spearman_cosine
value: 0.6620048542679903
name: Spearman Cosine
- type: pearson_manhattan
value: 0.9855825558550574
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6620048542679903
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9862004412296757
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6620048542679903
name: Spearman Euclidean
- type: pearson_dot
value: 0.9501184326722917
name: Pearson Dot
- type: spearman_dot
value: 0.6607798700248341
name: Spearman Dot
- type: pearson_max
value: 0.9862004412296757
name: Pearson Max
- type: spearman_max
value: 0.6620048542679903
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dim 256
type: dim_256
metrics:
- type: pearson_cosine
value: 0.9494511778471465
name: Pearson Cosine
- type: spearman_cosine
value: 0.6620048542679903
name: Spearman Cosine
- type: pearson_manhattan
value: 0.9830259644213172
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6620048542679903
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9835562939431381
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6620048542679903
name: Spearman Euclidean
- type: pearson_dot
value: 0.9469313992827345
name: Pearson Dot
- type: spearman_dot
value: 0.6607798700248341
name: Spearman Dot
- type: pearson_max
value: 0.9835562939431381
name: Pearson Max
- type: spearman_max
value: 0.6620048542679903
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dim 128
type: dim_128
metrics:
- type: pearson_cosine
value: 0.9397052405386266
name: Pearson Cosine
- type: spearman_cosine
value: 0.6620048542679903
name: Spearman Cosine
- type: pearson_manhattan
value: 0.9762184586055923
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6620048542679903
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9781975526221939
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6620048542679903
name: Spearman Euclidean
- type: pearson_dot
value: 0.9271211389022183
name: Pearson Dot
- type: spearman_dot
value: 0.6607798700248341
name: Spearman Dot
- type: pearson_max
value: 0.9781975526221939
name: Pearson Max
- type: spearman_max
value: 0.6620048542679903
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dim 64
type: dim_64
metrics:
- type: pearson_cosine
value: 0.9149032642312528
name: Pearson Cosine
- type: spearman_cosine
value: 0.6620048542679903
name: Spearman Cosine
- type: pearson_manhattan
value: 0.968215524939354
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6620048542679903
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.9708485057392984
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6620048542679903
name: Spearman Euclidean
- type: pearson_dot
value: 0.8940456314300972
name: Pearson Dot
- type: spearman_dot
value: 0.6602255244962898
name: Spearman Dot
- type: pearson_max
value: 0.9708485057392984
name: Pearson Max
- type: spearman_max
value: 0.6620048542679903
name: Spearman Max
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("thetayne/finetuned_model_0613")
# Run inference
sentences = [
'Corrosion Resistant Coatings',
'Corrosion Resistant Coatings',
'Mower Blade',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `dim_768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9548 |
| **spearman_cosine** | **0.662** |
| pearson_manhattan | 0.9859 |
| spearman_manhattan | 0.662 |
| pearson_euclidean | 0.9864 |
| spearman_euclidean | 0.662 |
| pearson_dot | 0.9548 |
| spearman_dot | 0.6611 |
| pearson_max | 0.9864 |
| spearman_max | 0.662 |
#### Semantic Similarity
* Dataset: `dim_512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9544 |
| **spearman_cosine** | **0.662** |
| pearson_manhattan | 0.9856 |
| spearman_manhattan | 0.662 |
| pearson_euclidean | 0.9862 |
| spearman_euclidean | 0.662 |
| pearson_dot | 0.9501 |
| spearman_dot | 0.6608 |
| pearson_max | 0.9862 |
| spearman_max | 0.662 |
#### Semantic Similarity
* Dataset: `dim_256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9495 |
| **spearman_cosine** | **0.662** |
| pearson_manhattan | 0.983 |
| spearman_manhattan | 0.662 |
| pearson_euclidean | 0.9836 |
| spearman_euclidean | 0.662 |
| pearson_dot | 0.9469 |
| spearman_dot | 0.6608 |
| pearson_max | 0.9836 |
| spearman_max | 0.662 |
#### Semantic Similarity
* Dataset: `dim_128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9397 |
| **spearman_cosine** | **0.662** |
| pearson_manhattan | 0.9762 |
| spearman_manhattan | 0.662 |
| pearson_euclidean | 0.9782 |
| spearman_euclidean | 0.662 |
| pearson_dot | 0.9271 |
| spearman_dot | 0.6608 |
| pearson_max | 0.9782 |
| spearman_max | 0.662 |
#### Semantic Similarity
* Dataset: `dim_64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9149 |
| **spearman_cosine** | **0.662** |
| pearson_manhattan | 0.9682 |
| spearman_manhattan | 0.662 |
| pearson_euclidean | 0.9708 |
| spearman_euclidean | 0.662 |
| pearson_dot | 0.894 |
| spearman_dot | 0.6602 |
| pearson_max | 0.9708 |
| spearman_max | 0.662 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,625 training samples
* Columns: <code>sentence_A</code>, <code>sentence_B</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_A | sentence_B | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 3 tokens</li><li>mean: 5.68 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.73 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>0: ~83.30%</li><li>1: ~16.70%</li></ul> |
* Samples:
| sentence_A | sentence_B | score |
|:-----------------------------------|:--------------------------------------|:---------------|
| <code>Thermal Fatigue</code> | <code>Ferritic Stainless Steel</code> | <code>0</code> |
| <code>High Temperature Wear</code> | <code>Drill String</code> | <code>0</code> |
| <code>Carbide Coatings</code> | <code>Carbide Coatings</code> | <code>1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
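For reference, a minimal sketch of how this loss is typically wired up with the sentence-transformers v3 trainer (the tiny inline dataset is illustrative; the column layout follows the schema above):

```python
# Sketch: CosineSimilarityLoss (MSE on cosine scores) with the v3 trainer.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
train_dataset = Dataset.from_dict({
    "sentence_A": ["Carbide Coatings", "Thermal Fatigue"],
    "sentence_B": ["Carbide Coatings", "Drill String"],
    "score": [1.0, 0.0],  # treated as the label column
})
loss = losses.CosineSimilarityLoss(model)  # defaults to an MSE objective

SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss).train()
```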
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_spearman_cosine | dim_256_spearman_cosine | dim_512_spearman_cosine | dim_64_spearman_cosine | dim_768_spearman_cosine |
|:----------:|:------:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:-----------------------:|
| 0 | 0 | - | 0.6626 | 0.6626 | 0.6626 | 0.6626 | 0.6626 |
| 0.9412 | 3 | - | 0.6620 | 0.6620 | 0.6620 | 0.6620 | 0.6620 |
| 1.8627 | 6 | - | 0.6620 | 0.6620 | 0.6620 | 0.6620 | 0.6620 |
| 2.7843 | 9 | - | 0.6620 | 0.6620 | 0.6620 | 0.6620 | 0.6620 |
| 3.0784 | 10 | 0.156 | - | - | - | - | - |
| **3.7059** | **12** | **-** | **0.662** | **0.662** | **0.662** | **0.662** | **0.662** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
sd-dreambooth-library/chevron-texture | sd-dreambooth-library | "2023-08-29T11:18:40Z" | 20 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-29T11:15:42Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### chevron texture on Stable Diffusion via Dreambooth
#### model by uttam333
This is the Stable Diffusion model fine-tuned on the chevron texture concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<Chevron> texture**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
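For a quick local test, a minimal `diffusers` sketch (device, precision, and the example prompt are assumptions; the concept token follows the instance prompt above):

```python
# Sketch: load the concept pipeline and generate with the instance token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/chevron-texture", torch_dtype=torch.float16
).to("cuda")

image = pipe("a sofa upholstered in <Chevron> texture fabric").images[0]
image.save("chevron_sofa.png")
```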
Here are the images used for training this concept:








|
RichardErkhov/friendshipkim_-_Llama-3.2-1B-last-layer-4bits | RichardErkhov | "2025-03-14T19:46:04Z" | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-14T19:45:23Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-last-layer - bnb 4bits
- Model creator: https://huggingface.co/friendshipkim/
- Original model: https://huggingface.co/friendshipkim/Llama-3.2-1B-last-layer/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgnoi/GofeETX9KjN32UOs | hgnoi | "2024-05-25T12:25:07Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T12:22:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
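Since the card leaves this blank, here is a minimal sketch based on the repository's `stablelm`/text-generation tags; it assumes the standard 🤗 Transformers causal-LM API and is not an officially documented snippet.
```python
# Minimal sketch, assuming the standard transformers causal-LM API
# (the card itself documents no usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hgnoi/GofeETX9KjN32UOs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```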
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akhadangi/Mistral-7B-v0.1-6-0.01-Last | akhadangi | "2025-03-17T15:11:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-17T15:08:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
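As the card leaves this blank, here is a minimal sketch based on the repository's `mistral`/text-generation tags; it assumes the standard 🤗 Transformers causal-LM API and is not an officially documented snippet.
```python
# Minimal sketch, assuming the standard transformers causal-LM API
# (the card documents no usage; a GPU is advisable for a 7B model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akhadangi/Mistral-7B-v0.1-6-0.01-Last"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```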
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF | mradermacher | "2025-02-06T13:06:48Z" | 8,927 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored",
"base_model:quantized:braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-06T04:48:53Z" | ---
base_model: braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
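As a concrete example, a minimal llama.cpp invocation might look like the following; this is a sketch that assumes a local llama.cpp build and one of the quant files from the table below (the binary name can differ between releases).
```bash
# Sketch: run the Q4_K_M quant with llama.cpp's CLI (assumes a local build
# and a downloaded quant file from the table below).
./llama-cli -m DeepSeek-R1-Distill-Llama-8B-Uncensored.Q4_K_M.gguf -p "Hello," -n 128
```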
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Uncensored.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
slimaneMakh/BinarySuperClass_Equity_tableClassification_27may_distilBert_BASELINE | slimaneMakh | "2024-05-27T12:19:30Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-27T12:19:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
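As the card leaves this blank, here is a minimal sketch based on the `distilbert`/text-classification tags; the example input is illustrative, not taken from the model's documentation.
```python
# Minimal sketch, assuming the standard transformers pipeline API
# (the card itself documents no usage; the input text is illustrative).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="slimaneMakh/BinarySuperClass_Equity_tableClassification_27may_distilBert_BASELINE",
)
print(classifier("Total equity attributable to owners of the parent"))
```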
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso08/e0f5fdf8-e4da-426f-8a7b-4457d8530bb0 | lesso08 | "2025-03-06T04:03:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-03-05T07:48:52Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0f5fdf8-e4da-426f-8a7b-4457d8530bb0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# e0f5fdf8-e4da-426f-8a7b-4457d8530bb0
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1222
## Model description
More information needed
## Intended uses & limitations
More information needed
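In the absence of documented usage, a minimal loading sketch follows; it assumes, per the PEFT/LoRA tags, that this repository stores adapter weights for the base model.
```python
# Sketch: load the LoRA adapter on top of its base model with PEFT
# (assumes the repo stores adapter weights, as the tags indicate).
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Phi-3-medium-4k-instruct")
model = PeftModel.from_pretrained(base, "lesso08/e0f5fdf8-e4da-426f-8a7b-4457d8530bb0")
```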
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 0.7898 |
| 1.2524 | 0.4425 | 500 | 0.1682 |
| 1.0313 | 0.8851 | 1000 | 0.1280 |
| 0.7196 | 1.3280 | 1500 | 0.1222 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mav23/arco-plus-GGUF | mav23 | "2024-10-21T23:33:28Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:merge:h2oai/h2o-danube3-500m-base",
"endpoints_compatible",
"region:us"
] | null | "2024-10-21T23:24:39Z" | ---
base_model:
- appvoid/arco
- h2oai/h2o-danube3-500m-base
library_name: transformers
tags:
- mergekit
- merge
---
# arco+
This is an untrained passthrough model based on arco and danube, a first effort toward a small reasoning language model that generalizes across all kinds of reasoning tasks.
#### Benchmarks
| Parameters | Model | MMLU | ARC | HellaSwag | PIQA | Winogrande | Average |
| -----------|--------------------------------|-------|-------|-----------|--------|------------|---------|
| 488m | arco-lite | **23.22** | 33.45 | 56.55 | 69.70 | 59.19 | 48.46 |
| 773m | arco-plus | 23.06 | **36.43** | **60.09** | **72.36** | **60.46** | **50.48** |
#### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: appvoid/arco
layer_range: [0, 14]
- sources:
- model: h2oai/h2o-danube3-500m-base
layer_range: [4, 16]
merge_method: passthrough
dtype: float16
```
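To reproduce the merge, the configuration can be passed to mergekit's CLI; this is a sketch that assumes mergekit is installed and the YAML above is saved as `config.yaml`.
```bash
# Sketch: run the passthrough merge with mergekit
# (assumes the YAML above is saved locally as config.yaml).
pip install mergekit
mergekit-yaml config.yaml ./arco-plus
```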
|
DGurgurov/xlm-r_cym-latn | DGurgurov | "2025-03-27T18:39:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"cy",
"arxiv:2502.10140",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-03-27T18:13:04Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cym-Latn
results: []
language:
- cy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cym-Latn
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5126
- Accuracy: 0.8894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100000
### Citation Information
If you use this model in your work, please cite the following paper, which also provides more details on training and performance:
```bibtex
@misc{gurgurov2025smallmodelsbigimpact,
title={Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of Small Multilingual Language Models for Low-Resource Languages},
author={Daniil Gurgurov and Ivan Vykopal and Josef van Genabith and Simon Ostermann},
year={2025},
eprint={2502.10140},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.10140},
}
``` |
fangzhaoz/GSM8k_mistral_adalora_merged | fangzhaoz | "2024-03-22T03:46:36Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-22T03:40:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
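As the card leaves this blank, here is a minimal sketch based on the `mistral`/text-generation tags; the GSM8K-style prompt is illustrative, not from the model's documentation.
```python
# Minimal sketch, assuming the standard transformers causal-LM API
# (the card documents no usage; the math prompt is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fangzhaoz/GSM8k_mistral_adalora_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Q: A pen costs $2 and a notebook costs $3. How much do 4 pens and 2 notebooks cost?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```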
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidschulte/ESM_AI-Sweden__SuperLim_sweana | davidschulte | "2025-03-26T14:26:20Z" | 20 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:AI-Sweden/SuperLim",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-05T17:02:59Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- AI-Sweden/SuperLim
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM AI-Sweden/SuperLim
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** AI-Sweden/SuperLim
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** AI-Sweden/SuperLim
- **Subset [optional]:** sweana
- **Text Column:** a
- **Label Column:** relation
- **Dataset Split:** test
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning, and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Map, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
raufrajar/Phi-4-mini-instruct-4bit | raufrajar | "2025-02-27T05:25:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"mlx",
"conversational",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | "2025-02-27T05:23:57Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
- mlx
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
base_model: microsoft/Phi-4-mini-instruct
---
# raufrajar/Phi-4-mini-instruct-4bit
The model [raufrajar/Phi-4-mini-instruct-4bit](https://huggingface.co/raufrajar/Phi-4-mini-instruct-4bit) was
converted to MLX format from [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct)
using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("raufrajar/Phi-4-mini-instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mip016/rl-pole | mip016 | "2024-01-09T15:46:16Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-09T15:46:02Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rl-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThuyNT03/CS505_COQE_viT5_Prompting5_SPAOL | ThuyNT03 | "2024-02-29T10:45:57Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-29T09:44:21Z" | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting5_SPAOL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting5_SPAOL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
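The card omits a usage example; here is a minimal sketch based on the `t5`/text2text-generation tags, assuming the standard 🤗 Transformers seq2seq API.
```python
# Sketch: standard transformers seq2seq usage (the card documents none;
# the input string is a placeholder).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ThuyNT03/CS505_COQE_viT5_Prompting5_SPAOL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("your Vietnamese input sentence here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```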
|
John6666/ilustrealmix-v21-sdxl | John6666 | "2025-03-16T16:46:30Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"realism",
"fantasy",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-03-16T16:41:21Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- realism
- fantasy
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1273933?modelVersionId=1539656).
This model was created by [psychologicau](https://civitai.com/user/psychologicau).
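For loading, a minimal diffusers sketch follows; it is an assumption based on the `StableDiffusionXLPipeline` tag, and the prompt and GPU settings are illustrative.
```python
# Sketch: load this SDXL checkpoint with diffusers
# (assumes a CUDA GPU; the prompt is illustrative).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ilustrealmix-v21-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("a fantasy landscape, realistic lighting").images[0]
image.save("out.png")
```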
|
balter4/benny | balter4 | "2025-02-28T05:03:05Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-28T04:32:33Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: benny
---
# Benny
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `benny` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('balter4/benny', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
abdymazhit/llm-gguf | abdymazhit | "2024-06-26T23:19:41Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T23:16:04Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** abdymazhit
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaginaDai/RewardModel_Round2_lora32_20epoch | MaginaDai | "2025-03-10T07:51:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-10T07:50:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
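As the card leaves this blank, a generic loading sketch follows; it is unverified and assumes the repository holds a full transformers checkpoint. If it stores only LoRA adapter weights (as the name suggests), PEFT loading would be required instead.
```python
# Unverified sketch: generic loading via the auto classes. Assumes a full
# transformers checkpoint; if the repo stores only adapter weights,
# PEFT loading would be needed instead.
from transformers import AutoModel, AutoTokenizer

model_id = "MaginaDai/RewardModel_Round2_lora32_20epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
```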
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ayatsuri/academic-ai-detector | ayatsuri | "2024-06-08T09:55:16Z" | 74 | 2 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"dataset:NicolaiSivesind/human-vs-machine",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-29T18:04:24Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ayatsuri/academic-ai-detector
results: []
datasets:
- NicolaiSivesind/human-vs-machine
metrics:
- accuracy
- recall
- precision
- f1
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ayatsuri/academic-ai-detector
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [NicolaiSivesind/human-vs-machine](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine) dataset.
It achieves the following best results on the evaluation set:
- Train Loss: 0.0910
- Validation Loss: 0.0326
- Train Accuracy: 0.9937
- Train Recall: 0.9927
- Train Precision: 0.9947
- Train F1: 0.9937
- Validation Accuracy: 0.99
- Validation Recall: 0.986
- Validation Precision: 0.9940
- Validation F1: 0.9900
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Set | Loss | Accuracy | Recall | Precision | F1 |
|:----------:|:------:|:--------:|:------:|:---------:|:------:|
| Train | 0.0910 | 0.9937 | 0.9927 | 0.9947 | 0.9937 |
| Validation | 0.0326 | 0.99 | 0.986 | 0.9940 | 0.9900 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
## Citation
Please use the following citation:
```
@misc{ayatsuri24,
  author    = { Bagas Nuriksan },
  title     = { Academic AI Detector },
  url       = { https://huggingface.co/ayatsuri/academic-ai-detector },
  year      = 2024,
  publisher = { Hugging Face }
}
``` |
RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf | RichardErkhov | "2024-10-27T17:49:56Z" | 249 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-27T17:16:42Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q2_K.gguf) | Q2_K | 0.52GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K.gguf) | Q3_K | 0.66GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_0.gguf) | Q4_0 | 0.72GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K.gguf) | Q4_K | 0.81GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_1.gguf) | Q4_1 | 0.8GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_0.gguf) | Q5_0 | 0.87GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K.gguf) | Q5_K | 0.93GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q5_1.gguf) | Q5_1 | 0.95GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q6_K.gguf) | Q6_K | 1.09GB |
| [asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q8_0.gguf) | Q8_0 | 1.33GB |
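Any GGUF-compatible runtime can load these files. A minimal sketch with `llama-cpp-python` — the choice of the Q4_K_M file, the context size, and the prompt are illustrative assumptions, not recommendations from either repo:

```python
# Minimal sketch: fetch one quant and run it locally.
# Assumptions: `pip install llama-cpp-python huggingface_hub` has been done,
# and Q4_K_M is an acceptable quality/size trade-off for your use case.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc-gguf",
    filename="asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an assumption
out = llm("Translate the following x86 assembly to ARM assembly:\n", max_tokens=256)
print(out["choices"][0]["text"])
```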
Original model description:
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-250k-x86-O0-arm-gnueabi-gcc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
marialvsantiago/b8f5622f-973c-4459-8e58-742e286baf09 | marialvsantiago | "2025-01-25T11:26:31Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T11:17:55Z" | ---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8f5622f-973c-4459-8e58-742e286baf09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18266ce7474a0e64_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18266ce7474a0e64_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: marialvsantiago/b8f5622f-973c-4459-8e58-742e286baf09
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/18266ce7474a0e64_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 985579d2-fa31-49a5-8ebb-b187c194b537
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 985579d2-fa31-49a5-8ebb-b187c194b537
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# b8f5622f-973c-4459-8e58-742e286baf09
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
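The training run logged `nan` losses (see the results table below), so treat this adapter as experimental. For completeness, a minimal loading sketch — the 8-bit quantization mirrors the `load_in_8bit: true` setting in the config above, and the prompt format is an assumption based on the base model's usual template:

```python
# Minimal sketch: attach the LoRA adapter to its SOLAR base model.
# Assumptions: bitsandbytes is installed for 8-bit loading, and the
# "### User:/### Assistant:" format matches the base model's template.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "marialvsantiago/b8f5622f-973c-4459-8e58-742e286baf09")
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")

inputs = tokenizer("### User:\nHello!\n\n### Assistant:\n", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```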
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0039 | 5 | nan |
| 0.0 | 0.0079 | 10 | nan |
| 0.0 | 0.0118 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DigKingy/ToonYou-JP-Alpha1 | DigKingy | "2023-06-21T20:26:32Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-21T20:26:32Z" | ---
license: creativeml-openrail-m
---
|