| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 137 |
| author | string | lengths 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-01 00:42:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 405 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-01 00:42:15 |
| card | string | lengths 11 to 1.01M |
Helsinki-NLP/opus-mt-zai-es | Helsinki-NLP | "2023-08-16T12:09:06Z" | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zai",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: [zai-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zai-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.eval.txt)
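The card itself stops at the download links; as a minimal usage sketch (assumed standard MarianMT usage with the transformers library; the input sentence is a placeholder, not real Zapotec):

```python
# Hedged sketch: translate zai to Spanish with the transformers Marian classes.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zai-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Placeholder source sentence."], return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```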
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zai.es | 20.8 | 0.372 |
|
seawolf2357/hanbok | seawolf2357 | "2024-10-16T01:07:22Z" | 3,945 | 52 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-10-16T01:00:28Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: a woman wearing a traditional Korean Hanbok, a long-sleeved blouse with
    intricate embroidery and a high-waisted skirt. The blouse is a deep blue color
    with a white collar and cuffs, and the skirt is a lighter shade of blue with
    a pattern of small white flowers. The woman is standing in a graceful pose,
    her hands clasped in front of her and her head tilted slightly to the side.
    [trigger]
  output:
    url: samples/1729040425975__000001000_0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: hanbok
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# hanbok
<Gallery />
## Trigger words
You should use `hanbok` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/seawolf2357/hanbok/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
# Attach the hanbok LoRA weights on top of the base model
pipeline.load_lora_weights('seawolf2357/hanbok', weight_name='hanbok.safetensors')
# Generate an image; [trigger] stands in for the `hanbok` trigger word
image = pipeline('a woman wearing a traditional Korean Hanbok, a long-sleeved blouse with intricate embroidery and a high-waisted skirt. The blouse is a deep blue color with a white collar and cuffs, and the skirt is a lighter shade of blue with a pattern of small white flowers. The woman is standing in a graceful pose, her hands clasped in front of her and her head tilted slightly to the side. [trigger]').images[0]
# Save the result to disk
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
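As a minimal follow-on sketch (assumed diffusers LoRA API usage, not part of the original card), the adapter can also be fused into the base weights at a chosen strength:

```py
# Hedged sketch: bake the LoRA into the base weights at reduced strength, then undo it.
pipeline.fuse_lora(lora_scale=0.8)  # fuse the adapter at 80% strength
image = pipeline('a woman wearing a traditional Korean Hanbok. [trigger]').images[0]
pipeline.unfuse_lora()              # restore the original base weights
```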
|
Rixhabh/Qwen-2.5-7b-math-new | Rixhabh | "2025-03-16T21:49:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-16T08:45:21Z" | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rixhabh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
niruthiha/restaurant-ner | niruthiha | "2025-03-11T02:40:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"ner",
"restaurant-search",
"en",
"dataset:mit_restaurant_search_ner",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-03-11T00:09:01Z" | ---
datasets:
- mit_restaurant_search_ner
language: en
license: mit
model_name: restaurant-ner
tags:
- token-classification
- ner
- restaurant-search
pipeline_tag: token-classification
library_name: transformers
widget:
- example_title: "Italian Restaurant Example"
  text: "I want a 5 star Italian restaurant in Boston with outdoor seating"
- example_title: "Thai Food Example"
  text: "Looking for cheap Thai food near downtown with parking"
---
# Model Card for Restaurant NER
This model performs Named Entity Recognition (NER) on restaurant search queries, extracting structured information like ratings, locations, cuisines, and amenities from natural language queries.
## Model Details
### Model Description
This model performs Named Entity Recognition (NER) on restaurant search queries. It can identify entities such as Rating, Location, Cuisine, Amenity, and more. The model helps understand user intent in restaurant search queries by structuring the free-form text into actionable parameters.
The model was fine-tuned using DistilBERT on the MIT Restaurant Search NER dataset, following the BIO (Beginning, Inside, Outside) tagging scheme for entity detection.
- **Developed by:** niruthiha
- **Model type:** Token Classification (NER)
- **Language(s):** English
- **License:** MIT
- **Finetuned from model:** distilbert-base-uncased
### Model Sources
- **Dataset:** MIT Restaurant Search NER dataset
## Uses
### Direct Use
This model can be used directly to extract structured information from restaurant search queries, as in the following example:
```python
from transformers import pipeline

nlp = pipeline('token-classification',
               model='niruthiha/restaurant-ner',
               aggregation_strategy='simple')
result = nlp("I want a 5 star Italian restaurant in Boston with outdoor seating")
print(result)
```
Expected output would identify entities like:
- "5 star" as Rating
- "Italian" as Cuisine
- "Boston" as Location
- "outdoor seating" as Amenity
### Downstream Use
This NER model can be integrated into:
- Restaurant search applications
- Food delivery platforms
- Conversational AI assistants for restaurant recommendations
- Restaurant review analysis systems
### Out-of-Scope Use
This model is not designed for:
- General purpose entity recognition outside of the restaurant domain
- Sentiment analysis of restaurant reviews
- Understanding complex nutritional requests
- Processing non-English queries
## Bias, Risks, and Limitations
- The model is trained on the MIT Restaurant Search NER dataset and may not generalize well to significantly different restaurant query styles or dialects
- Performance may vary for cuisine types, locations, or amenities not well-represented in the training data
- The model may struggle with spelling errors or highly colloquial language
### Recommendations
- Use in conjunction with spelling correction for better user experience
- Consider retraining or fine-tuning on domain-specific data if using for specialized restaurant types
- Monitor performance across different demographic groups to ensure equitable performance
## How to Get Started with the Model
```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("niruthiha/restaurant-ner")
model = AutoModelForTokenClassification.from_pretrained("niruthiha/restaurant-ner")

# Or use pipeline for easy inference
ner = pipeline('token-classification',
               model="niruthiha/restaurant-ner",
               aggregation_strategy='simple')

# Example usage
query = "Looking for cheap Thai food near downtown with parking"
entities = ner(query)
print(entities)
```
## Training Details
### Training Data
The model was trained on the MIT Restaurant Search NER dataset, which contains annotated restaurant search queries with the following entity types:
- Rating
- Location
- Cuisine
- Amenity
- Restaurant_Name
- Hours
- Price
- And others
### Training Procedure
#### Preprocessing
1. The data was loaded in BIO format
2. Tags were converted to numeric IDs
3. DistilBERT tokenizer was used to tokenize the input
4. Special handling was implemented to align BIO tags with subword tokens (a sketch of this alignment follows below)
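A minimal sketch of this alignment, assuming word-level BIO label ids and a fast tokenizer (the helper below is illustrative, not the exact training code):

```python
# Hedged sketch: align word-level BIO label ids with DistilBERT subword tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def align_labels(words, word_label_ids):
    """Tokenize pre-split words; -100 marks positions the loss should ignore."""
    encoded = tokenizer(words, is_split_into_words=True, truncation=True)
    labels, previous_word = [], None
    for word_id in encoded.word_ids():
        if word_id is None:                # special tokens ([CLS], [SEP])
            labels.append(-100)
        elif word_id != previous_word:     # first subword keeps the word's label
            labels.append(word_label_ids[word_id])
        else:                              # later subwords are masked out
            labels.append(-100)
        previous_word = word_id
    encoded["labels"] = labels
    return encoded
```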
#### Training Hyperparameters
- **Learning rate:** 2e-5
- **Batch size:** 16
- **Training epochs:** 3
- **Weight decay:** 0.01
- **Training regime:** fp32
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on the test split of the MIT Restaurant Search NER dataset.
#### Metrics

These metrics were calculated using the seqeval library, which evaluates NER performance at the entity level rather than token level.
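For reference, a minimal seqeval call looks like this (the tag sequences are made-up examples, not dataset content):

```python
# Hedged sketch: entity-level scoring with seqeval.
from seqeval.metrics import classification_report

y_true = [["O", "B-Rating", "I-Rating", "B-Cuisine", "O", "O", "B-Location"]]
y_pred = [["O", "B-Rating", "I-Rating", "B-Cuisine", "O", "O", "B-Location"]]
print(classification_report(y_true, y_pred))
```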
## Technical Specifications
### Model Architecture
The model uses DistilBERT with a token classification head. DistilBERT is a smaller, faster version of BERT that retains much of the performance while being more efficient.
#### Software
- Hugging Face Transformers
- PyTorch
- Python 3.x
## Model Card Contact
https://niruthiha.github.io/ |
kostiantynk/a4c828ce-3335-472b-8b38-478d1ff09cbb | kostiantynk | "2025-02-01T02:26:59Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-02-01T02:25:09Z" | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4c828ce-3335-472b-8b38-478d1ff09cbb
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ae58869a76efd886_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ae58869a76efd886_train_data.json
  type:
    field_instruction: question
    field_output: solution
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/a4c828ce-3335-472b-8b38-478d1ff09cbb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ae58869a76efd886_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aba9e2f5-ee96-4954-80e4-aa974efed7b7
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aba9e2f5-ee96-4954-80e4-aa974efed7b7
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4c828ce-3335-472b-8b38-478d1ff09cbb
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1640
## Model description
More information needed
## Intended uses & limitations
More information needed
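Although the card leaves this section blank, a minimal inference sketch for a PEFT LoRA adapter of this kind (assumed usage, not confirmed by the author) would be:

```python
# Hedged sketch: load the Falcon base model, then attach this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kostiantynk/a4c828ce-3335-472b-8b38-478d1ff09cbb")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
```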
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 1.5062 |
| 2.5774 | 0.0897 | 50 | 1.2710 |
| 2.4696 | 0.1794 | 100 | 1.2222 |
| 2.5267 | 0.2691 | 150 | 1.1905 |
| 2.3147 | 0.3587 | 200 | 1.1640 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fedovtt/af599c6b-0627-4dc2-9a43-a0b4ce3b81a4 | fedovtt | "2025-01-16T11:07:16Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
] | null | "2025-01-16T10:40:53Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af599c6b-0627-4dc2-9a43-a0b4ce3b81a4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - db6511a4744a1352_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/db6511a4744a1352_train_data.json
  type:
    field_input: image_path
    field_instruction: wikimedia_file
    field_output: caption
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/af599c6b-0627-4dc2-9a43-a0b4ce3b81a4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/db6511a4744a1352_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78063d26-b865-4917-afb6-6c98e0361a9c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78063d26-b865-4917-afb6-6c98e0361a9c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# af599c6b-0627-4dc2-9a43-a0b4ce3b81a4
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 11.9342 |
| 11.9337 | 0.0001 | 5 | 11.9328 |
| 11.9249 | 0.0003 | 10 | 11.9277 |
| 11.9196 | 0.0004 | 15 | 11.9194 |
| 11.9207 | 0.0006 | 20 | 11.9119 |
| 11.9112 | 0.0007 | 25 | 11.9077 |
| 11.9081 | 0.0008 | 30 | 11.9069 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ErrorAI/bc352dd5-8ac4-42c8-bb00-005a65bbdb37 | ErrorAI | "2025-02-06T09:25:41Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"region:us"
] | null | "2025-02-06T08:16:26Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bc352dd5-8ac4-42c8-bb00-005a65bbdb37
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - c92128223de95d2d_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/c92128223de95d2d_train_data.json
  type:
    field_instruction: question
    field_output: reference_answer
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ErrorAI/bc352dd5-8ac4-42c8-bb00-005a65bbdb37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1589
micro_batch_size: 4
mlflow_experiment_name: /tmp/c92128223de95d2d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d2b90f0a-d212-4f06-97d6-1732294d1a57
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d2b90f0a-d212-4f06-97d6-1732294d1a57
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bc352dd5-8ac4-42c8-bb00-005a65bbdb37
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 711
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7881 | 0.9996 | 710 | 0.1158 |
| 0.2848 | 1.0011 | 711 | 0.1161 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Safe-RLHF-DPO-harmless-llama3-3b-GGUF | mradermacher | "2025-04-01T00:08:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-01T00:08:53Z" | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/Ren-Wei/Safe-RLHF-DPO-harmless-llama3-3b.
|
thangla01/68706ff3-7479-49ef-8969-3a0181edc209 | thangla01 | "2025-01-15T12:48:59Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T12:03:47Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 68706ff3-7479-49ef-8969-3a0181edc209
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 8a24260edb226ec7_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/8a24260edb226ec7_train_data.json
  type:
    field_input: response_with_hint
    field_instruction: question
    field_output: answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/68706ff3-7479-49ef-8969-3a0181edc209
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8a24260edb226ec7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4a96e8f-63c8-4799-b156-9b1b78cc1084
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4a96e8f-63c8-4799-b156-9b1b78cc1084
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 68706ff3-7479-49ef-8969-3a0181edc209
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3114 | 0.0240 | 200 | 0.3472 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PaulineJamin/cart-pole-v1 | PaulineJamin | "2023-07-14T14:13:25Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-14T09:45:08Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cart-pole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.40 +/- 39.85
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BenjaminOcampo/model-bert__trained-in-toxigen__seed-1 | BenjaminOcampo | "2023-06-18T21:56:19Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-18T21:55:12Z" | ---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-toxigen__seed-1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8533    0.8789    0.8659       900
           1     0.8740    0.8475    0.8606       892

    accuracy                         0.8633      1792
   macro avg     0.8636    0.8632    0.8632      1792
weighted avg     0.8636    0.8633    0.8632      1792
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8508    0.7844    0.8162       538
           1     0.7171    0.7989    0.7558       368

    accuracy                         0.7903       906
   macro avg     0.7839    0.7916    0.7860       906
weighted avg     0.7965    0.7903    0.7917       906
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
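As a stand-in, a minimal sketch under the assumption that this is a standard sequence-classification checkpoint (the label meanings are not documented in the card):

```python
# Hedged sketch: score a sentence with the fine-tuned BERT classifier.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="BenjaminOcampo/model-bert__trained-in-toxigen__seed-1")
print(classifier("An example sentence to score."))
```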
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhunglaaaaaaa/9b9bd0c5-1a3e-4700-bdce-be8dee3c489d | nhunglaaaaaaa | "2025-01-26T18:34:26Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-26T18:17:44Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b9bd0c5-1a3e-4700-bdce-be8dee3c489d
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 32d70e39aa452d95_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/32d70e39aa452d95_train_data.json
  type:
    field_input: tests
    field_instruction: entrypoint
    field_output: content
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/9b9bd0c5-1a3e-4700-bdce-be8dee3c489d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/32d70e39aa452d95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5357ef88-cf04-4e04-aea5-b74d550f029e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5357ef88-cf04-4e04-aea5-b74d550f029e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9b9bd0c5-1a3e-4700-bdce-be8dee3c489d
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.122 | 0.0107 | 200 | 1.0998 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pix2pix-zero-library/cat_sd14 | pix2pix-zero-library | "2023-02-15T20:35:21Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-02-13T23:39:28Z" | ---
license: mit
---
This repository contains the average embedding for the direction of the entity `cat` for the model `CompVis/stable-diffusion-v1-4`, stored as the file `cad_sd14.pt`. It can be used as a direction for [pix2pix-zero](https://github.com/pix2pixzero/pix2pix-zero); check out its [Hugging Face Space](#).
It was formed by averaging the CLIP text-encoder embeddings of `CompVis/stable-diffusion-v1-4` for the following sentences (a sketch of the procedure follows the list):
- A cat washes itself.
- A cat watching birds at a window.
- A cat licking its paw.
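A minimal sketch of the averaging procedure, assuming the standard pix2pix-zero recipe of mean-pooling the CLIP text-encoder hidden states over the sentence set (the exact script is not included in this repository):

```python
# Hedged sketch: average the SD v1-4 CLIP text-encoder embeddings of the "cat" sentences.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

sentences = [
    "A cat washes itself.",
    "A cat watching birds at a window.",
    "A cat licking its paw.",
]

with torch.no_grad():
    inputs = tokenizer(sentences, padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
    embeddings = text_encoder(inputs.input_ids).last_hidden_state  # (n, 77, 768)
    direction = embeddings.mean(dim=0)                             # (77, 768)

torch.save(direction, "cad_sd14.pt")  # file name as given in the card
```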
|
liuyanchen1015/Mistral-7B-fingerprinted-SFT-GGUF | liuyanchen1015 | "2024-10-15T22:25:14Z" | 28 | 0 | null | [
"gguf",
"pretrained",
"autoquant",
"text-generation",
"en",
"arxiv:2310.06825",
"base_model:cnut1648/Mistral-7B-fingerprinted-SFT",
"base_model:quantized:cnut1648/Mistral-7B-fingerprinted-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-11T03:56:10Z" | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
- autoquant
- gguf
inference:
  parameters:
    temperature: 0.7
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model:
- cnut1648/Mistral-7B-fingerprinted-SFT
---
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
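A quick version check of this kind can confirm the requirement (a minimal sketch, not from the original card):

```python
# Hedged check: confirm transformers >= 4.34.0 before loading the model.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.34.0"), \
    "Upgrade with: pip install -U transformers"
```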
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
Kunalmod/finetuned-ja-model | Kunalmod | "2024-06-03T09:29:25Z" | 122 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:tsmatz/roberta_qa_japanese",
"base_model:finetune:tsmatz/roberta_qa_japanese",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-01-21T12:59:14Z" | ---
license: mit
base_model: tsmatz/roberta_qa_japanese
tags:
- generated_from_trainer
model-index:
- name: finetuned-ja-model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ja-model
This model is a fine-tuned version of [tsmatz/roberta_qa_japanese](https://huggingface.co/tsmatz/roberta_qa_japanese) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
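Though the card gives no usage snippet, a minimal sketch assuming standard extractive QA usage (the question and context are illustrative):

```python
# Hedged sketch: Japanese extractive question answering with the fine-tuned model.
from transformers import pipeline

qa = pipeline("question-answering", model="Kunalmod/finetuned-ja-model")
result = qa(question="賞品はどこで受け取れますか。",
            context="賞品は東京本社の受付で受け取れます。")
print(result["answer"], result["score"])
```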
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
tsang326/test2405 | tsang326 | "2024-05-24T06:48:49Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vilm/vinallama-7b-chat",
"base_model:adapter:vilm/vinallama-7b-chat",
"license:llama2",
"region:us"
] | null | "2024-05-24T06:48:43Z" | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: vilm/vinallama-7b-chat
model-index:
- name: test2405
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test2405
This model is a fine-tuned version of [vilm/vinallama-7b-chat](https://huggingface.co/vilm/vinallama-7b-chat) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2 |
LoneStriker/Kyllene-34B-v1.1-GGUF | LoneStriker | "2024-02-04T21:13:48Z" | 6 | 0 | null | [
"gguf",
"merge",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-02-04T19:28:29Z" | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- merge
---
# Kyllene 34B v1.1

## Model Details
- A result of a new merge method provided by the [MergeMonster](https://github.com/Gryphe/MergeMonster/) tool with an extended RPG preset.
- models used for merge:
  - [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
  - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
  - [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
  - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B)
- The method aims to maximize the probability of certain phrases and minimize the probability of other phrases.
- The RPG preset was extended with examples of typical, nonsensical output of most models, like 'unbreakable bond', 'send shivers down her spine', etc.
- The resulting model has approximately 34 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/Kyllene-34B-v1.1/resolve/main/merge-config.yml) for details on the merge method used and RPG presets.
**Warning: This model can produce NSFW content!**
## Results
- produces SFW and NSFW content without issues, switches context seamlessly.
- 200K context length
- good at following instructions
- different from [TeeZee/Kyllene-57B-v1.0](https://huggingface.co/TeeZee/Kyllene-57B-v1.0), but also surprisingly entertaining (more tests are needed)
## Side notes
- The [MergeMonster](https://github.com/Gryphe/MergeMonster/) method works; however, the project would benefit greatly from some more love from developers.
- In its current state, MergeMonster consumes very large amounts of RAM (256GB+) or VRAM and takes a really long time to process model data; this merge took 24 hours on 1x ADA 6000.
- MergeMonster is not a golden bullet, other experiments has shown that it can also produce incredibly stupid models.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
adhityaprimandhika/mistral_categorization_unsloth_q2_v2_gguf | adhityaprimandhika | "2024-05-28T07:30:59Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T07:28:39Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** adhityaprimandhika
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
myhaaaaaaa/f80dd386-85bf-4d64-957a-78dd6974758b | myhaaaaaaa | "2025-01-29T14:24:20Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T13:25:53Z" | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f80dd386-85bf-4d64-957a-78dd6974758b
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 880c5883060a8f25_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/880c5883060a8f25_train_data.json
  type:
    field_input: paper_title
    field_instruction: invitation
    field_output: content
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/f80dd386-85bf-4d64-957a-78dd6974758b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/880c5883060a8f25_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3750ec40-0e8c-4b7f-a673-edb257f77de4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3750ec40-0e8c-4b7f-a673-edb257f77de4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f80dd386-85bf-4d64-957a-78dd6974758b
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6129 | 0.0172 | 200 | 1.7159 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Zakryah/whisper-tiny-try | Zakryah | "2025-03-02T22:14:33Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"hu",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2025-03-02T16:57:04Z" | ---
library_name: peft
language:
- hu
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Tiny Hu Test - Zakryah
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: hu
      split: test
      args: 'config: hu, split: test'
    metrics:
    - type: wer
      value: 113.13022828434707
      name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hu Test - Zakryah
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2430
- Wer: 113.1302
## Model description
More information needed
## Intended uses & limitations
More information needed
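As an assumed usage sketch (the card does not include one), the adapter can be attached to the Whisper base model with PEFT:

```python
# Hedged sketch: load whisper-tiny and attach this PEFT adapter for Hungarian ASR.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "Zakryah/whisper-tiny-try")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
```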
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.3058 | 0.3299 | 1000 | 1.2987 | 114.1613 |
| 1.3143 | 0.6598 | 2000 | 1.2633 | 112.6243 |
| 1.2969 | 0.9898 | 3000 | 1.2478 | 113.3247 |
| 1.2082 | 1.3197 | 4000 | 1.2430 | 113.1302 |
### Framework versions
- PEFT 0.14.1.dev0
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0 |
bryandts/garbage_classification | bryandts | "2023-10-17T18:42:16Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-10-17T18:26:02Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: garbage_classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9706937799043063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# garbage_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0790
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
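A minimal inference sketch (assumed standard image-classification usage; the file name is a placeholder):

```python
# Hedged sketch: classify a photo of a waste item with the fine-tuned ViT.
from transformers import pipeline

classifier = pipeline("image-classification", model="bryandts/garbage_classification")
print(classifier("example_waste_item.jpg"))
```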
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1259 | 1.0 | 1254 | 0.0790 | 0.9707 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
stevhliu/ikea_peft_model | stevhliu | "2024-03-01T20:45:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-01T20:44:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baby-dev/a9f02a43-8cbc-48a1-8b8d-61e1a2db2b20 | baby-dev | "2025-01-30T07:42:33Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"region:us"
] | null | "2025-01-30T07:39:51Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9f02a43-8cbc-48a1-8b8d-61e1a2db2b20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d1ab44ecddf6a9db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d1ab44ecddf6a9db_train_data.json
type:
field_input: turn_1_kwargs
field_instruction: turn_1_prompt
field_output: turn_2_prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: baby-dev/a9f02a43-8cbc-48a1-8b8d-61e1a2db2b20
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/d1ab44ecddf6a9db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 30c5ef23-7003-47d4-9bff-9ea113ac23ab
wandb_project: SN56-41
wandb_run: your_name
wandb_runid: 30c5ef23-7003-47d4-9bff-9ea113ac23ab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a9f02a43-8cbc-48a1-8b8d-61e1a2db2b20
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5712 | 0.0020 | 1 | 1.9752 |
| 0.4865 | 0.0489 | 25 | 0.7690 |
| 0.7447 | 0.0979 | 50 | 0.5468 |
| 0.5305 | 0.1468 | 75 | 0.4547 |
| 0.3759 | 0.1958 | 100 | 0.4291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RavenOnur/Sign-Language | RavenOnur | "2023-01-01T21:08:11Z" | 161 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-01-01T21:07:48Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Sign-Language
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Sign-Language
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### A

#### B

#### C

#### D

#### E

#### F

#### G

#### H

#### I

#### K

#### L

#### M

#### N

#### O

#### P

#### Q

#### R

#### S

#### T

#### U

#### V

#### W

#### X

#### Y
 |
John6666/vall-3d-render-mix-v44-sdxl | John6666 | "2025-01-12T10:55:38Z" | 497 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"semi-realistic",
"3D",
"mmd",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-12T10:47:43Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- semi-realistic
- 3D
- mmd
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1075225/vall-3d-render-mix?modelVersionId=1272127).
This model was created by [yiqijiu](https://civitai.com/user/yiqijiu).
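Since the repository ships Diffusers-format weights for `StableDiffusionXLPipeline`, a minimal generation sketch might look like the following (the prompt and settings are illustrative, not the author's recommendations):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/vall-3d-render-mix-v44-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# illustrative prompt only; see the Civitai page for recommended settings
image = pipe("1girl, 3d render, soft lighting", num_inference_steps=28).images[0]
image.save("sample.png")
```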
|
magichour/dnlshr | magichour | "2023-03-06T18:29:44Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-03-06T18:29:42Z" | ---
license: mit
---
### dnlshr on Stable Diffusion
This is the `dnlshr` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
zaqi-ia/summarization_fine_tune_bbc_summary | zaqi-ia | "2024-07-11T23:19:43Z" | 6 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-11T23:02:02Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: zaqi-ia/summarization_fine_tune_bbc_summary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zaqi-ia/summarization_fine_tune_bbc_summary
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4086
- Validation Loss: 0.3136
- Train Lr: 2e-05
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 1.4243 | 0.4947 | 2e-05 | 0 |
| 0.5770 | 0.3595 | 2e-05 | 1 |
| 0.4560 | 0.3294 | 2e-05 | 2 |
| 0.4086 | 0.3136 | 2e-05 | 3 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
yoshitomo-matsubara/bert-large-uncased-mrpc | yoshitomo-matsubara | "2024-04-19T02:27:05Z" | 29 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"mrpc",
"glue",
"torchdistill",
"en",
"dataset:mrpc",
"arxiv:2310.17644",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on the MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
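As a quick sanity check, the checkpoint can be used for paraphrase detection through the standard text-classification pipeline. This is a minimal sketch; the label names come from the checkpoint's `id2label` config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="yoshitomo-matsubara/bert-large-uncased-mrpc"
)

# MRPC is a sentence-pair task, so pass both sentences
result = classifier({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Record quarterly profits were reported by the company.",
})
print(result)
```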
Yoshitomo Matsubara: **"torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP"** at *EMNLP 2023 Workshop for Natural Language Processing Open Source Software (NLP-OSS)*
[[Paper](https://aclanthology.org/2023.nlposs-1.18/)] [[OpenReview](https://openreview.net/forum?id=A5Axeeu1Bo)] [[Preprint](https://arxiv.org/abs/2310.17644)]
```bibtex
@inproceedings{matsubara2023torchdistill,
title={{torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP}},
author={Matsubara, Yoshitomo},
booktitle={Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)},
publisher={Empirical Methods in Natural Language Processing},
pages={153--164},
year={2023}
}
```
|
Rostlab/prot_t5_base_mt_uniref50 | Rostlab | "2021-06-23T03:55:50Z" | 232 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"summarization",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:04Z" | ---
tags:
- summarization
widget:
- text: "predict protein ms : Met Gly Leu Pro Val Ser Trp Ala Pro Pro Ala Leu"
---
|
javimarlop/ecology-model | javimarlop | "2025-03-08T18:54:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-08T18:50:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
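While the card is still a template, the repo's tags (`llama`, `text-generation`) suggest a causal language model; a minimal, unverified sketch would be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="javimarlop/ecology-model")
print(generator("Ecosystem services are", max_new_tokens=60)[0]["generated_text"])
```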
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ashdev01/my-model | ashdev01 | "2024-08-19T13:43:53Z" | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-08-19T13:25:59Z" | ---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
model-index:
- name: my-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-model
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 122 | 0.1246 |
| No log | 2.0 | 244 | 0.0282 |
| No log | 3.0 | 366 | 0.0197 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
gianghoang/results | gianghoang | "2024-06-17T18:52:35Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:pszemraj/led-base-book-summary",
"base_model:finetune:pszemraj/led-base-book-summary",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-17T18:51:57Z" | ---
license: bsd-3-clause
base_model: pszemraj/led-base-book-summary
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [pszemraj/led-base-book-summary](https://huggingface.co/pszemraj/led-base-book-summary) on an unknown dataset.
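A minimal summarization sketch (generation lengths are illustrative, not tuned values):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="gianghoang/results")

long_document = "..."  # replace with the long input text to summarize
print(summarizer(long_document, max_length=256, min_length=64)[0]["summary_text"])
```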
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
yigagilbert/salt_language_ID | yigagilbert | "2024-05-08T09:36:10Z" | 1,289 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:generator",
"base_model:google/t5-efficient-tiny",
"base_model:finetune:google/t5-efficient-tiny",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-15T12:01:35Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/t5-efficient-tiny
datasets:
- generator
metrics:
- accuracy
model-index:
- name: salt_language_ID
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.980510752688172
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# salt_language_ID
This model is a fine-tuned version of [google/t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0127
- Accuracy: 0.9805
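No usage example is provided and the expected input format is undocumented; the sketch below assumes the model consumes raw text and emits a language code, which may not match the actual training format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yigagilbert/salt_language_ID"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# assumption: plain text in, a short language-code string out
inputs = tokenizer("Oli otya?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```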
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 20000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5069 | 0.025 | 500 | 0.1145 | 0.8337 |
| 0.0644 | 0.05 | 1000 | 0.0489 | 0.9170 |
| 0.0511 | 0.075 | 1500 | 0.0605 | 0.9056 |
| 0.0462 | 0.1 | 2000 | 0.0332 | 0.9432 |
| 0.0411 | 0.125 | 2500 | 0.0358 | 0.9385 |
| 0.0409 | 0.15 | 3000 | 0.0267 | 0.9509 |
| 0.0365 | 0.175 | 3500 | 0.0244 | 0.9563 |
| 0.0359 | 0.2 | 4000 | 0.0285 | 0.9536 |
| 0.035 | 0.225 | 4500 | 0.0355 | 0.9388 |
| 0.0321 | 0.25 | 5000 | 0.0264 | 0.9570 |
| 0.0327 | 0.275 | 5500 | 0.0278 | 0.9513 |
| 0.0313 | 0.3 | 6000 | 0.0217 | 0.9630 |
| 0.0305 | 0.325 | 6500 | 0.0255 | 0.9556 |
| 0.0285 | 0.35 | 7000 | 0.0187 | 0.9630 |
| 0.0293 | 0.375 | 7500 | 0.0225 | 0.9620 |
| 0.0264 | 0.4 | 8000 | 0.0228 | 0.9614 |
| 0.0272 | 0.425 | 8500 | 0.0195 | 0.9664 |
| 0.0268 | 0.45 | 9000 | 0.0178 | 0.9688 |
| 0.0259 | 0.475 | 9500 | 0.0164 | 0.9677 |
| 0.0256 | 0.5 | 10000 | 0.0167 | 0.9721 |
| 0.0241 | 0.525 | 10500 | 0.0182 | 0.9647 |
| 0.0235 | 0.55 | 11000 | 0.0212 | 0.9657 |
| 0.0239 | 0.575 | 11500 | 0.0145 | 0.9735 |
| 0.0239 | 0.6 | 12000 | 0.0173 | 0.9704 |
| 0.0234 | 0.625 | 12500 | 0.0152 | 0.9768 |
| 0.0229 | 0.65 | 13000 | 0.0181 | 0.9698 |
| 0.023 | 0.675 | 13500 | 0.0154 | 0.9735 |
| 0.0224 | 0.7 | 14000 | 0.0157 | 0.9708 |
| 0.0221 | 0.725 | 14500 | 0.0155 | 0.9714 |
| 0.0219 | 0.75 | 15000 | 0.0145 | 0.9755 |
| 0.0213 | 0.775 | 15500 | 0.0159 | 0.9735 |
| 0.0197 | 0.8 | 16000 | 0.0129 | 0.9751 |
| 0.0206 | 0.825 | 16500 | 0.0154 | 0.9724 |
| 0.02 | 0.85 | 17000 | 0.0140 | 0.9724 |
| 0.0209 | 0.875 | 17500 | 0.0115 | 0.9772 |
| 0.0191 | 0.9 | 18000 | 0.0129 | 0.9735 |
| 0.0194 | 0.925 | 18500 | 0.0120 | 0.9765 |
| 0.0191 | 0.95 | 19000 | 0.0133 | 0.9741 |
| 0.0183 | 0.975 | 19500 | 0.0166 | 0.9731 |
| 0.0207 | 1.0 | 20000 | 0.0127 | 0.9805 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
gyronee/bert-finetuned-ner | gyronee | "2023-04-13T12:33:18Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-04-13T00:18:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9347033476963872
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9418837675350702
- name: Accuracy
type: accuracy
value: 0.9863130629304763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0293
- Precision: 0.9347
- Recall: 0.9492
- F1: 0.9419
- Accuracy: 0.9863
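A minimal inference sketch using the token-classification pipeline (grouping entities with `aggregation_strategy` is a convenience choice, not something specified by the author):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gyronee/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```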
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0405 | 1.0 | 1756 | 0.0332 | 0.9091 | 0.9312 | 0.9200 | 0.9818 |
| 0.0152 | 2.0 | 3512 | 0.0276 | 0.9288 | 0.9478 | 0.9382 | 0.9866 |
| 0.0076 | 3.0 | 5268 | 0.0293 | 0.9347 | 0.9492 | 0.9419 | 0.9863 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
hkivancoral/hushem_5x_deit_base_rms_001_fold3 | hkivancoral | "2023-11-26T01:46:53Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-26T01:12:03Z" | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5348837209302325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1290
- Accuracy: 0.5349
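No usage instructions are provided; a minimal sketch with an image processor and model is shown below (the input file name is a placeholder, and given the ~53% evaluation accuracy, predictions should be treated with caution):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hkivancoral/hushem_5x_deit_base_rms_001_fold3"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.png")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```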
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2595 | 1.0 | 28 | 2.7858 | 0.2558 |
| 1.4501 | 2.0 | 56 | 1.7856 | 0.2558 |
| 1.3782 | 3.0 | 84 | 1.3873 | 0.3721 |
| 1.2094 | 4.0 | 112 | 1.1943 | 0.3953 |
| 1.0231 | 5.0 | 140 | 1.6338 | 0.3023 |
| 1.0191 | 6.0 | 168 | 1.7056 | 0.2558 |
| 1.0547 | 7.0 | 196 | 1.1755 | 0.4884 |
| 1.0296 | 8.0 | 224 | 1.0491 | 0.5349 |
| 1.0202 | 9.0 | 252 | 2.3436 | 0.2791 |
| 1.0203 | 10.0 | 280 | 1.0971 | 0.4419 |
| 0.9795 | 11.0 | 308 | 1.0790 | 0.5581 |
| 0.955 | 12.0 | 336 | 1.4547 | 0.6047 |
| 0.9606 | 13.0 | 364 | 0.9445 | 0.6512 |
| 0.9236 | 14.0 | 392 | 1.2878 | 0.3721 |
| 0.9401 | 15.0 | 420 | 0.9664 | 0.4651 |
| 0.8825 | 16.0 | 448 | 0.9455 | 0.6512 |
| 0.8605 | 17.0 | 476 | 0.9844 | 0.6977 |
| 0.8738 | 18.0 | 504 | 1.1677 | 0.4651 |
| 0.8294 | 19.0 | 532 | 1.0462 | 0.6744 |
| 0.689 | 20.0 | 560 | 1.1927 | 0.6047 |
| 0.7976 | 21.0 | 588 | 1.0142 | 0.7209 |
| 0.741 | 22.0 | 616 | 0.7957 | 0.7907 |
| 0.7475 | 23.0 | 644 | 0.7104 | 0.7209 |
| 0.6791 | 24.0 | 672 | 1.7534 | 0.5349 |
| 0.6457 | 25.0 | 700 | 0.9601 | 0.6512 |
| 0.7254 | 26.0 | 728 | 1.1256 | 0.6279 |
| 0.6589 | 27.0 | 756 | 1.4604 | 0.5349 |
| 0.6888 | 28.0 | 784 | 0.8802 | 0.7209 |
| 0.6341 | 29.0 | 812 | 0.9513 | 0.7442 |
| 0.5815 | 30.0 | 840 | 1.2459 | 0.6047 |
| 0.5918 | 31.0 | 868 | 1.5991 | 0.5116 |
| 0.5623 | 32.0 | 896 | 1.5099 | 0.6279 |
| 0.5389 | 33.0 | 924 | 1.3552 | 0.6744 |
| 0.5523 | 34.0 | 952 | 1.5820 | 0.6279 |
| 0.5205 | 35.0 | 980 | 2.2601 | 0.4651 |
| 0.4888 | 36.0 | 1008 | 1.7684 | 0.5349 |
| 0.4363 | 37.0 | 1036 | 2.2769 | 0.5581 |
| 0.3724 | 38.0 | 1064 | 2.5879 | 0.5581 |
| 0.3349 | 39.0 | 1092 | 2.2241 | 0.5581 |
| 0.5052 | 40.0 | 1120 | 2.8178 | 0.4651 |
| 0.3018 | 41.0 | 1148 | 2.5305 | 0.5814 |
| 0.2857 | 42.0 | 1176 | 2.9348 | 0.5581 |
| 0.1485 | 43.0 | 1204 | 3.1875 | 0.5581 |
| 0.1386 | 44.0 | 1232 | 3.2594 | 0.6279 |
| 0.1242 | 45.0 | 1260 | 3.4524 | 0.5814 |
| 0.0734 | 46.0 | 1288 | 3.6284 | 0.5814 |
| 0.0218 | 47.0 | 1316 | 4.0538 | 0.5349 |
| 0.0226 | 48.0 | 1344 | 4.1281 | 0.5349 |
| 0.01 | 49.0 | 1372 | 4.1290 | 0.5349 |
| 0.0091 | 50.0 | 1400 | 4.1290 | 0.5349 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hopkins/mbart-finetuned-eng-kor-17 | hopkins | "2023-07-02T21:41:25Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-02T21:28:09Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-17
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9891
- Bleu: 6.9472
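A minimal translation sketch, assuming the standard mBART-50 many-to-many setup inherited from the base model (language codes `en_XX` and `ko_KR`):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-kor-17"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```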
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ClarenceDan/69ee0a0f-3785-4cce-ad9b-753fd7d5ca5c | ClarenceDan | "2025-03-04T17:54:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | "2025-03-04T17:39:31Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 69ee0a0f-3785-4cce-ad9b-753fd7d5ca5c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6ca67c57e632dd69_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6ca67c57e632dd69_train_data.json
type:
field_input: operators
field_instruction: question_text
field_output: decomposition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/69ee0a0f-3785-4cce-ad9b-753fd7d5ca5c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6ca67c57e632dd69_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 53b6eb7e-3360-4b0a-bff7-daae9149e4e5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 53b6eb7e-3360-4b0a-bff7-daae9149e4e5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 69ee0a0f-3785-4cce-ad9b-753fd7d5ca5c
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0008 | 6 | nan |
| 0.0 | 0.0013 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ppbrown/realcartoon-unmerge | ppbrown | "2023-12-15T21:21:24Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-15T20:48:39Z" | ---
license: openrail
---
Like my other "unmerge" projects, this one attempts to remove some of the bad influences of the SD1.5 base.
My intent is to hold tweaks of multiple variants of the "realcartoon" model line.
I'll start with the anime one, since my tweak fixed a missing-arm problem I had with a test prompt.
Interestingly, for the "realistic" model, I technically messed up my plan for the "unmerge1" version of the file.
However, the styles it puts out are nice in their own way, so you may wish to try out both and see which you prefer.
|
FarhanAmin0068/xlm-roberta-base-hoax | FarhanAmin0068 | "2025-03-30T09:12:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-30T08:58:30Z" | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: xlm-roberta-base-hoax
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-hoax
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4996
- Precision: 0.7276
- Recall: 0.6918
- Accuracy: 0.8002
- F1: 0.7055
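A minimal classification sketch (the label names and the underlying dataset are not documented here, so interpret the output against the checkpoint's `id2label` config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="FarhanAmin0068/xlm-roberta-base-hoax")

# illustrative input; label semantics depend on the (undocumented) training data
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```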
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|
| No log | 1.0 | 192 | 0.4830 | 0.6896 | 0.6826 | 0.7738 | 0.6859 |
| No log | 2.0 | 384 | 0.4713 | 0.7313 | 0.6650 | 0.7992 | 0.6843 |
| 0.4781 | 3.0 | 576 | 0.4996 | 0.7276 | 0.6918 | 0.8002 | 0.7055 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-6.0bpw-h6-exl2 | LoneStriker | "2024-01-24T04:46:01Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T04:42:33Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
helenai/bert-large-cased-whole-word-masking-finetuned-squad-ov | helenai | "2023-04-20T17:43:07Z" | 3 | 0 | transformers | [
"transformers",
"openvino",
"bert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-04-20T17:41:50Z" | ---
language:
- en
tags:
- openvino
---
# bert-large-cased-whole-word-masking-finetuned-squad
This is the [bert-large-cased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/bert-large-cased-whole-word-masking-finetuned-squad-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing")
print(result)
```
|
smcwp/Cerebras-GPT-256M_DP_w_peft_adapterCasualConversation_1000_neft_alpha_50000_max_grad_3000 | smcwp | "2024-03-08T16:52:18Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:cerebras/Cerebras-GPT-256M",
"base_model:adapter:cerebras/Cerebras-GPT-256M",
"region:us"
] | null | "2024-03-08T16:51:49Z" | ---
library_name: peft
base_model: cerebras/Cerebras-GPT-256M
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
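In the absence of author-provided instructions, here is a minimal sketch for loading this adapter on top of its base model with 🤗 PEFT (the prompt is illustrative only):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "cerebras/Cerebras-GPT-256M"
adapter_id = "smcwp/Cerebras-GPT-256M_DP_w_peft_adapterCasualConversation_1000_neft_alpha_50000_max_grad_3000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```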
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
vdm-gilda-4/gemma-Code-Instruct-vdm-gilda-4-CarMotionData | vdm-gilda-4 | "2025-02-19T15:06:23Z" | 19 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"region:us"
] | text-generation | "2025-02-11T22:43:55Z" | ---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Gemma` model](https://keras.io/api/keras_hub/models/gemma) uploaded with the KerasHub library; it can be used with the JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
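A minimal loading sketch with KerasHub (the `hf://` preset URI requires a recent keras-hub release, and the prompt is illustrative only):
```python
import keras_hub

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://vdm-gilda-4/gemma-Code-Instruct-vdm-gilda-4-CarMotionData"
)
print(gemma_lm.generate("Describe the car's motion:", max_length=64))
```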
Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 26
* **num_query_heads:** 8
* **num_key_value_heads:** 4
* **hidden_dim:** 2304
* **intermediate_dim:** 18432
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
* **query_head_dim_normalize:** True
* **use_post_ffw_norm:** True
* **use_post_attention_norm:** True
* **final_logit_soft_cap:** 30.0
* **attention_logit_soft_cap:** 50.0
* **sliding_window_size:** 4096
* **use_sliding_window_attention:** True
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
|
JuLY-LION/Community | JuLY-LION | "2023-11-03T05:30:59Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2023-09-12T21:50:50Z" | ---
license: other
---
Makise_Kurisu by .codename0. | https://discord.com/channels/1159260121998827560/1159440184430043166/1159440184430043166
mai1ski by 1ski | https://discord.com/channels/1159260121998827560/1162185054689185852/1162185054689185852
JoeRoganV1 by laylan | (link destination deleted)
mythbustersNarrator by azumangadaio_h | https://discord.com/channels/1159260121998827560/1165786370380419173/1165786370380419173
legaleagle by azumangadaio_h | (link coming soon)
JeremyDonaldson by WolfMK | (link coming soon)
felix by arab_kustam | https://discord.com/channels/1159260121998827560/1169510961615470682/1169510961615470682
|
AdapterHub/xlm-roberta-large-mhr-wiki_pfeiffer | AdapterHub | "2024-05-05T20:49:57Z" | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"adapterhub:mhr/wiki",
"xlm-roberta",
"mhr",
"arxiv:2005.00052",
"license:apache-2.0",
"region:us"
] | null | "2024-05-05T20:49:52Z" | ---
tags:
- adapterhub:mhr/wiki
- adapter-transformers
- xlm-roberta
language:
- mhr
license: "apache-2.0"
---
# Adapter `xlm-roberta-large-mhr-wiki_pfeiffer` for xlm-roberta-large
Pfeiffer Adapter trained with Masked Language Modelling on Meadow Mari Wikipedia Articles for 50k steps with a batch size of 64.
**This adapter was created for usage with the [Adapters](https://github.com/Adapter-Hub/adapters) library.**
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("xlm-roberta-large")
adapter_name = model.load_adapter("AdapterHub/xlm-roberta-large-mhr-wiki_pfeiffer")
model.set_active_adapters(adapter_name)
```
## Architecture & Training
- Adapter architecture: pfeiffer
- Prediction head: None
- Dataset: [mhr/wiki](https://adapterhub.ml/explore/mhr/wiki/)
## Author Information
- Author name(s): Jonas Pfeiffer
- Author email: [email protected]
- Author links: [Website](https://pfeiffer.ai), [GitHub](https://github.com/jopfeiff), [Twitter](https://twitter.com/@PfeiffJo)
## Citation
```bibtex
@article{pfeiffer20madx,
title={{MAD-X}: An {A}dapter-based {F}ramework for {M}ulti-task {C}ross-lingual {T}ransfer},
author={Pfeiffer, Jonas and Vuli\'{c}, Ivan and Gurevych, Iryna and Ruder, Sebastian},
journal={arXiv preprint},
year={2020},
url={https://arxiv.org/pdf/2005.00052.pdf},
}
```
*This adapter has been auto-imported from https://github.com/Adapter-Hub/Hub/blob/master/adapters/ukp/xlm-roberta-large-mhr-wiki_pfeiffer.yaml*. |
performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF | performanceoptician | "2024-11-25T12:55:31Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"base_model:allenai/Llama-3.1-Tulu-3-8B",
"base_model:quantized:allenai/Llama-3.1-Tulu-3-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2024-11-25T12:55:11Z" | ---
license: llama3.1
language:
- en
pipeline_tag: text-generation
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
base_model: allenai/Llama-3.1-Tulu-3-8B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF
This model was converted to GGUF format from [`allenai/Llama-3.1-Tulu-3-8B`](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -c 2048
```
|
novita-ai/AnimateAnyone | novita-ai | "2024-05-30T10:39:05Z" | 0 | 4 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-30T09:35:50Z" | ---
license: apache-2.0
---
|
alpindale/gemma-2b-it | alpindale | "2024-02-21T14:02:12Z" | 14 | 3 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-21T14:01:54Z" | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant to your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
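For reference, a minimal manual construction of the same single-turn prompt would look like this:

```py
prompt = (
    "<start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```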
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
lesso/2277a3e6-cd68-4144-8348-cd50ee1a0e59 | lesso | "2025-02-09T00:14:12Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | "2025-02-07T11:45:37Z" | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2277a3e6-cd68-4144-8348-cd50ee1a0e59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 2277a3e6-cd68-4144-8348-cd50ee1a0e59
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001011
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9717 | 0.0001 | 1 | 2.0860 |
| 2.2416 | 0.0074 | 50 | 1.9187 |
| 2.0005 | 0.0149 | 100 | 1.8326 |
| 1.6595 | 0.0223 | 150 | 1.7703 |
| 1.4243 | 0.0297 | 200 | 1.7377 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LoneStriker/Velara-11B-V2-3.0bpw-h6-exl2 | LoneStriker | "2024-01-11T11:50:42Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"llama-2",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-11T11:48:44Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
- llama-2
---
<h1 style="font-size: 36px;">Velara-11b-v2</h1>
<center>
<img src="https://i.imgur.com/dZJzqtU.png" alt="Velara" width="50%" style="min-width:100px; display:block;">
</center>
## Description:
**PENDING**
A model focused on being an assistant worth talking to. Now better at reasoning!
**Uncensored**
## Main Goals:
Velara was designed to address specific issues found in other chat models:
- **Sticking to the Character Given In-World/IRL:** Velara will stick to the character in-world, meaning she can "use" addons or other things by adding them to the prompt. Keep in mind these act as suggestions and she generally makes up what they actually "do".
- **Staying in Context:** She will try to remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind, so later on in the conversation she can prompt the user to actually go to bed, increasing realism. Within the current context window, of course. The LTS extension in TextGen WebUI works well too and provides more context to the model.
- **Avoiding Annoying & Generic-Sounding Answers:** In almost all cases she should reply with something more interesting than "That sounds like a great idea!". These answers and sentence starters are annoying and terrible, and should hopefully be mostly addressed in this release.
- **STRICTLY Sticking to the Prompt/Prompt Fidelity:** More care was taken to make sure she pays attention to the pretext and overall conversation. This is mostly shown and used for the "addons" section, but it applies to everything in the prompt.
## Prompt Template:
**For optimal interaction, use this template:**
```
### Instruction:
You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1".
World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User.
Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies.
### Response:
```
# Recommended Settings:
**Defaults:**
```
min_p: 0.2
repetition_penalty: 1.13
repetition_penalty_range: 0
guidance_scale: 1.05
```
# Benchmarks:
PENDING
# Training Data:
PENDING
|
aditya1105/dog-img | aditya1105 | "2023-10-01T13:54:10Z" | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-01T13:50:17Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### DOG-IMG Dreambooth model trained by aditya1105 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VVCE-271
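A minimal generation sketch (assuming the standard Stable Diffusion pipeline named in the metadata; the prompt is illustrative):

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("aditya1105/dog-img", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of dog-img dog sitting in a park").images[0]
image.save("dog-img.png")
```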
Sample pictures of this concept:

|
shed-e/Summary | shed-e | "2022-09-02T09:11:31Z" | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-09-02T07:54:11Z" | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0318
- Rouge1: 0.1806
- Rouge2: 0.0917
- Rougel: 0.1765
- Rougelsum: 0.1766
## Model description
More information needed
## Intended uses & limitations
More information needed
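Pending fuller documentation, a minimal inference sketch (the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="shed-e/Summary")
print(summarizer("Replace this with the review text you want to summarize.")[0]["summary_text"])
```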
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 6.6121 | 1.0 | 1209 | 3.2969 | 0.1522 | 0.0635 | 0.1476 | 0.147 |
| 3.901 | 2.0 | 2418 | 3.1307 | 0.1672 | 0.0821 | 0.1604 | 0.1602 |
| 3.5788 | 3.0 | 3627 | 3.0910 | 0.1804 | 0.0922 | 0.1748 | 0.1751 |
| 3.4198 | 4.0 | 4836 | 3.0646 | 0.1717 | 0.0813 | 0.167 | 0.1664 |
| 3.321 | 5.0 | 6045 | 3.0659 | 0.1782 | 0.0877 | 0.1759 | 0.1756 |
| 3.2441 | 6.0 | 7254 | 3.0407 | 0.1785 | 0.088 | 0.1755 | 0.1751 |
| 3.2075 | 7.0 | 8463 | 3.0356 | 0.1789 | 0.09 | 0.1743 | 0.1747 |
| 3.1803 | 8.0 | 9672 | 3.0318 | 0.1806 | 0.0917 | 0.1765 | 0.1766 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ransaka/a2c-AntBulletEnv-v0 | Ransaka | "2023-02-14T05:56:44Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-14T05:55:33Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1950.91 +/- 129.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming used by huggingface_sb3):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# filename is an assumption; check the repo's file list if loading fails
checkpoint = load_from_hub(repo_id="Ransaka/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
jonatasgrosman/exp_w2v2t_pl_vp-nl_s61 | jonatasgrosman | "2022-07-10T19:59:48Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-10T19:59:24Z" | ---
language:
- pl
license: apache-2.0
tags:
- automatic-speech-recognition
- pl
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pl_vp-nl_s61
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
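A minimal transcription sketch using the HuggingSound tool mentioned above (`sample.wav` is a placeholder for a 16kHz audio file):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pl_vp-nl_s61")
transcriptions = model.transcribe(["sample.wav"])
print(transcriptions[0]["transcription"])
```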
|
outlookAi/ALzQb9nshc | outlookAi | "2024-12-22T16:56:32Z" | 113 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-22T16:23:14Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Thai Boromphiman Dress,detail dress,detail skirt
---
# Alzqb9Nshc
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Thai Boromphiman Dress,detail dress,detail skirt` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/ALzQb9nshc', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
sohomghosh/LIPI_FinSim4_ESG_task2 | sohomghosh | "2022-06-28T01:50:57Z" | 0 | 0 | null | [
"pytorch",
"license:mit",
"region:us"
] | null | "2022-06-26T13:02:54Z" | ---
license: mit
---
How to use this model?
Download the `pytorch_model.bin` file and execute the following:
```python
import pandas as pd
import torch
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MAX_LEN = 128
BATCH_SIZE = 20
text_col_name = 'sentence'
category_col = 'label_text'
#Input should be one dataframe having one column with header as 'sentence' : test_df (do reset_index() if needed)
test_df = pd.DataFrame({"sentence":['We are striving to reduce the amount of waste we produce, and to reduce water as well as paper consumption.']})
def scoring_data_prep(dataset):
out = []
target = []
mask = []
for i in range(len(dataset)):
rec = dataset[i]
out.append(rec['ids'].reshape(-1,MAX_LEN))
mask.append(rec['mask'].reshape(-1,MAX_LEN))
out_stack = torch.cat(out, dim = 0)
mask_stack = torch.cat(mask, dim =0 )
out_stack = out_stack.to(device, dtype = torch.long)
mask_stack = mask_stack.to(device, dtype = torch.long)
return out_stack, mask_stack
class Triage(Dataset):
"""
This is a subclass of torch packages Dataset class. It processes input to create ids, masks and targets required for model training.
"""
def __init__(self, dataframe, tokenizer, max_len, text_col_name):
self.len = len(dataframe)
self.data = dataframe
self.tokenizer = tokenizer
self.max_len = max_len
self.text_col_name = text_col_name
def __getitem__(self, index):
title = str(self.data[self.text_col_name][index])
title = " ".join(title.split())
inputs = self.tokenizer.encode_plus(
title,
None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=True,
return_token_type_ids=True,
truncation=True,
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
return {
"ids": torch.tensor(ids, dtype=torch.long),
"mask": torch.tensor(mask, dtype=torch.long),
}
def __len__(self):
return self.len
class BERTClass(torch.nn.Module):
def __init__(self, num_class):
super(BERTClass, self).__init__()
self.num_class = num_class
self.l1 = RobertaModel.from_pretrained("roberta-base")
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.3)
self.classifier = torch.nn.Linear(768, self.num_class)
self.history = dict()
def forward(self, input_ids, attention_mask):
output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
hidden_state = output_1[0]
pooler = hidden_state[:, 0]
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
def do_predict(model, tokenizer, test_df):
test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name)
test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0}
test_loader = DataLoader(test_set, **test_params)
out_stack, mask_stack = scoring_data_prep(dataset = test_set)
n = 0
combined_output = []
model.eval()
with torch.no_grad():
while n < test_df.shape[0]:
output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:])
n = n + BATCH_SIZE
combined_output.append(output)
combined_output = torch.cat(combined_output, dim = 0)
preds = torch.argsort(combined_output, axis = 1, descending = True)
preds = preds.to('cpu')
actual_predictions = [i[0] for i in preds.tolist()]
return actual_predictions
model_sustain = BERTClass(2)
model_sustain.to(device)
model_sustain.load_state_dict(torch.load('pytorch_model.bin', map_location=device)['model_state_dict'])
tokenizer_sus = RobertaTokenizer.from_pretrained('roberta-base')
actual_predictions_sus = do_predict(model_sustain, tokenizer_sus, test_df)
test_df['sustainability'] = ['sustainable' if i==0 else 'unsustainable' for i in actual_predictions_sus]
```
Our work can be cited as follows:
```bibtex
@inproceedings{ghosh-2022-finsim-esg,
title = "Ranking Environment, Social And Governance Related Concepts And Assessing Sustainability Aspect Of Financial Texts",
author={Ghosh, Sohom and Naskar, Sudip Kumar},
booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP@IJCAI-ECAI 2022)",
month = "July" ,
year = "2022",
address = "Vienna, Austria",
publisher = "-",
url = "https://mx.nthu.edu.tw/~chungchichen/FinNLP2022_IJCAI/14.pdf",
pages = "87--92",
}
``` |
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_non_member_shadow6 | FounderOfHuggingface | "2023-12-07T12:00:44Z" | 5 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-07T12:00:41Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
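In the absence of official instructions, a minimal loading sketch (assuming the `gpt2` base model named in the metadata and a standard PEFT LoRA checkpoint):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_non_member_shadow6")
```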
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
a-F1/Qwen-1.5B-SFT-OpenR1-LR_1e-5 | a-F1 | "2025-03-02T18:45:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-Math-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-02T04:01:48Z" | ---
base_model: Qwen/Qwen2.5-Math-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen-1.5B-SFT-OpenR1-LR_1e-5
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen-1.5B-SFT-OpenR1-LR_1e-5
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="a-F1/Qwen-1.5B-SFT-OpenR1-LR_1e-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fandou-team/huggingface/runs/99wgmeh0)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hivex-research/hivex-DBR-PPO-baseline-task-5-difficulty-9 | hivex-research | "2025-03-20T23:27:11Z" | 0 | 0 | hivex | [
"hivex",
"tensorboard",
"onnx",
"hivex-drone-based-reforestation",
"reinforcement-learning",
"multi-agent-reinforcement-learning",
"arxiv:2501.04180",
"model-index",
"region:us"
] | reinforcement-learning | "2024-08-30T08:25:10Z" | ---
library_name: hivex
original_train_name: DroneBasedReforestation_difficulty_9_task_5_run_id_1_train
tags:
- hivex
- hivex-drone-based-reforestation
- reinforcement-learning
- multi-agent-reinforcement-learning
model-index:
- name: hivex-DBR-PPO-baseline-task-5-difficulty-9
results:
- task:
type: sub-task
name: find_highest_potential_seed_drop_location
task-id: 5
difficulty-id: 9
dataset:
name: hivex-drone-based-reforestation
type: hivex-drone-based-reforestation
metrics:
- type: highest_point_on_terrain_found
value: 44.50636520385742 +/- 4.027531201853657
name: Highest Point on Terrain Found
verified: true
- type: out_of_energy_count
value: 0.6642222416400909 +/- 0.0670919881726494
name: Out of Energy Count
verified: true
- type: cumulative_reward
value: 43.649501419067384 +/- 4.040436659873539
name: Cumulative Reward
verified: true
---
This model serves as the baseline for the **Drone-Based Reforestation** environment, trained and tested on task <code>5</code> with difficulty <code>9</code> using the Proximal Policy Optimization (PPO) algorithm.<br><br>Environment: **Drone-Based Reforestation**<br>Task: <code>5</code><br>Difficulty: <code>9</code><br>Algorithm: <code>PPO</code><br>Episode Length: <code>2000</code><br>Training <code>max_steps</code>: <code>1200000</code><br>Testing <code>max_steps</code>: <code>300000</code><br><br>Train & Test [Scripts](https://github.com/hivex-research/hivex)<br>Download the [Environment](https://github.com/hivex-research/hivex-environments)
[hivex-paper]: https://arxiv.org/abs/2501.04180 |
mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF | mradermacher | "2025-03-17T19:11:46Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Arjun7m/SparseGPT-pruned-Llama-2-7b-0.3",
"base_model:quantized:Arjun7m/SparseGPT-pruned-Llama-2-7b-0.3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2025-03-17T18:38:26Z" | ---
base_model: Arjun7m/SparseGPT-pruned-Llama-2-7b-0.3
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Arjun7m/SparseGPT-pruned-Llama-2-7b-0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
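For Python users, a minimal sketch with the `llama-cpp-python` bindings is shown below (assuming they are installed; the Q4_K_M file from the table below is used as an example):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF",
    filename="SparseGPT-pruned-Llama-2-7b-0.3.Q4_K_M.gguf",
)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```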
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SparseGPT-pruned-Llama-2-7b-0.3-GGUF/resolve/main/SparseGPT-pruned-Llama-2-7b-0.3.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KingEmpire/Leuven_5 | KingEmpire | "2025-03-06T02:54:28Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-06T02:37:23Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
popV/tabula_muris_Adipose_tissue | popV | "2025-03-08T15:59:16Z" | 0 | 0 | popV | [
"popV",
"joblib",
"biology",
"genomics",
"single-cell",
"anndata_version:0.11.3",
"python_version:3.11.11",
"tissue: adipose tissue",
"license:cc-by-4.0",
"region:us"
] | null | "2025-03-08T15:59:05Z" | ---
library_name: popV
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- anndata_version:0.11.3
- python_version:3.11.11
- popV
- 'tissue: adipose tissue'
---
Popular Vote (popV) model for automated cell type annotation of single-cell RNA-seq data. We provide here pretrained models
for plug-in use in your own analysis.
Follow our [tutorial](https://github.com/YosefLab/popV/blob/main/tabula_sapiens_tutorial.ipynb) to learn how to use the model for cell type annotation.
# Model description
Ageing is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. Despite rapid advances over recent years, many of the molecular and cellular processes that underlie the progressive loss of healthy physiology are poorly understood. To gain a better insight into these processes, here we generate a single-cell transcriptomic atlas across the lifespan of Mus musculus that includes data from 23 tissues and organs. We found cell-specific changes occurring across multiple cell types and organs, as well as age-related changes in the cellular composition of different organs. Using single-cell transcriptomic data, we assessed cell-type-specific manifestations of different hallmarks of ageing—such as senescence, genomic instability and changes in the immune system. This transcriptomic atlas—which we denote Tabula Muris Senis, or ‘Mouse Ageing Cell Atlas’—provides molecular information about how the most important hallmarks of ageing are reflected in a broad range of tissues and cell types.
**Link to CELLxGENE**:
Link to the [data](https://cellxgene.cziscience.com/e/76544818-bc5b-4a0d-87d4-40dde89545cb.cxg/) in the CELLxGENE browser for interactive exploration of the data and download of the source data.
**Training Code URL**:
Not provided by uploader.
# Metrics
We provide here accuracies for each of the experts and the ensemble model. The validation set accuracies are
computed on a 10% random subset of the data that was not used for training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mesenchymal stem cell of adipose tissue | 170 | 1.00 | 0.99 | 0.99 | 1.00 | 0.00 | 1.00 | 1.00 | 0.99 | 1.00 |
| myeloid cell | 175 | 0.99 | 0.98 | 0.99 | 0.99 | 0.00 | 0.99 | 1.00 | 0.99 | 1.00 |
| endothelial cell | 157 | 1.00 | 0.99 | 1.00 | 1.00 | 0.00 | 0.99 | 1.00 | 0.99 | 1.00 |
| B cell | 90 | 0.98 | 0.97 | 0.97 | 0.93 | 0.00 | 0.99 | 0.99 | 0.98 | 0.99 |
| T cell | 62 | 0.97 | 0.97 | 0.99 | 0.86 | 0.00 | 0.99 | 0.99 | 0.99 | 1.00 |
| epithelial cell | 10 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 0.95 | 0.95 | 1.00 |
| erythroblast | 7 | 0.86 | 0.73 | 0.86 | 0.83 | 0.00 | 0.75 | 0.92 | 0.92 | 0.92 |
| lymphocyte | 7 | 0.83 | 0.00 | 1.00 | 0.31 | 0.00 | 0.93 | 0.92 | 0.83 | 1.00 |
The train accuracies are computed on the training data.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mesenchymal stem cell of adipose tissue | 1723 | 1.00 | 0.99 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 0.99 | 1.00 |
| myeloid cell | 1696 | 0.99 | 0.99 | 1.00 | 0.99 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| endothelial cell | 1318 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| B cell | 593 | 0.98 | 0.97 | 0.98 | 0.94 | 0.00 | 0.99 | 1.00 | 1.00 | 1.00 |
| T cell | 501 | 0.98 | 0.98 | 0.99 | 0.94 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| epithelial cell | 135 | 0.99 | 1.00 | 0.99 | 0.99 | 0.00 | 1.00 | 0.99 | 0.99 | 1.00 |
| erythroblast | 85 | 0.89 | 0.93 | 0.97 | 0.94 | 0.00 | 0.93 | 1.00 | 1.00 | 0.99 |
| lymphocyte | 48 | 0.72 | 0.00 | 0.90 | 0.51 | 0.00 | 1.00 | 1.00 | 0.99 | 0.95 |
# References
A single-cell transcriptomic atlas characterizes ageing tissues in the mouse, The Tabula Muris Consortium, Nicole Almanzar, Jane Antony, Ankit S. Baghel, Isaac Bakerman, Ishita Bansal, Ben A. Barres, Philip A. Beachy, Daniela Berdnik, Biter Bilen, Douglas Brownfield, Corey Cain, Charles K. F. Chan, Michelle B. Chen, Michael F. Clarke, Stephanie D. Conley, Spyros Darmanis, Aaron Demers, Kubilay Demir, Antoine de Morree, Tessa Divita, Haley du Bois, Hamid Ebadi, F. Hernán Espinoza, Matt Fish, Qiang Gan, Benson M. George, Astrid Gillich, Rafael Gòmez-Sjöberg, Foad Green, Geraldine Genetiano, Xueying Gu, Gunsagar S. Gulati, Oliver Hahn, Michael Seamus Haney, Yan Hang, Lincoln Harris, Mu He, Shayan Hosseinzadeh, Albin Huang, Kerwyn Casey Huang, Tal Iram, Taichi Isobe, Feather Ives, Robert C. Jones, Kevin S. Kao, Jim Karkanias, Guruswamy Karnam, Andreas Keller, Aaron M. Kershner, Nathalie Khoury, Seung K. Kim, Bernhard M. Kiss, William Kong, Mark A. Krasnow, Maya E. Kumar, Christin S. Kuo, Jonathan Lam, Davis P. Lee, Song E. Lee, Benoit Lehallier, Olivia Leventhal, Guang Li, Qingyun Li, Ling Liu, Annie Lo, Wan-Jin Lu, Maria F. Lugo-Fagundo, Anoop Manjunath, Andrew P. May, Ashley Maynard, Aaron McGeever, Marina McKay, M. Windy McNerney, Bryan Merrill, Ross J. Metzger, Marco Mignardi, Dullei Min, Ahmad N. Nabhan, Norma F. Neff, Katharine M. Ng, Patricia K. Nguyen, Joseph Noh, Roel Nusse, Róbert Pálovics, Rasika Patkar, Weng Chuan Peng, Lolita Penland, Angela Oliveira Pisco, Katherine Pollard, Robert Puccinelli, Zhen Qi, Stephen R. Quake, Thomas A. Rando, Eric J. Rulifson, Nicholas Schaum, Joe M. Segal, Shaheen S. Sikandar, Rahul Sinha, Rene V. Sit, Justin Sonnenburg, Daniel Staehli, Krzysztof Szade, Michelle Tan, Weilun Tan, Cristina Tato, Krissie Tellez, Laughing Bear Torrez Dulgeroff, Kyle J. Travaglini, Carolina Tropini, Margaret Tsui, Lucas Waldburger, Bruce M. Wang, Linda J. van Weele, Kenneth Weinberg, Irving L. Weissman, Michael N. Wosczyna, Sean M. Wu, Tony Wyss-Coray, Jinyi Xiang, Soso Xue, Kevin A. Yamauchi, Andrew C. Yang, Lakshmi P. Yerra, Justin Youngyunpipatkul, Brian Yu, Fabio Zanini, Macy E. Zardeneta, Alexander Zee, Chunyu Zhao, Fan Zhang, Hui Zhang, Martin Jinye Zhang, Lu Zhou, James Zou; Nature, doi: https://doi.org/10.1038/s41586-020-2496-1
|
jorgedelpozolerida/medX_v2-Q4_K_M-GGUF | jorgedelpozolerida | "2025-02-13T14:17:57Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jiviai/medX_v2",
"base_model:quantized:jiviai/medX_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-13T14:17:34Z" | ---
license: apache-2.0
base_model: jiviai/medX_v2
tags:
- llama-cpp
- gguf-my-repo
---
# jorgedelpozolerida/medX_v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`jiviai/medX_v2`](https://huggingface.co/jiviai/medX_v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jiviai/medX_v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jorgedelpozolerida/medX_v2-Q4_K_M-GGUF --hf-file medx_v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jorgedelpozolerida/medX_v2-Q4_K_M-GGUF --hf-file medx_v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jorgedelpozolerida/medX_v2-Q4_K_M-GGUF --hf-file medx_v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jorgedelpozolerida/medX_v2-Q4_K_M-GGUF --hf-file medx_v2-q4_k_m.gguf -c 2048
```
|
beingbatman/10_mae_1 | beingbatman | "2025-03-21T15:08:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2025-03-21T05:33:45Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 10_mae_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 10_mae_1
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6194
- Accuracy: 0.8
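A minimal inference sketch with the standard `transformers` VideoMAE classes; the 16-frame sampling and 224x224 resolution below are assumptions, so check this repo's config for the actual values.
```python
# Sketch: classify a short video clip with this fine-tuned VideoMAE checkpoint.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "beingbatman/10_mae_1"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 RGB frames of shape (height, width, 3); replace with real decoded frames
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```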
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1380
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6641 | 0.0341 | 47 | 0.7400 | 0.5333 |
| 0.5432 | 1.0341 | 94 | 0.7944 | 0.5333 |
| 0.7377 | 2.0341 | 141 | 0.6752 | 0.5333 |
| 0.6075 | 3.0341 | 188 | 0.8187 | 0.5333 |
| 0.6887 | 4.0341 | 235 | 0.9331 | 0.5333 |
| 0.5362 | 5.0341 | 282 | 0.8319 | 0.5333 |
| 0.3458 | 6.0341 | 329 | 0.8242 | 0.5333 |
| 0.4119 | 7.0341 | 376 | 0.6194 | 0.8 |
| 0.4349 | 8.0341 | 423 | 0.8355 | 0.6 |
| 0.1695 | 9.0341 | 470 | 2.2292 | 0.5333 |
| 0.8746 | 10.0341 | 517 | 0.8992 | 0.5667 |
| 0.2762 | 11.0341 | 564 | 0.8893 | 0.7 |
| 0.271 | 12.0341 | 611 | 1.0079 | 0.6667 |
| 0.3642 | 13.0341 | 658 | 1.3545 | 0.6 |
| 0.2596 | 14.0341 | 705 | 1.1181 | 0.6333 |
| 0.2611 | 15.0341 | 752 | 1.8209 | 0.6 |
| 0.3916 | 16.0341 | 799 | 1.1992 | 0.7333 |
| 0.4295 | 17.0341 | 846 | 1.3413 | 0.7 |
| 0.0551 | 18.0341 | 893 | 1.1336 | 0.7667 |
| 0.1731 | 19.0341 | 940 | 1.5077 | 0.7333 |
| 0.2754 | 20.0341 | 987 | 1.5318 | 0.7 |
| 0.3532 | 21.0341 | 1034 | 1.5747 | 0.7 |
| 0.3177 | 22.0341 | 1081 | 1.6237 | 0.7333 |
| 0.5359 | 23.0341 | 1128 | 1.5619 | 0.6333 |
| 0.0032 | 24.0341 | 1175 | 1.3669 | 0.7 |
| 0.2179 | 25.0341 | 1222 | 1.7486 | 0.7 |
| 0.0681 | 26.0341 | 1269 | 1.7198 | 0.6667 |
| 0.029 | 27.0341 | 1316 | 1.7791 | 0.6667 |
| 0.0041 | 28.0341 | 1363 | 1.7034 | 0.6667 |
| 0.3068 | 29.0123 | 1380 | 1.7089 | 0.6667 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
violetxi/sft_tir_rl_prep_Llama_lr0.0001_bs32_wd0.0_wp0.3_checkpoint-epoch0 | violetxi | "2025-02-22T05:45:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-22T05:42:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dreamuno/distilbert-base-uncased-finetuned-imdb | Dreamuno | "2024-05-31T01:52:11Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-30T04:13:24Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
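A quick fill-mask check using the standard `transformers` pipeline (the example sentence is illustrative):
```python
# Sketch: query the fine-tuned masked-language model.
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="Dreamuno/distilbert-base-uncased-finetuned-imdb")
for pred in mask_filler("This movie was a great [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```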
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.527 | 3.0 | 471 | 2.4823 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
automerger/Experiment28Experiment29-7B | automerger | "2024-03-05T17:06:54Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:yam-peleg/Experiment29-7B",
"base_model:finetune:yam-peleg/Experiment29-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-05T15:42:36Z" | ---
license: apache-2.0
base_model:
- yam-peleg/Experiment29-7B
tags:
- merge
- mergekit
- lazymergekit
---
# Experiment28Experiment29-7B
Experiment28Experiment29-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following models:
* [yam-peleg/Experiment29-7B](https://huggingface.co/yam-peleg/Experiment29-7B)
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment28-7B
# No parameters necessary for base model
- model: yam-peleg/Experiment29-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: yam-peleg/Experiment28-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment28Experiment29-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s55555_v3 | KingKazma | "2023-07-16T23:41:02Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-16T23:41:01Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
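A minimal loading sketch: the GPT-2 base model is inferred from the repo name and is an assumption, so swap in the base the adapter was actually trained on.
```python
# Sketch: attach this LoRA adapter to a GPT-2 base with PEFT (base is assumed).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s55555_v3")

inputs = tokenizer("Summarize: The quick brown fox ...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0]))
```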
|
eicu/sks-arzu-1400 | eicu | "2023-05-16T09:44:50Z" | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-08T23:24:47Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: sks
---
### sks-arzu-1400 Dreambooth model trained by eicu with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
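A minimal local sketch with `diffusers` (the prompt wording is illustrative; the learned token is `sks`):
```python
# Sketch: load this DreamBooth checkpoint and sample one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "eicu/sks-arzu-1400", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo of sks").images[0]  # illustrative prompt
image.save("sks.png")
```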
Sample pictures of:
sks (use that in your prompt)

|
Minthy/ToriiGate-v0.4-2B-exl2-8bpw | Minthy | "2025-01-19T00:10:26Z" | 27 | 0 | null | [
"safetensors",
"qwen2_vl",
"multimodal",
"exl2",
"en",
"base_model:Minthy/ToriiGate-v0.4-2B",
"base_model:quantized:Minthy/ToriiGate-v0.4-2B",
"license:apache-2.0",
"8-bit",
"region:us"
] | null | "2025-01-18T22:30:39Z" | ---
license: apache-2.0
language:
- en
base_model:
- Minthy/ToriiGate-v0.4-2B
tags:
- multimodal
- exl2
---
8bpw exl2 quant for [ToriiGate-v0.4-2B](https://huggingface.co/Minthy/ToriiGate-v0.4-2B) |
shantanudave/shantanuimagessept10 | shantanudave | "2023-09-16T06:23:42Z" | 1 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-09-16T06:18:49Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sdaveshantanu
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
The text encoder was not trained.
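A hedged usage sketch: AutoTrain DreamBooth runs for SDXL typically export LoRA weights, so the snippet below loads the SDXL base and attaches this repo as a LoRA. If the repo instead contains a full pipeline, `DiffusionPipeline.from_pretrained("shantanudave/shantanuimagessept10")` should work directly.
```python
# Sketch under the assumption that this repo ships LoRA weights (typical for
# AutoTrain DreamBooth); verify the repo contents before relying on this.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("shantanudave/shantanuimagessept10")
image = pipe("photo of a sdaveshantanu").images[0]  # documented instance prompt
image.save("out.png")
```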
|
Shresthadev403/food-recipe-generation | Shresthadev403 | "2024-02-03T21:14:28Z" | 26 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-12T09:59:25Z" | ---
tags:
- generated_from_trainer
model-index:
- name: food-recipe-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# food-recipe-generation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8755
- eval_runtime: 2011.2204
- eval_samples_per_second: 110.935
- eval_steps_per_second: 1.734
- epoch: 27.89
- step: 1750000
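A quick generation sketch (the prompt format is an assumption, since the training data and recipe template are not documented here):
```python
# Sketch: sample a recipe continuation from the checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Shresthadev403/food-recipe-generation")
out = generator("Recipe for garlic butter pasta:", max_new_tokens=120, do_sample=True)
print(out[0]["generated_text"])
```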
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
lesso18/0d69e061-d9f9-4474-be1a-9ab271549b53 | lesso18 | "2025-01-31T20:04:03Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T19:58:36Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0d69e061-d9f9-4474-be1a-9ab271549b53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d78ccad961acde9f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d78ccad961acde9f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso18/0d69e061-d9f9-4474-be1a-9ab271549b53
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/d78ccad961acde9f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3b76012e-044d-4fbf-b37f-8fc34e60cc1e
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 3b76012e-044d-4fbf-b37f-8fc34e60cc1e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0d69e061-d9f9-4474-be1a-9ab271549b53
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3965
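A minimal inference sketch that attaches this LoRA adapter to its base model (names taken from the config above):
```python
# Sketch: load the Qwen2.5-7B base and apply this adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
model = PeftModel.from_pretrained(base, "lesso18/0d69e061-d9f9-4474-be1a-9ab271549b53")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```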
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.96 | 0.3576 | 200 | 1.3965 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
filipesantoscv11/30be4f6c-4985-40ed-be4f-ae5dbeee5431 | filipesantoscv11 | "2025-01-15T12:55:27Z" | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-15T12:47:50Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30be4f6c-4985-40ed-be4f-ae5dbeee5431
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f580d7de8a316926_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f580d7de8a316926_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: filipesantoscv11/30be4f6c-4985-40ed-be4f-ae5dbeee5431
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f580d7de8a316926_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cad08d50-283a-4923-8652-b01a3766bb86
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cad08d50-283a-4923-8652-b01a3766bb86
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 30be4f6c-4985-40ed-be4f-ae5dbeee5431
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0009 | 5 | nan |
| 0.0 | 0.0018 | 10 | nan |
| 0.0 | 0.0026 | 15 | nan |
| 0.0 | 0.0035 | 20 | nan |
| 0.0 | 0.0044 | 25 | nan |
| 0.0 | 0.0053 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
buDujS/bert-large-uncased-merged | buDujS | "2025-02-15T23:06:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"phishing",
"BERT",
"en",
"dataset:ealvaradob/phishing-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-15T20:14:55Z" | ---
datasets:
- ealvaradob/phishing-dataset
tags:
- generated_from_trainer
- phishing
- BERT
language:
- en
license: apache-2.0
library_name: transformers
--- |
devagonal/mt5-bleu-durga-q1-clean | devagonal | "2024-10-31T02:40:07Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-10-31T02:37:04Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-bleu-durga-q1-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-bleu-durga-q1-clean
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7819
- Bleu: 0.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.8442 | 1.0 | 3 | 11.1246 | 0.0 |
| 13.0661 | 2.0 | 6 | 9.3553 | 0.0 |
| 11.7048 | 3.0 | 9 | 8.0317 | 0.0 |
| 8.87 | 4.0 | 12 | 7.1382 | 0.0 |
| 11.0893 | 5.0 | 15 | 6.7905 | 0.0 |
| 9.8787 | 6.0 | 18 | 6.5255 | 0.0 |
| 9.8189 | 7.0 | 21 | 6.7007 | 0.0 |
| 8.2022 | 8.0 | 24 | 6.2109 | 0.0 |
| 8.5899 | 9.0 | 27 | 5.9520 | 0.0 |
| 7.5305 | 10.0 | 30 | 5.5748 | 0.0 |
| 7.0381 | 11.0 | 33 | 5.2219 | 0.0054 |
| 6.675 | 12.0 | 36 | 4.8006 | 0.0046 |
| 7.4134 | 13.0 | 39 | 4.3795 | 0.0051 |
| 5.8722 | 14.0 | 42 | 3.9322 | 0.0099 |
| 4.5875 | 15.0 | 45 | 3.5017 | 0.0079 |
| 5.3675 | 16.0 | 48 | 3.1927 | 0.0 |
| 4.2999 | 17.0 | 51 | 2.8956 | 0.0110 |
| 4.3349 | 18.0 | 54 | 2.7138 | 0.0088 |
| 3.9688 | 19.0 | 57 | 2.5350 | 0.0 |
| 4.2931 | 20.0 | 60 | 2.4138 | 0.0 |
| 3.8427 | 21.0 | 63 | 2.3127 | 0.0 |
| 3.2991 | 22.0 | 66 | 2.2054 | 0.0 |
| 3.1351 | 23.0 | 69 | 2.1069 | 0.0 |
| 3.023 | 24.0 | 72 | 2.0208 | 0.0 |
| 3.4366 | 25.0 | 75 | 1.9500 | 0.0272 |
| 2.7941 | 26.0 | 78 | 1.9068 | 0.0370 |
| 2.9454 | 27.0 | 81 | 1.8419 | 0.0365 |
| 2.6117 | 28.0 | 84 | 1.8775 | 0.0367 |
| 2.6785 | 29.0 | 87 | 1.7772 | 0.0361 |
| 2.7523 | 30.0 | 90 | 1.7819 | 0.0359 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
raisinghanijayant/my-finetuned-her2-check-2-V1.0-Q4_0-GGUF | raisinghanijayant | "2025-03-04T06:11:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"llama-cpp",
"gguf-my-repo",
"base_model:raisinghanijayant/my-finetuned-her2-check-2",
"base_model:quantized:raisinghanijayant/my-finetuned-her2-check-2",
"endpoints_compatible",
"region:us"
] | null | "2025-03-04T06:04:21Z" | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: raisinghanijayant/my-finetuned-her2-check-2
---
# raisinghanijayant/my-finetuned-her2-check-2-Q4_0-GGUF
This model was converted to GGUF format from [`raisinghanijayant/my-finetuned-her2-check-2`](https://huggingface.co/raisinghanijayant/my-finetuned-her2-check-2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/raisinghanijayant/my-finetuned-her2-check-2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo raisinghanijayant/my-finetuned-her2-check-2-Q4_0-GGUF --hf-file my-finetuned-her2-check-2-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo raisinghanijayant/my-finetuned-her2-check-2-Q4_0-GGUF --hf-file my-finetuned-her2-check-2-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo raisinghanijayant/my-finetuned-her2-check-2-Q4_0-GGUF --hf-file my-finetuned-her2-check-2-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo raisinghanijayant/my-finetuned-her2-check-2-Q4_0-GGUF --hf-file my-finetuned-her2-check-2-q4_0.gguf -c 2048
```
|
ruchirsahni/whisper-tiny-dharwad-split | ruchirsahni | "2024-11-28T13:24:02Z" | 86 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-27T11:55:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
recogna-nlp/ptt5-base-summ-xlsum | recogna-nlp | "2024-01-02T19:49:45Z" | 4,498 | 14 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"pt",
"pt-br",
"summarization",
"abstractive summarization",
"dataset:csebuetnlp/xlsum",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2022-08-29T20:18:03Z" | ---
language: pt
license: mit
tags:
- t5
- pytorch
- pt
- pt-br
- summarization
- abstractive summarization
datasets:
- csebuetnlp/xlsum
inference:
parameters:
min_length: 32
max_length: 256
top_k: 5
widget:
- text: 'O homem, Wilmer Antonio Marin, conhecido como Hugo, seria um alto comandante das Forças Armadas Revolucionárias da Colômbia (Farc), o maior grupo rebelde do país. Ele é acusado de ter perpetrado um ataque num clube noturno em fevereiro que matou 35 pessoas e feriu 160. Hugo também estaria envolvido no assassinato do empresário japonês Chikao Muramatsu que foi encontrado morto a tiros em novembro, quase três anos depois de ter sido seqüestrado. Golpe O resgate de US$ 19 milhões (R$ 55 milhões) tinha sido pedido para a libertação de Muramatsu. As autoridades colombianas acreditam que a detenção de Hugo representa um grande golpe na estrutura da Farc em Bogotá. Wilmer Antonio Marin é acusado de administrar uma rede de seqüestros que teria, como alvo, empresários ricos e estrangeiros. Ele seria reponsável por seqüestrá-los no meio da rua e levá-los para as montanhas onde a guerrilha tem suas bases.'
example_title: "Notícia 1"
- text: 'Terminou a rebelião de presos no Centro de Custódia de Presos de Justiça (CCPJ), em São Luís, no começo da tarde desta quarta-feira (17). Os presos entregaram as armas e a polícia faz uma revista dentro da unidade. O motim começou durante a festa do Dia das Crianças, realizada na terça-feira (16). As 16 crianças e 14 adultos foram libertados. Segundo informações da polícia, o líder da rebelião foi transferido para o Presídio de Pedrinhas, na capital maranhense. Os presos receberam garantias, por parte do diretor da unidade, de que não haveria represálias e novas transferências. Os presos tentaram fugir durante a festa, mas o plano foi descoberto. No começo da rebelião quatro pessoas ficaram feridas, entre elas uma auxiliar de enfermagem e um agente de polícia que trabalham no presídio. A unidade ficou sem luz e água e as negociações para a libertação dos reféns foi retomada na manhã desta quarta-feira. Segundo informações da polícia, os presos temiam uma transferência em massa depois de terem iniciado uma outra rebelião durante a greve de policiais no estado, na semana passada. A CCPJ tem capacidade para cerca de 80 presos, mas abriga 203 homens.'
example_title: "Notícia 2"
---
# Portuguese T5 for Abstractive Summarization (PTT5 Summ)
## Introduction
PTT5 Summ is a [PTT5](https://github.com/unicamp-dl/PTT5) model fine-tuned to perform abstractive summarization of Brazilian Portuguese texts. This model was fine-tuned on the datasets: [RecognaSumm](https://huggingface.co/datasets/recogna-nlp/recognasumm), [WikiLingua](https://github.com/esdurmus/Wikilingua), [XL-Sum](https://github.com/csebuetnlp/xl-sum), [TeMário](http://www.nilc.icmc.usp.br/nilc/download/NILCTR0706-MazieroEtAl(2).pdf) and [CSTNews](http://nilc.icmc.usp.br/CSTNews/login/?next=/CSTNews/).
For further information, please go to [PTT5 Summ repository](https://github.com/pedropaiola/ptt5-summ).
## Available models
| Model | Dataset used in fine-tuning|
| :-: | :-: |
| [recogna-nlp/ptt5-base-summ](https://huggingface.co/recogna-nlp/ptt5-base-summ) | [RecognaSumm](https://huggingface.co/datasets/recogna-nlp/recognasumm) |
| [recogna-nlp/ptt5-base-summ-wikilingua](https://huggingface.co/recogna-nlp/ptt5-base-summ-wikilingua) | WikiLingua |
| [recogna-nlp/ptt5-base-summ-xlsum](https://huggingface.co/recogna-nlp/ptt5-base-summ-xlsum) | XL-Sum |
| [recogna-nlp/ptt5-base-summ-temario](https://huggingface.co/recogna-nlp/ptt5-base-summ-temario) | 1st phase: WikiLingua. 2nd phase: TeMario |
| [recogna-nlp/ptt5-base-summ-cstnews](https://huggingface.co/recogna-nlp/ptt5-base-summ-cstnews) | 1st phase: WikiLingua. 2nd phase: CSTNews|
## Usage example
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch model
from transformers import T5Model, T5ForConditionalGeneration
token_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
model_name = 'phpaiola/ptt5-base-summ-xlsum'
tokenizer = T5Tokenizer.from_pretrained(token_name)
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
text = '''
“A tendência de queda da taxa de juros no Brasil é real, é visível”, disse Meirelles, que participou na capital americana de uma série de reuniões e encontros com banqueiros e investidores que aconteceram paralelamente às reuniões do Fundo Monetário Internacional (FMI) e do Banco Mundial (Bird) no fim de semana.
Para o presidente do BC, a atual política econômica do governo e a manutenção da taxa de inflação dentro da meta são fatores que garantem queda na taxa de juros a longo prazo.
“Mas é importante que nós não olhemos para isso apenas no curto prazo. Temos que olhar no médio e longo prazos”, disse Meirelles.
Para ele, o trabalho que o Banco Central tem feito para conter a inflação dentro da meta vai gerar queda gradual da taxa de juros.
BC do ano
Neste domingo, Meirelles participou da cerimônia de entrega do prêmio “Banco Central do ano”, oferecido pela revista The Banker à instituição que preside.
“Este é um sinal importante de reconhecimento do nosso trabalho, de que o Brasil está indo na direção correta”, disse ele.
Segundo Meirelles, o Banco Central do Brasil está sendo percebido como uma instituição comprometida com a meta de inflação.
“Isso tem um ganho importante, na medida em que os agentes formadores de preços começam a apostar que a inflação vai estar na meta, que isso é levado a sério no Brasil”, completou.
O presidente do Banco Central disse ainda que a crise política brasileira não foi um assunto de interesse prioritário dos investidores que encontrou no fim de semana.
'''
inputs = tokenizer.encode(text, max_length=512, truncation=True, return_tensors='pt')
summary_ids = model_pt.generate(inputs, max_length=256, min_length=32, num_beams=5, no_repeat_ngram_size=3, early_stopping=True)
summary = tokenizer.decode(summary_ids[0])
print(summary)
#<pad> O presidente do Banco Central, Henrique Meirelles, disse neste domingo, em Washington, que a taxa de juros no Brasil é real, mas que o Brasil está indo na direção correta.</s>
```
# Citation
```bibtex
@inproceedings{ptt5summ_bracis,
  author    = "Paiola, Pedro H. and de Rosa, Gustavo H. and Papa, Jo{\~a}o P.",
  editor    = "Xavier-Junior, Jo{\~a}o Carlos and Rios, Ricardo Ara{\'u}jo",
  title     = "Deep Learning-Based Abstractive Summarization for Brazilian Portuguese Texts",
  booktitle = "BRACIS 2022: Intelligent Systems",
  year      = "2022",
  publisher = "Springer International Publishing",
  address   = "Cham",
  pages     = "479--493",
  isbn      = "978-3-031-21689-3"
}
```
|
DUAL-GPO/zephyr-7b-gpo-v0-i1 | DUAL-GPO | "2024-05-03T15:12:35Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-05-02T14:08:02Z" | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-gpo-v0-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v0-i1
This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-update3-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-update3-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1128
- Rewards/chosen: -0.3200
- Rewards/rejected: -0.3706
- Rewards/accuracies: 0.4955
- Rewards/margins: 0.0506
- Logps/rejected: -621.5818
- Logps/chosen: -585.8446
- Logits/rejected: -1.9142
- Logits/chosen: -2.0965
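A minimal loading sketch (the base model follows the card metadata; merging is optional):
```python
# Sketch: attach this DPO-trained LoRA adapter to its Mistral-7B base.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "DUAL-GPO/zephyr-7b-gpo-v0-i1")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```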
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.3416 | 0.02 | 100 | 0.0447 | -0.0994 | -0.1161 | 0.5883 | 0.0167 | -367.1221 | -365.3260 | -1.7202 | -1.8827 |
| 0.2571 | 0.05 | 200 | 0.0858 | -0.1849 | -0.2159 | 0.4790 | 0.0310 | -466.8627 | -450.7509 | -1.8599 | -2.0364 |
| 0.2771 | 0.07 | 300 | 0.0910 | -0.2419 | -0.2769 | 0.4775 | 0.0350 | -527.8735 | -507.7906 | -1.9087 | -2.0909 |
| 0.2561 | 0.1 | 400 | 0.1127 | -0.4661 | -0.5086 | 0.4895 | 0.0425 | -759.5652 | -731.9658 | -1.9571 | -2.1511 |
| 0.2604 | 0.12 | 500 | 0.0826 | -0.3221 | -0.3613 | 0.4835 | 0.0393 | -612.2919 | -587.9281 | -1.8643 | -2.0449 |
| 0.2778 | 0.14 | 600 | 0.1033 | -0.2940 | -0.3303 | 0.4760 | 0.0363 | -581.3212 | -559.9218 | -1.8588 | -2.0387 |
| 0.2631 | 0.17 | 700 | 0.1084 | -0.3587 | -0.4024 | 0.4865 | 0.0437 | -653.3798 | -624.5897 | -1.8458 | -2.0252 |
| 0.2264 | 0.19 | 800 | 0.1158 | -0.2355 | -0.2734 | 0.4731 | 0.0378 | -524.3303 | -501.3899 | -1.8726 | -2.0501 |
| 0.2593 | 0.22 | 900 | 0.1048 | -0.2730 | -0.3214 | 0.4865 | 0.0485 | -572.4186 | -538.8648 | -1.7883 | -1.9593 |
| 0.2248 | 0.24 | 1000 | 0.1122 | -0.2753 | -0.3216 | 0.4760 | 0.0463 | -572.5806 | -541.1548 | -1.8308 | -2.0088 |
| 0.2345 | 0.26 | 1100 | 0.1249 | -0.2594 | -0.2977 | 0.4581 | 0.0382 | -548.6310 | -525.3046 | -1.8628 | -2.0406 |
| 0.2 | 0.29 | 1200 | 0.1212 | -0.3796 | -0.4250 | 0.4925 | 0.0454 | -675.9450 | -645.4562 | -1.8382 | -2.0177 |
| 0.2246 | 0.31 | 1300 | 0.1102 | -0.2548 | -0.3030 | 0.4850 | 0.0482 | -553.9783 | -520.6531 | -1.9584 | -2.1449 |
| 0.2481 | 0.34 | 1400 | 0.1082 | -0.2988 | -0.3545 | 0.4955 | 0.0557 | -605.4994 | -564.6545 | -1.8877 | -2.0708 |
| 0.232 | 0.36 | 1500 | 0.1053 | -0.2421 | -0.2907 | 0.4910 | 0.0486 | -541.7161 | -508.0170 | -1.9404 | -2.1256 |
| 0.2351 | 0.38 | 1600 | 0.1098 | -0.3383 | -0.3864 | 0.4775 | 0.0481 | -637.3510 | -604.1564 | -1.8506 | -2.0290 |
| 0.2622 | 0.41 | 1700 | 0.1196 | -0.2614 | -0.3121 | 0.4820 | 0.0507 | -563.0452 | -527.2568 | -1.9197 | -2.1016 |
| 0.2043 | 0.43 | 1800 | 0.1257 | -0.2798 | -0.3252 | 0.4820 | 0.0454 | -576.1965 | -545.7018 | -1.9177 | -2.0980 |
| 0.2205 | 0.46 | 1900 | 0.1154 | -0.4037 | -0.4629 | 0.4850 | 0.0592 | -713.9170 | -669.5957 | -1.8198 | -1.9972 |
| 0.2156 | 0.48 | 2000 | 0.1103 | -0.2727 | -0.3161 | 0.4865 | 0.0434 | -567.0794 | -538.5911 | -1.9234 | -2.1044 |
| 0.2308 | 0.5 | 2100 | 0.1163 | -0.4322 | -0.4852 | 0.4925 | 0.0531 | -736.1898 | -698.0287 | -1.8013 | -1.9761 |
| 0.2204 | 0.53 | 2200 | 0.1083 | -0.3224 | -0.3712 | 0.4940 | 0.0488 | -622.1750 | -588.3229 | -1.8487 | -2.0260 |
| 0.2303 | 0.55 | 2300 | 0.1192 | -0.3117 | -0.3667 | 0.4940 | 0.0551 | -617.7075 | -577.5367 | -1.8679 | -2.0473 |
| 0.231 | 0.58 | 2400 | 0.1068 | -0.3476 | -0.4008 | 0.5 | 0.0532 | -651.7600 | -613.4935 | -1.8167 | -1.9926 |
| 0.2252 | 0.6 | 2500 | 0.1240 | -0.3568 | -0.4154 | 0.4940 | 0.0586 | -666.3873 | -622.7224 | -1.9124 | -2.0972 |
| 0.2445 | 0.62 | 2600 | 0.1240 | -0.3426 | -0.4003 | 0.4805 | 0.0576 | -651.2365 | -608.5200 | -1.9230 | -2.1073 |
| 0.2212 | 0.65 | 2700 | 0.1103 | -0.2894 | -0.3362 | 0.4925 | 0.0468 | -587.1506 | -555.2968 | -1.9049 | -2.0860 |
| 0.2301 | 0.67 | 2800 | 0.1073 | -0.2754 | -0.3278 | 0.5105 | 0.0524 | -578.7745 | -541.2313 | -1.9024 | -2.0838 |
| 0.2099 | 0.7 | 2900 | 0.1191 | -0.3108 | -0.3657 | 0.5015 | 0.0549 | -616.7156 | -576.6858 | -1.9182 | -2.1014 |
| 0.2072 | 0.72 | 3000 | 0.1120 | -0.3062 | -0.3563 | 0.4910 | 0.0500 | -607.2319 | -572.1099 | -1.9258 | -2.1090 |
| 0.2186 | 0.74 | 3100 | 0.1155 | -0.2960 | -0.3474 | 0.4985 | 0.0514 | -598.4005 | -561.9234 | -1.9031 | -2.0849 |
| 0.2743 | 0.77 | 3200 | 0.1121 | -0.2815 | -0.3314 | 0.4955 | 0.0499 | -582.3980 | -547.4086 | -1.9332 | -2.1170 |
| 0.1989 | 0.79 | 3300 | 0.1116 | -0.3235 | -0.3744 | 0.4850 | 0.0509 | -625.3889 | -589.4213 | -1.8977 | -2.0789 |
| 0.2258 | 0.82 | 3400 | 0.1093 | -0.3091 | -0.3603 | 0.4970 | 0.0512 | -611.2418 | -574.9766 | -1.9164 | -2.0989 |
| 0.2524 | 0.84 | 3500 | 0.1142 | -0.3383 | -0.3897 | 0.4910 | 0.0514 | -640.6893 | -604.2028 | -1.9130 | -2.0956 |
| 0.2202 | 0.86 | 3600 | 0.1173 | -0.3412 | -0.3925 | 0.4835 | 0.0513 | -643.4937 | -607.1244 | -1.9146 | -2.0973 |
| 0.2365 | 0.89 | 3700 | 0.1178 | -0.3273 | -0.3787 | 0.4850 | 0.0514 | -629.6786 | -593.2114 | -1.9279 | -2.1117 |
| 0.1894 | 0.91 | 3800 | 0.1152 | -0.3184 | -0.3694 | 0.4925 | 0.0509 | -620.3304 | -584.3237 | -1.9252 | -2.1088 |
| 0.2372 | 0.94 | 3900 | 0.1130 | -0.3155 | -0.3658 | 0.4940 | 0.0503 | -616.7926 | -581.3542 | -1.9194 | -2.1021 |
| 0.2029 | 0.96 | 4000 | 0.1133 | -0.3208 | -0.3715 | 0.4925 | 0.0507 | -622.4911 | -586.6887 | -1.9141 | -2.0964 |
| 0.2438 | 0.98 | 4100 | 0.1129 | -0.3199 | -0.3707 | 0.4940 | 0.0508 | -621.6636 | -585.7551 | -1.9140 | -2.0965 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
nat-hunt/888fe023-1e3d-4903-b30a-4d0b79095b0c | nat-hunt | "2025-01-23T14:25:01Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T14:06:31Z" | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 888fe023-1e3d-4903-b30a-4d0b79095b0c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6db2628e15435342_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6db2628e15435342_train_data.json
type:
field_input: keyword
field_instruction: title
field_output: abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/888fe023-1e3d-4903-b30a-4d0b79095b0c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6db2628e15435342_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41fd96f3-966e-4598-a323-97e7c24ebb3b
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 41fd96f3-966e-4598-a323-97e7c24ebb3b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 888fe023-1e3d-4903-b30a-4d0b79095b0c
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
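A minimal sketch for loading this LoRA adapter on the base model (standard `peft`/`transformers` API; untested, and note the `nan` evaluation loss above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "princeton-nlp/Sheared-LLaMA-1.3B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Apply the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "nat-hunt/888fe023-1e3d-4903-b30a-4d0b79095b0c")

prompt = "Give a one-sentence abstract for a paper titled 'Sparse Attention'."
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```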
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0003 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Marialab/finetuned-whisper-small-1000-step | Marialab | "2024-12-15T12:24:38Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"dataset:darija-c",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-12-14T15:52:17Z" | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- darija-c
metrics:
- bleu
model-index:
- name: Finetuned Whisper small for darija speech translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned Whisper small for darija speech translation
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Darija-C dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Bleu: 0.7440
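A minimal inference sketch with the `transformers` ASR pipeline (the audio file name is a placeholder; the translation direction is whatever the checkpoint's generation config encodes):
```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="Marialab/finetuned-whisper-small-1000-step",
)
# Pass the path to a Darija audio clip; the pipeline handles decoding and resampling.
print(pipe("darija_sample.wav")["text"])
```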
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.1244 | 0.625 | 5 | 4.0913 | 0.0 |
| 4.1401 | 1.25 | 10 | 3.8806 | 0.0 |
| 3.5438 | 1.875 | 15 | 3.0904 | 0.0 |
| 2.7946 | 2.5 | 20 | 2.2453 | 0.0023 |
| 2.1793 | 3.125 | 25 | 1.7106 | 0.0083 |
| 1.6133 | 3.75 | 30 | 1.2200 | 0.0310 |
| 1.1125 | 4.375 | 35 | 0.8124 | 0.1554 |
| 0.8674 | 5.0 | 40 | 0.4519 | 0.4140 |
| 0.4645 | 5.625 | 45 | 0.2318 | 0.5646 |
| 0.2348 | 6.25 | 50 | 0.1173 | 0.6654 |
| 0.1596 | 6.875 | 55 | 0.0513 | 0.7341 |
| 0.0745 | 7.5 | 60 | 0.0323 | 0.7247 |
| 0.0447 | 8.125 | 65 | 0.0136 | 0.7440 |
| 0.014 | 8.75 | 70 | 0.0113 | 0.7284 |
| 0.0185 | 9.375 | 75 | 0.0107 | 0.7352 |
| 0.0638 | 10.0 | 80 | 0.0421 | 0.7070 |
| 0.0472 | 10.625 | 85 | 0.0503 | 0.6970 |
| 0.0681 | 11.25 | 90 | 0.0879 | 0.6954 |
| 0.1465 | 11.875 | 95 | 0.0407 | 0.6819 |
| 0.0483 | 12.5 | 100 | 0.0835 | 0.6678 |
| 0.1844 | 13.125 | 105 | 0.0661 | 0.6744 |
| 0.0737 | 13.75 | 110 | 0.1486 | 0.6494 |
| 0.1454 | 14.375 | 115 | 0.1018 | 0.6439 |
| 0.1203 | 15.0 | 120 | 0.0444 | 0.7143 |
| 0.0858 | 15.625 | 125 | 0.0148 | 0.7320 |
| 0.0463 | 16.25 | 130 | 0.0726 | 0.6406 |
| 0.1464 | 16.875 | 135 | 0.0586 | 0.6699 |
| 0.0938 | 17.5 | 140 | 0.0447 | 0.6639 |
| 0.1116 | 18.125 | 145 | 0.0737 | 0.6801 |
| 0.1031 | 18.75 | 150 | 0.0906 | 0.6794 |
| 0.1601 | 19.375 | 155 | 0.1172 | 0.6540 |
| 0.1957 | 20.0 | 160 | 0.0271 | 0.7095 |
| 0.0043 | 20.625 | 165 | 0.0491 | 0.6874 |
| 0.1013 | 21.25 | 170 | 0.0221 | 0.7341 |
| 0.0506 | 21.875 | 175 | 0.0313 | 0.6938 |
| 0.0545 | 22.5 | 180 | 0.0664 | 0.6533 |
| 0.1434 | 23.125 | 185 | 0.0586 | 0.6346 |
| 0.0891 | 23.75 | 190 | 0.0947 | 0.6823 |
| 0.1784 | 24.375 | 195 | 0.1534 | 0.6343 |
| 0.3143 | 25.0 | 200 | 0.1054 | 0.6431 |
| 0.182 | 25.625 | 205 | 0.0546 | 0.6610 |
| 0.0698 | 26.25 | 210 | 0.0816 | 0.6662 |
| 0.1513 | 26.875 | 215 | 0.0420 | 0.7162 |
| 0.0759 | 27.5 | 220 | 0.0995 | 0.6411 |
| 0.191 | 28.125 | 225 | 0.0334 | 0.7012 |
| 0.0429 | 28.75 | 230 | 0.0748 | 0.6273 |
| 0.1608 | 29.375 | 235 | 0.1665 | 0.5937 |
| 0.2917 | 30.0 | 240 | 0.1436 | 0.6353 |
| 0.2379 | 30.625 | 245 | 0.0348 | 0.6940 |
| 0.0835 | 31.25 | 250 | 0.0238 | 0.7153 |
| 0.0293 | 31.875 | 255 | 0.0581 | 0.6983 |
| 0.0946 | 32.5 | 260 | 0.0471 | 0.7104 |
| 0.1223 | 33.125 | 265 | 0.0660 | 0.7389 |
| 0.1151 | 33.75 | 270 | 0.0598 | 0.7160 |
| 0.1367 | 34.375 | 275 | 0.1139 | 0.6796 |
| 0.1004 | 35.0 | 280 | 0.0553 | 0.7200 |
| 0.0921 | 35.625 | 285 | 0.0396 | 0.6818 |
| 0.0523 | 36.25 | 290 | 0.0691 | 0.6757 |
| 0.0866 | 36.875 | 295 | 0.0505 | 0.7211 |
| 0.1391 | 37.5 | 300 | 0.0480 | 0.6985 |
| 0.0674 | 38.125 | 305 | 0.0701 | 0.6544 |
| 0.058 | 38.75 | 310 | 0.0546 | 0.7081 |
| 0.1008 | 39.375 | 315 | 0.0587 | 0.6832 |
| 0.0989 | 40.0 | 320 | 0.0435 | 0.6986 |
| 0.053 | 40.625 | 325 | 0.0094 | 0.7107 |
| 0.0164 | 41.25 | 330 | 0.0218 | 0.7248 |
| 0.0541 | 41.875 | 335 | 0.0036 | 0.7274 |
| 0.0086 | 42.5 | 340 | 0.0126 | 0.7213 |
| 0.0288 | 43.125 | 345 | 0.0004 | 0.7440 |
| 0.0006 | 43.75 | 350 | 0.0007 | 0.7440 |
| 0.0008 | 44.375 | 355 | 0.0201 | 0.7187 |
| 0.0334 | 45.0 | 360 | 0.0220 | 0.7380 |
| 0.0401 | 45.625 | 365 | 0.0002 | 0.7440 |
| 0.0003 | 46.25 | 370 | 0.0375 | 0.7178 |
| 0.0575 | 46.875 | 375 | 0.0009 | 0.7440 |
| 0.0011 | 47.5 | 380 | 0.0088 | 0.7250 |
| 0.1052 | 48.125 | 385 | 0.0353 | 0.7248 |
| 0.0138 | 48.75 | 390 | 0.0002 | 0.7440 |
| 0.0003 | 49.375 | 395 | 0.0003 | 0.7440 |
| 0.0007 | 50.0 | 400 | 0.0001 | 0.7440 |
| 0.0001 | 50.625 | 405 | 0.0037 | 0.7415 |
| 0.0001 | 51.25 | 410 | 0.0158 | 0.7415 |
| 0.0387 | 51.875 | 415 | 0.0001 | 0.7440 |
| 0.0001 | 52.5 | 420 | 0.0008 | 0.7440 |
| 0.0025 | 53.125 | 425 | 0.0001 | 0.7440 |
| 0.0001 | 53.75 | 430 | 0.0001 | 0.7440 |
| 0.0001 | 54.375 | 435 | 0.0001 | 0.7440 |
| 0.0001 | 55.0 | 440 | 0.0001 | 0.7440 |
| 0.0 | 55.625 | 445 | 0.0000 | 0.7440 |
| 0.0 | 56.25 | 450 | 0.0000 | 0.7440 |
| 0.0 | 56.875 | 455 | 0.0000 | 0.7440 |
| 0.0 | 57.5 | 460 | 0.0000 | 0.7440 |
| 0.0 | 58.125 | 465 | 0.0000 | 0.7440 |
| 0.0 | 58.75 | 470 | 0.0000 | 0.7440 |
| 0.0 | 59.375 | 475 | 0.0000 | 0.7440 |
| 0.0 | 60.0 | 480 | 0.0000 | 0.7440 |
| 0.0 | 60.625 | 485 | 0.0000 | 0.7440 |
| 0.0 | 61.25 | 490 | 0.0000 | 0.7440 |
| 0.0 | 61.875 | 495 | 0.0000 | 0.7440 |
| 0.0 | 62.5 | 500 | 0.0000 | 0.7440 |
| 0.0 | 63.125 | 505 | 0.0000 | 0.7440 |
| 0.0 | 63.75 | 510 | 0.0000 | 0.7440 |
| 0.0 | 64.375 | 515 | 0.0000 | 0.7440 |
| 0.0 | 65.0 | 520 | 0.0000 | 0.7440 |
| 0.0 | 65.625 | 525 | 0.0000 | 0.7440 |
| 0.0 | 66.25 | 530 | 0.0000 | 0.7440 |
| 0.0 | 66.875 | 535 | 0.0000 | 0.7440 |
| 0.0 | 67.5 | 540 | 0.0000 | 0.7440 |
| 0.0 | 68.125 | 545 | 0.0000 | 0.7440 |
| 0.0 | 68.75 | 550 | 0.0000 | 0.7440 |
| 0.0 | 69.375 | 555 | 0.0000 | 0.7440 |
| 0.0 | 70.0 | 560 | 0.0000 | 0.7440 |
| 0.0 | 70.625 | 565 | 0.0000 | 0.7440 |
| 0.0 | 71.25 | 570 | 0.0000 | 0.7440 |
| 0.0 | 71.875 | 575 | 0.0000 | 0.7440 |
| 0.0 | 72.5 | 580 | 0.0000 | 0.7440 |
| 0.0 | 73.125 | 585 | 0.0000 | 0.7440 |
| 0.0 | 73.75 | 590 | 0.0000 | 0.7440 |
| 0.0 | 74.375 | 595 | 0.0000 | 0.7440 |
| 0.0 | 75.0 | 600 | 0.0000 | 0.7440 |
| 0.0 | 75.625 | 605 | 0.0000 | 0.7440 |
| 0.0 | 76.25 | 610 | 0.0000 | 0.7440 |
| 0.0 | 76.875 | 615 | 0.0000 | 0.7440 |
| 0.0 | 77.5 | 620 | 0.0000 | 0.7440 |
| 0.0 | 78.125 | 625 | 0.0000 | 0.7440 |
| 0.0 | 78.75 | 630 | 0.0000 | 0.7440 |
| 0.0 | 79.375 | 635 | 0.0000 | 0.7440 |
| 0.0 | 80.0 | 640 | 0.0000 | 0.7440 |
| 0.0 | 80.625 | 645 | 0.0000 | 0.7440 |
| 0.0 | 81.25 | 650 | 0.0000 | 0.7440 |
| 0.0 | 81.875 | 655 | 0.0000 | 0.7440 |
| 0.0 | 82.5 | 660 | 0.0000 | 0.7440 |
| 0.0 | 83.125 | 665 | 0.0000 | 0.7440 |
| 0.0 | 83.75 | 670 | 0.0000 | 0.7440 |
| 0.0 | 84.375 | 675 | 0.0000 | 0.7440 |
| 0.0 | 85.0 | 680 | 0.0000 | 0.7440 |
| 0.0 | 85.625 | 685 | 0.0000 | 0.7440 |
| 0.0 | 86.25 | 690 | 0.0000 | 0.7440 |
| 0.0 | 86.875 | 695 | 0.0000 | 0.7440 |
| 0.0 | 87.5 | 700 | 0.0000 | 0.7440 |
| 0.0 | 88.125 | 705 | 0.0000 | 0.7440 |
| 0.0 | 88.75 | 710 | 0.0000 | 0.7440 |
| 0.0 | 89.375 | 715 | 0.0000 | 0.7440 |
| 0.0 | 90.0 | 720 | 0.0000 | 0.7440 |
| 0.0 | 90.625 | 725 | 0.0000 | 0.7440 |
| 0.0 | 91.25 | 730 | 0.0000 | 0.7440 |
| 0.0 | 91.875 | 735 | 0.0000 | 0.7440 |
| 0.0 | 92.5 | 740 | 0.0000 | 0.7440 |
| 0.0 | 93.125 | 745 | 0.0000 | 0.7440 |
| 0.0 | 93.75 | 750 | 0.0000 | 0.7440 |
| 0.0 | 94.375 | 755 | 0.0000 | 0.7440 |
| 0.0 | 95.0 | 760 | 0.0000 | 0.7440 |
| 0.0 | 95.625 | 765 | 0.0000 | 0.7440 |
| 0.0 | 96.25 | 770 | 0.0000 | 0.7440 |
| 0.0 | 96.875 | 775 | 0.0000 | 0.7440 |
| 0.0 | 97.5 | 780 | 0.0000 | 0.7440 |
| 0.0 | 98.125 | 785 | 0.0000 | 0.7440 |
| 0.0 | 98.75 | 790 | 0.0000 | 0.7440 |
| 0.0 | 99.375 | 795 | 0.0000 | 0.7440 |
| 0.0 | 100.0 | 800 | 0.0000 | 0.7440 |
| 0.0 | 100.625 | 805 | 0.0000 | 0.7440 |
| 0.0 | 101.25 | 810 | 0.0000 | 0.7440 |
| 0.0 | 101.875 | 815 | 0.0000 | 0.7440 |
| 0.0 | 102.5 | 820 | 0.0000 | 0.7440 |
| 0.0 | 103.125 | 825 | 0.0000 | 0.7440 |
| 0.0 | 103.75 | 830 | 0.0000 | 0.7440 |
| 0.0 | 104.375 | 835 | 0.0000 | 0.7440 |
| 0.0 | 105.0 | 840 | 0.0000 | 0.7440 |
| 0.0 | 105.625 | 845 | 0.0000 | 0.7440 |
| 0.0 | 106.25 | 850 | 0.0000 | 0.7440 |
| 0.0 | 106.875 | 855 | 0.0000 | 0.7440 |
| 0.0 | 107.5 | 860 | 0.0000 | 0.7440 |
| 0.0 | 108.125 | 865 | 0.0000 | 0.7440 |
| 0.0 | 108.75 | 870 | 0.0000 | 0.7440 |
| 0.0 | 109.375 | 875 | 0.0000 | 0.7440 |
| 0.0 | 110.0 | 880 | 0.0000 | 0.7440 |
| 0.0 | 110.625 | 885 | 0.0000 | 0.7440 |
| 0.0 | 111.25 | 890 | 0.0000 | 0.7440 |
| 0.0 | 111.875 | 895 | 0.0000 | 0.7440 |
| 0.0 | 112.5 | 900 | 0.0000 | 0.7440 |
| 0.0 | 113.125 | 905 | 0.0000 | 0.7440 |
| 0.0 | 113.75 | 910 | 0.0000 | 0.7440 |
| 0.0 | 114.375 | 915 | 0.0000 | 0.7440 |
| 0.0 | 115.0 | 920 | 0.0000 | 0.7440 |
| 0.0 | 115.625 | 925 | 0.0000 | 0.7440 |
| 0.0 | 116.25 | 930 | 0.0000 | 0.7440 |
| 0.0 | 116.875 | 935 | 0.0000 | 0.7440 |
| 0.0 | 117.5 | 940 | 0.0000 | 0.7440 |
| 0.0 | 118.125 | 945 | 0.0000 | 0.7440 |
| 0.0 | 118.75 | 950 | 0.0000 | 0.7440 |
| 0.0 | 119.375 | 955 | 0.0000 | 0.7440 |
| 0.0 | 120.0 | 960 | 0.0000 | 0.7440 |
| 0.0 | 120.625 | 965 | 0.0000 | 0.7440 |
| 0.0 | 121.25 | 970 | 0.0000 | 0.7440 |
| 0.0 | 121.875 | 975 | 0.0000 | 0.7440 |
| 0.0 | 122.5 | 980 | 0.0000 | 0.7440 |
| 0.0 | 123.125 | 985 | 0.0000 | 0.7440 |
| 0.0 | 123.75 | 990 | 0.0000 | 0.7440 |
| 0.0 | 124.375 | 995 | 0.0000 | 0.7440 |
| 0.0 | 125.0 | 1000 | 0.0000 | 0.7440 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 2.19.2
- Tokenizers 0.21.0
|
michaelfeil/ct2fast-dolly-v2-12b | michaelfeil | "2023-05-03T09:32:43Z" | 7 | 3 | transformers | [
"transformers",
"ctranslate2",
"dolly-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-05-01T12:37:10Z" | ---
license: mit
tags:
- ctranslate2
- dolly-v2
---
# Fast-Inference with Ctranslate2
Speed up inference by 2x-8x using int8 inference in C++.
Quantized version of [databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b).
```bash
pip install "hf_hub_ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```
Checkpoint compatible with [ctranslate2](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
model_name = "michaelfeil/ct2fast-dolly-v2-12b"
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16"
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing?"],
)
print(outputs)
```
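For instruction-style prompts, it helps to use Dolly's instruction format (produced by `encode_prompt` in the section below); a hedged sketch reusing the generator above, with an assumed example instruction:
```python
# Build a prompt in Dolly's instruction format (see encode_prompt below).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three uses for int8 quantization.\n\n"
    "### Response:\n"
)
outputs = model.generate(text=[prompt])
print(outputs)
```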
# Licence and other remarks:
This is just a quantized version. The licence conditions are intended to be identical to the original Hugging Face repo.
# Usage of Dolly-v2:
According to the Instruction Pipeline of databricks/dolly-v2-12b
```python
# from https://huggingface.co/databricks/dolly-v2-12b
def encode_prompt(instruction):
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
END_KEY = "### End"
INTRO_BLURB = (
"Below is an instruction that describes a task. Write a response that appropriately completes the request."
)
# This is the prompt that is used for generating responses using an already trained model. It ends with the response
# key, where the job of the model is to provide the completion that follows it (i.e. the response itself).
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
return PROMPT_FOR_GENERATION_FORMAT.format(instruction=instruction)
``` |
mindw96/gemma-ko-7b-dacon-llm1 | mindw96 | "2025-02-13T10:49:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:beomi/gemma-ko-7b",
"base_model:finetune:beomi/gemma-ko-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T13:53:19Z" | ---
base_model: beomi/gemma-ko-7b
library_name: transformers
model_name: gemma-ko-7b-dacon-llm1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-ko-7b-dacon-llm1
This model is a fine-tuned version of [beomi/gemma-ko-7b](https://huggingface.co/beomi/gemma-ko-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mindw96/gemma-ko-7b-dacon-llm1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mindw96/huggingface/runs/qrqy68ik)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SurgeGlobal/open-llama-v2-lamini-orca-qlora-checkpoint | SurgeGlobal | "2024-04-18T05:45:36Z" | 5 | 0 | peft | [
"peft",
"text-generation",
"en",
"dataset:SurgeGlobal/Orca",
"license:apache-2.0",
"region:us"
] | text-generation | "2023-09-21T16:55:35Z" | ---
library_name: peft
license: apache-2.0
datasets:
- SurgeGlobal/Orca
language:
- en
pipeline_tag: text-generation
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0 |
MoMonir/SFR-Embedding-Mistral-GGUF | MoMonir | "2024-05-12T13:07:36Z" | 27 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"feature-extraction",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-12T11:39:06Z" | ---
tags:
- mteb
- sentence-transformers
- transformers
license: cc-by-nc-4.0
language:
- en
pipeline_tag: feature-extraction
---
# MoMonir/SFR-Embedding-Mistral-GGUF
This model was converted to GGUF format from [`Salesforce/SFR-Embedding-Mistral`](https://huggingface.co/Salesforce/SFR-Embedding-Mistral) using llama.cpp
Refer to the [original model card](https://huggingface.co/Salesforce/SFR-Embedding-Mistral) for more details on the model.
# Note: This is an Embedding Model
For more information about embeddings, check the [`OpenAI Embedding Document`](https://platform.openai.com/docs/guides/embeddings).
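A minimal sketch of computing embeddings from one of the GGUF files with `llama-cpp-python` (the quant file name and the example text are assumptions, not from the original card):
```python
from llama_cpp import Llama

# Path to whichever quant you downloaded from this repo (file name assumed here).
llm = Llama(model_path="SFR-Embedding-Mistral.Q4_K_M.gguf", embedding=True)

# Returns a list of floats; useful for semantic search and retrieval.
vector = llm.embed("Where was Marie Curie born?")
print(len(vector))
``` |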
bbiasi/analise_sentimento_bahia_marketing | bbiasi | "2023-04-22T13:58:03Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:bbiasi/autotrain-data-sentimental_analysis_bit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-22T13:54:15Z" | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bbiasi/autotrain-data-sentimental_analysis_bit
co2_eq_emissions:
emissions: 0.006100220390969195
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 51605122225
- CO2 Emissions (in grams): 0.0061
## Validation Metrics
- Loss: 0.370
- Accuracy: 0.826
- Precision: 0.845
- Recall: 0.920
- AUC: 0.899
- F1: 0.880
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bbiasi/autotrain-sentimental_analysis_bit-51605122225
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bbiasi/autotrain-sentimental_analysis_bit-51605122225", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bbiasi/autotrain-sentimental_analysis_bit-51605122225", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
edgilr/clasificador-poem-sentiment | edgilr | "2024-03-09T15:01:55Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-09T15:01:27Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-poem-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-poem-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3532
- Accuracy: 0.8942
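A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="edgilr/clasificador-poem-sentiment")
print(clf("The quiet moon consoles the weary field."))
```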
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 112 | 0.4743 | 0.8462 |
| No log | 2.0 | 224 | 0.3945 | 0.8942 |
| No log | 3.0 | 336 | 0.3532 | 0.8942 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pminha/roberta-base-squad-i8-f32-p25 | pminha | "2023-12-14T14:15:11Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"arxiv:1910.09700",
"model-index",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | question-answering | "2023-12-14T13:19:22Z" | ---
model-index:
- name: pminha/roberta-base-squad-i8-f32-p25
results:
- task:
type: question-answering
dataset:
name: squad_v2
type: squad_v2
metrics:
- type: squad_v2
value:
exact: 79.13753895392908
f1: 82.15672952358491
total: 11873
HasAns_exact: 76.34952766531714
HasAns_f1: 82.39656707718039
HasAns_total: 5928
NoAns_exact: 81.91757779646763
NoAns_f1: 81.91757779646763
NoAns_total: 5945
best_exact: 79.13753895392908
best_exact_thresh: 0.10000000149011612
best_f1: 82.15672952358479
best_f1_thresh: 0.10000000149011612
name: squad_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
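In the absence of an official snippet, here is a hedged sketch using the standard `transformers` question-answering pipeline (the weights are bitsandbytes 8-bit, so a CUDA device with `bitsandbytes` installed is likely required; question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="pminha/roberta-base-squad-i8-f32-p25")
result = qa(
    question="What does pruning remove?",
    context="Pruning removes redundant weights from a neural network.",
)
print(result["answer"], result["score"])
```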
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
advokat/MapleSyrup | advokat | "2023-02-05T01:32:49Z" | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-05T01:17:16Z" | ---
license: creativeml-openrail-m
---
|
Jipumpkin/ViT_beans | Jipumpkin | "2024-10-08T06:21:33Z" | 194 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"LMH",
"3_class",
"VIT",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-10-08T06:21:11Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- LMH
- 3_class
- VIT
- generated_from_trainer
model-index:
- name: ViT_beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7454
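A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Jipumpkin/ViT_beans")
# Any local path or URL to a bean-leaf image works here.
print(classifier("bean_leaf.jpg"))
```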
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8311 |
| No log | 2.0 | 34 | 0.7454 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mradermacher/DeepSeek-R1-Medical-yx-GGUF | mradermacher | "2025-03-16T19:26:01Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:dayangxi/DeepSeek-R1-Medical-yx",
"base_model:quantized:dayangxi/DeepSeek-R1-Medical-yx",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-16T19:13:16Z" | ---
base_model: dayangxi/DeepSeek-R1-Medical-yx
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dayangxi/DeepSeek-R1-Medical-yx
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
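As one concrete (hedged) example, a quant from the table below can be run with llama.cpp's CLI; the file name and binary name depend on the quant you pick and your llama.cpp version:
```bash
# Download one quant from this repo, then chat with it (recent llama.cpp builds ship `llama-cli`).
huggingface-cli download mradermacher/DeepSeek-R1-Medical-yx-GGUF DeepSeek-R1-Medical-yx.Q4_K_M.gguf --local-dir .
./llama-cli -m DeepSeek-R1-Medical-yx.Q4_K_M.gguf -p "What are common symptoms of anemia?"
```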
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-yx-GGUF/resolve/main/DeepSeek-R1-Medical-yx.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FelipeMahlow/17406842774634914 | FelipeMahlow | "2025-02-27T20:53:06Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-02-27T19:24:49Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: jian
widget:
- text: A photo of jian
output:
url: image_0.png
- text: A photo of jian
output:
url: image_1.png
- text: A photo of jian
output:
url: image_2.png
- text: A photo of jian
output:
url: image_3.png
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - FelipeMahlow/17406842774634914
<Gallery />
## Model description
These are FelipeMahlow/17406842774634914 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use jian to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/FelipeMahlow/17406842774634914/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
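Pending the official snippet above, a minimal sketch with the standard diffusers SDXL LoRA API (the repo id, base model, and trigger word come from this card; everything else is an assumption):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("FelipeMahlow/17406842774634914")
image = pipe("A photo of jian").images[0]  # "jian" is the trigger word
image.save("jian.png")
```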
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/L3-8B-Lunaris-v1-GGUF | mradermacher | "2024-06-26T05:15:39Z" | 166 | 11 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"base_model:quantized:Sao10K/L3-8B-Lunaris-v1",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-26T04:20:31Z" | ---
base_model: Sao10K/L3-8B-Lunaris-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/L3-8B-Lunaris-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Lunaris-v1-GGUF/resolve/main/L3-8B-Lunaris-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stanfordnlp/SteamSHP-flan-t5-xl | stanfordnlp | "2023-10-10T23:56:13Z" | 862 | 43 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"human feedback",
"rlhf",
"preferences",
"reddit",
"preference model",
"RL",
"NLG",
"evaluation",
"en",
"dataset:stanfordnlp/SHP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-02-20T04:46:43Z" | ---
license: apache-2.0
datasets:
- stanfordnlp/SHP
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- reddit
- preference model
- RL
- NLG
- evaluation
---
# 💨🚢 SteamSHP-XL
<!-- Provide a quick summary of what the model is/does. -->
**If you mention this model, please cite the paper:** [Understanding Dataset Difficulty with V-Usable Information (ICML 2022)](https://proceedings.mlr.press/v162/ethayarajh22a.html).
SteamSHP-XL is a preference model trained to predict -- given some context and two possible responses -- which response humans will find more helpful.
It can be used for NLG evaluation or as a reward model for RLHF.
It is a FLAN-T5-xl model (3B parameters) finetuned on:
1. The [Stanford Human Preferences Dataset (SHP)](https://huggingface.co/datasets/stanfordnlp/SHP), which contains collective human preferences sourced from 18 different communities on Reddit (e.g., `askculinary`, `legaladvice`, etc.).
2. The helpfulness data in [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
There is a smaller variant called [SteamSHP-Large](https://huggingface.co/kawine/SteamSHP-flan-t5-large) that was made by finetuning FLAN-T5-large (780M parameters).
Despite being 1/4 of the size, it is on average only 0.75 points less accurate on the SHP + Anthropic test data (across all domains).
## Usage
### Normal Usage
The input text should be of the format:
```
POST: { the context, such as the 'history' column in SHP (not containing any newlines \n) }
RESPONSE A: { first possible continuation (not containing any newlines \n) }
RESPONSE B: { second possible continuation (not containing any newlines \n) }
Which response is better? RESPONSE
```
The output generated by SteamSHP-XL will either be `A` or `B`.
Here's how to use the model:
```python
>> from transformers import T5ForConditionalGeneration, T5Tokenizer
>> device = 'cuda' # if you have a GPU
>> tokenizer = T5Tokenizer.from_pretrained('stanfordnlp/SteamSHP-flan-t5-xl')
>> model = T5ForConditionalGeneration.from_pretrained('stanfordnlp/SteamSHP-flan-t5-xl').to(device)
>> input_text = "POST: Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes? I've already donated a bunch and gave a bunch away. I'm planning on making a bunch of lime-themed cocktails, but... jeez. Ceviche? \n\n RESPONSE A: Lime juice, and zest, then freeze in small quantities.\n\n RESPONSE B: Lime marmalade lol\n\n Which response is better? RESPONSE"
>> x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
>> y = model.generate(x, max_new_tokens=1)
>> tokenizer.batch_decode(y, skip_special_tokens=True)
['A']
```
If the input exceeds the 512-token limit, you can use [pySBD](https://github.com/nipunsadvilkar/pySBD) to break the input up into sentences and only include what fits into 512 tokens.
When trying to cram an example into 512 tokens, we recommend truncating the context as much as possible and leaving the responses untouched as much as possible.
### Reward Model Usage
If you want to use SteamSHP-XL as a reward model -- to get a score for a single response -- then you need to structure the input such that RESPONSE A is what you want to score and RESPONSE B is just an empty input:
```
POST: { the context, such as the 'history' column in SHP (not containing any newlines \n) }
RESPONSE A: { continuation (not containing any newlines \n) }
RESPONSE B: .
Which response is better? RESPONSE
```
Then calculate the probability assigned to the label A.
This probability (or the logit, depending on what you want) is the score for the response:
```python
>> import torch
>> input_text = "POST: Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes? I've already donated a bunch and gave a bunch away. I'm planning on making a bunch of lime-themed cocktails, but... jeez. Ceviche? \n\n RESPONSE A: Lime juice, and zest, then freeze in small quantities.\n\n RESPONSE B: .\n\n Which response is better? RESPONSE"
>> x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
>> outputs = model.generate(x, return_dict_in_generate=True, output_scores=True, max_new_tokens=1)
>> torch.exp(outputs.scores[0][:, 71]) / torch.exp(outputs.scores[0][:,:]).sum(axis=1).item() # index 71 corresponds to the token for 'A'
0.819
```
The probability will almost always be high (in the range of 0.8 to 1.0), since RESPONSE B is just a null input.
Therefore you may want to normalize the probability.
You can also compare the two probabilities assigned independently to each response (given the same context) to infer the preference label.
For example, if one response has probability 0.95 and the other has 0.80, the former will be preferred.
Inferring the preference label in this way only leads to a 0.006 drop in accuracy on the SHP + HH-RLHF test data on average across all domains, meaning that there's only a very small penalty for using SteamSHP-XL as a reward model instead of as a preference model.
## Training and Evaluation
SteamSHP-XL was only finetuned on 125K of the 392K training examples that were available, since we found that:
1. When the total input length exceeded the limit (512 tokens), the loss would not converge.
When possible, we crammed an example to fit under 500 tokens by truncating the context as much as possible, though some examples would still not fit despite this.
We used 500 as the limit instead of 512 to allow for slight modifications to the structure of the input without any examples exceeding the actual 512 limit.
2. Training on fewer preferences with a stronger signal led to better performance than training on all the preferences.
From the SHP dataset, we only used preferences where the more preferred comment was twice as preferred as the other (i.e., `score_ratio` >= 2) and used no more than 5 preferences from each context (i.e., 5 examples per unique `post_id`) to prevent overfitting.
We did no such subsampling for the HH-RLHF training data.
We evaluated the model on the SHP and HH-RLHF test data using accuracy, but only on the data that could be truncated to fit within 500 tokens (a total of 18621 out of 20753 available test examples).
SteamSHP-XL gets an average 72.8% accuracy across all domains:
| Domain | Accuracy |
| ------ | -------- |
| askculinary | 0.7199 |
| askhr | 0.7743 |
| askdocs | 0.7210 |
| askanthropology | 0.7594 |
| asksciencefiction | 0.7283 |
| askacademia | 0.7442 |
| askengineers | 0.7183 |
| legaladvice | 0.8068 |
| explainlikeimfive | 0.7392 |
| askbaking | 0.6741 |
| askphysics | 0.8000 |
| askscience | 0.7114 |
| askphilosophy | 0.6907 |
| askvet | 0.7742 |
| changemyview | 0.7043 |
| askcarguys | 0.7568 |
| askhistorians | 0.7476 |
| asksocialscience | 0.7308 |
| anthropic (helpfulness) | 0.7310 |
| ALL (unweighted) | 0.7278 |
As mentioned previously, if you use SteamSHP as a reward model and try to infer the preference label based on the probability assigned to each response independently, that could also work!
But doing so will lead to a 0.006 drop in accuracy on the test data (on average across all domains), meaning that there is a small penalty.
## Biases and Limitations
SteamSHP is trained to predict which of two responses humans will find *more helpful*, not which response is *less harmful*.
It should not be used to detect toxicity, make ethical judgments, or for a similar purpose.
Biases and misinformation in the datasets used to train SteamSHP may also be propagated downstream to the model predictions.
Although SHP filtered out posts with NSFW (over 18) content and chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may contain discriminatory or harmful language.
The responses that humans collectively found more helpful are also not guaranteed to be more factual.
The people whose preferences are captured in SHP and HH-RLHF are not representative of the broader population.
Although specific demographic information is not available, overall, the Reddit users whose preferences are captured in SHP are disproportionately male and from developed, Western, and English-speaking countries (Pew Research).
[Past work](https://www.anthropic.com/model-written-evals.pdf) by Anthropic has found that models optimized for human preference can be obsequious, at the expense of the truth.
## Contact
Please contact [email protected] if you have any questions about the model.
This model was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, Yizhong Wang, and Dan Jurafsky.
## Citation
SHP was created using the techniques proposed in the following paper. Please cite this work if you use SHP or the SteamSHP models:
```
@InProceedings{pmlr-v162-ethayarajh22a,
title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {5988--6008},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
}
``` |
cleanrl/Pong-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | "2023-03-07T17:36:06Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-17T06:02:43Z" | ---
tags:
- Pong-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v5
type: Pong-v5
metrics:
- type: mean_reward
value: 21.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Pong-v5**
This is a trained model of a PPO agent playing Pong-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Pong-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pong-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Pong-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pong-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Pong-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Pong-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Novaciano/NIGGERKILLER-1B_Creative_Writing_RP-GGUF | Novaciano | "2025-02-05T07:09:32Z" | 139 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"1b",
"4-bit",
"nsfw",
"llama 3.2",
"es",
"en",
"dataset:Dampfinchen/Creative_Writing_Multiturn-Balanced-8192",
"base_model:mergekit-community/LLAMA-1b-NIGGERKILLER",
"base_model:quantized:mergekit-community/LLAMA-1b-NIGGERKILLER",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-05T05:20:06Z" | ---
base_model: mergekit-community/LLAMA-1b-NIGGERKILLER
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- 1b
- 4-bit
- nsfw
- llama 3.2
license: apache-2.0
datasets:
- Dampfinchen/Creative_Writing_Multiturn-Balanced-8192
language:
- es
- en
---
# NIGGERKILLER 1B Creative Writing RP
This model is a combination of the best reasoning models, into which I have injected a high-quality merge of different writing and roleplay datasets from HuggingFace. This version has an 8K sequence length (Llama 3 tokenizer), which ensures that trainers such as Axolotl do not discard the samples. Made with the script from https://huggingface.co/xzuyn
Read the dataset card for more information: https://huggingface.co/datasets/Dampfinchen/Creative_Writing_Multiturn -> If you can train with an input sequence length of 16K or more, definitely use that instead.
Note that this material may contain all kinds of problematic explicit data. I am not responsible for it, nor do I endorse it. Download the material only if you know the legal status of written fictional content of any kind in your country.
**Note:** Even though the dataset contains NSFW content, try not to confuse it with the uncensored version.
ping98k/xglm-7.5B-alpaca-th-lora | ping98k | "2023-12-05T03:36:21Z" | 3 | 0 | peft | [
"peft",
"base_model:facebook/xglm-7.5B",
"base_model:adapter:facebook/xglm-7.5B",
"region:us"
] | null | "2023-09-17T14:25:10Z" | ---
library_name: peft
base_model: facebook/xglm-7.5B
---
LoRA fine-tune of `facebook/xglm-7.5B` on `Thaweewat/alpaca-cleaned-52k-th`.
Template:
```
### Question: instruction
input
### Answer:
```
```
peft_config = LoraConfig(
r=64,
lora_alpha=128,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"out_proj",
"fc1",
"fc2",
]
)
```
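A hedged sketch for loading the adapter at inference time with the template above (standard `peft` API; the quantization and generation settings here are assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-7.5B")
base = AutoModelForCausalLM.from_pretrained(
    "facebook/xglm-7.5B", device_map="auto", load_in_4bit=True
)
model = PeftModel.from_pretrained(base, "ping98k/xglm-7.5B-alpaca-th-lora")

# Prompt in the template shown above.
prompt = "### Question: แปลเป็นภาษาอังกฤษ\nสวัสดีครับ\n### Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```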
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
jxiao/Reinforce-Pixelcopter-PLE-v0 | jxiao | "2023-01-14T04:50:58Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-08T06:38:53Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.70 +/- 29.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
PrunaAI/xception71.tf_in1k-turbo-green-smashed | PrunaAI | "2024-08-02T15:38:19Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-14T11:46:29Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the smashed model under your own use-case conditions to see whether it benefits you.
- ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial on running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" when the smashed model's measured inference speed, inference memory usage, or inference energy consumption, respectively, is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir xception71.tf_in1k-turbo-green-smashed
huggingface-cli download PrunaAI/xception71.tf_in1k-turbo-green-smashed --local-dir xception71.tf_in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "xception71.tf_in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
import torch

model_path = "xception71.tf_in1k-turbo-green-smashed/model"  # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path)  # Load the model.
image = torch.rand(1, 3, 224, 224).to('cuda')  # Dummy ImageNet-sized input batch.
smashed_model(image)  # Run inference with the smashed model.
```
## Configurations
The configuration info is in `model/smash_config.json`.
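If you want to inspect these settings programmatically, a minimal sketch (the file path is an assumption based on the download layout from the Setup section above):
```python
import json

# Path assumes the repository was downloaded as shown in the Setup section.
with open("xception71.tf_in1k-turbo-green-smashed/model/smash_config.json") as f:
    smash_config = json.load(f)
print(smash_config)
```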
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, xception71.tf_in1k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |