| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
chchen/Llama-3.1-8B-Instruct-SFT-900 | chchen | 2025-01-12T20:55:22Z | 12 | 0 | peft | ["peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us"] | null | 2025-01-12T20:43:01Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-SFT-900
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-SFT-900
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_sft_900 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.201 | 0.9877 | 50 | 1.0016 |
| 0.1407 | 1.9753 | 100 | 0.1513 |
| 0.0885 | 2.9630 | 150 | 0.1082 |
| 0.0743 | 3.9506 | 200 | 0.1068 |
| 0.0855 | 4.9383 | 250 | 0.1062 |
| 0.0571 | 5.9259 | 300 | 0.1058 |
| 0.063 | 6.9136 | 350 | 0.1054 |
| 0.0597 | 7.9012 | 400 | 0.1057 |
| 0.0694 | 8.8889 | 450 | 0.1053 |
| 0.0593 | 9.8765 | 500 | 0.1053 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
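Since the card does not include usage instructions, here is a minimal inference sketch, assuming the adapter loads with the standard 🤗 PEFT API on top of the (gated) base model; the prompt is only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"          # gated base model; access must be granted
adapter_id = "chchen/Llama-3.1-8B-Instruct-SFT-900"   # LoRA adapter from this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)   # attach the fine-tuned adapter

messages = [{"role": "user", "content": "Summarize the benefits of LoRA fine-tuning."}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```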
|
akseljoonas/deberta-v3-predtrade-new-0.647-profit0.339 | akseljoonas | 2025-01-12T20:55:20Z | 6 | 0 | transformers | ["transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-01-12T20:54:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
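In the absence of documented usage, a minimal sketch with the generic text-classification pipeline is shown below; the input text and the meaning of the predicted labels are assumptions, since the card does not describe them:

```python
from transformers import pipeline

# Hypothetical usage: the card does not document the labels or the intended input text.
classifier = pipeline(
    "text-classification",
    model="akseljoonas/deberta-v3-predtrade-new-0.647-profit0.339",
)
print(classifier("Example input text"))
```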
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kk-aivio/89934c3e-bd95-48a9-8e1d-4306b1e26c0a | kk-aivio | 2025-01-12T20:55:00Z | 10 | 0 | peft | ["peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us"] | null | 2025-01-12T20:45:49Z |
---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 89934c3e-bd95-48a9-8e1d-4306b1e26c0a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-350m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- acfb77941d93072a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/acfb77941d93072a_train_data.json
type:
field_input: CWE-ID
field_instruction: CVE-ID
field_output: DESCRIPTION
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/89934c3e-bd95-48a9-8e1d-4306b1e26c0a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/acfb77941d93072a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: df80e456-2e8f-4f39-8af5-41f7ce1a762c
wandb_project: birthday-sn56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: df80e456-2e8f-4f39-8af5-41f7ce1a762c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
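Based on the format strings in the config above, each training prompt is assembled roughly as sketched below; the example row is hypothetical and only illustrates the field mapping (field_instruction: CVE-ID, field_input: CWE-ID, target: DESCRIPTION):

```python
# Sketch of how the custom axolotl format above turns a dataset row into a prompt.
# format: '{instruction} {input}', no_input_format: '{instruction}'
def build_prompt(row: dict) -> str:
    instruction = row["CVE-ID"]        # field_instruction
    input_text = row.get("CWE-ID")     # field_input (optional)
    return f"{instruction} {input_text}" if input_text else instruction

example = {"CVE-ID": "CVE-2021-0001", "CWE-ID": "CWE-79", "DESCRIPTION": "..."}  # hypothetical row
print(build_prompt(example))  # the DESCRIPTION field is the training target
```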
# 89934c3e-bd95-48a9-8e1d-4306b1e26c0a
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.6058 | 0.0000 | 1 | 4.0096 |
| 16.8055 | 0.0001 | 3 | 4.0026 |
| 18.832 | 0.0003 | 6 | 3.9527 |
| 16.743 | 0.0004 | 9 | 3.8645 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Asadali12/videomae-base-finetuned-cricket_shot_detection_12_latest | Asadali12 | 2025-01-12T20:54:43Z | 41 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | video-classification | 2025-01-12T18:56:42Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: videomae-base-finetuned-cricket_shot_detection_12_latest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-cricket_shot_detection_12_latest
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4676
- Accuracy: 0.6316
- F1: 0.6366
- Recall: 0.6316
- Precision: 0.7675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 576
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 1.7629 | 0.1267 | 73 | 1.8203 | 0.2105 | 0.1815 | 0.2105 | 0.1974 |
| 1.5269 | 1.1267 | 146 | 1.7099 | 0.2632 | 0.2731 | 0.2632 | 0.5474 |
| 1.2084 | 2.1267 | 219 | 1.5873 | 0.5789 | 0.5797 | 0.5789 | 0.7570 |
| 1.1034 | 3.1267 | 292 | 1.5001 | 0.5789 | 0.5797 | 0.5789 | 0.7570 |
| 0.9862 | 4.1267 | 365 | 1.4676 | 0.6316 | 0.6366 | 0.6316 | 0.7675 |
| 0.891 | 5.1267 | 438 | 1.4256 | 0.6316 | 0.6366 | 0.6316 | 0.7675 |
| 0.6645 | 6.1267 | 511 | 1.4055 | 0.6316 | 0.6476 | 0.6316 | 0.8184 |
| 0.7161 | 7.1128 | 576 | 1.4041 | 0.6316 | 0.6366 | 0.6316 | 0.7675 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
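The card omits inference instructions; below is a minimal sketch for video classification with this checkpoint, assuming a clip of 16 RGB frames (the random frames are placeholders for frames sampled from a real video):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "Asadali12/videomae-base-finetuned-cricket_shot_detection_12_latest"
processor = AutoImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# Placeholder clip: 16 frames of 224x224 RGB; replace with frames sampled from a cricket video.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```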
|
usyd-community/vitpose-plus-small | usyd-community | 2025-01-12T20:50:10Z | 4,335 | 1 | transformers | ["transformers", "safetensors", "vitpose", "keypoint-detection", "arxiv:2204.12484", "arxiv:2212.04246", "license:apache-2.0", "endpoints_compatible", "region:us"] | keypoint-detection | 2025-01-12T14:41:11Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: keypoint-detection
---
# Model Card for VitPose
<img src="https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/ZuIwMdomy2_6aJ_JTE1Yd.png" alt="x" width="400"/>
This checkpoint belongs to the ViTPose family, introduced in ViTPose (Simple Vision Transformer Baselines for Human Pose Estimation) and extended in ViTPose++ (Vision Transformer Foundation Model for Generic Body Pose Estimation). It obtains 81.1 AP on the MS COCO Keypoint test-dev set.
## Model Details
Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art, i.e., 80.9 AP on the MS COCO test-dev set. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao
- **Funded by:** ARC FL-170100117 and IH-180100002.
- **License:** Apache-2.0
- **Ported to 🤗 Transformers by:** Sangbum Choi and Niels Rogge
### Model Sources
- **Original repository:** https://github.com/ViTAE-Transformer/ViTPose
- **Paper:** https://arxiv.org/pdf/2204.12484
- **Demo:** https://huggingface.co/spaces?sort=trending&search=vitpose
## Uses
The ViTPose model, developed by the ViTAE-Transformer team, is primarily designed for pose estimation tasks. Here are some direct uses of the model:
- **Human Pose Estimation:** The model can be used to estimate the poses of humans in images or videos. This involves identifying the locations of key body joints such as the head, shoulders, elbows, wrists, hips, knees, and ankles.
- **Action Recognition:** By analyzing poses over time, the model can help in recognizing various human actions and activities.
- **Surveillance:** In security and surveillance applications, ViTPose can be used to monitor and analyze human behavior in public spaces or private premises.
- **Health and Fitness:** The model can be utilized in fitness apps to track and analyze exercise poses, providing feedback on form and technique.
- **Gaming and Animation:** ViTPose can be integrated into gaming and animation systems to create more realistic character movements and interactions.
## Bias, Risks, and Limitations
In this paper, we propose a simple yet effective vision transformer baseline for pose estimation, i.e., ViTPose. Despite no elaborate designs in structure, ViTPose obtains SOTA performance on the MS COCO dataset. However, the potential of ViTPose is not fully explored with more advanced technologies, such as complex decoders or FPN structures, which may further improve the performance. Besides, although ViTPose demonstrates exciting properties such as simplicity, scalability, flexibility, and transferability, more research efforts could be made, e.g., exploring prompt-based tuning to demonstrate the flexibility of ViTPose further. In addition, we believe ViTPose can also be applied to other pose estimation datasets, e.g., animal pose estimation [47, 9, 45] and face keypoint detection [21, 6]. We leave these as future work.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
import numpy as np
from PIL import Image
from transformers import (
AutoProcessor,
RTDetrForObjectDetection,
VitPoseForPoseEstimation,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# ------------------------------------------------------------------------
# Stage 1. Detect humans on the image
# ------------------------------------------------------------------------
# You can use any person detector of your choice here
person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)
inputs = person_image_processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
outputs = person_model(**inputs)
results = person_image_processor.post_process_object_detection(
outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)
result = results[0] # take first image results
# The person class corresponds to label index 0 in the COCO dataset
person_boxes = result["boxes"][result["labels"] == 0]
person_boxes = person_boxes.cpu().numpy()
# Convert boxes from VOC (x1, y1, x2, y2) to COCO (x1, y1, w, h) format
person_boxes[:, 2] = person_boxes[:, 2] - person_boxes[:, 0]
person_boxes[:, 3] = person_boxes[:, 3] - person_boxes[:, 1]
# ------------------------------------------------------------------------
# Stage 2. Detect keypoints for each person found
# ------------------------------------------------------------------------
image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-plus-small")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-plus-small", device_map=device)
inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes], threshold=0.3)
image_pose_result = pose_results[0] # results for first image
for i, person_pose in enumerate(image_pose_result):
print(f"Person #{i}")
for keypoint, label, score in zip(
person_pose["keypoints"], person_pose["labels"], person_pose["scores"]
):
keypoint_name = model.config.id2label[label.item()]
x, y = keypoint
print(f" - {keypoint_name}: x={x.item():.2f}, y={y.item():.2f}, score={score.item():.2f}")
```
Output:
```
Person #0
- Nose: x=428.25, y=170.88, score=0.98
- L_Eye: x=428.76, y=168.03, score=0.97
- R_Eye: x=428.09, y=168.15, score=0.82
- L_Ear: x=433.28, y=167.72, score=0.95
- R_Ear: x=440.77, y=166.66, score=0.88
- L_Shoulder: x=440.52, y=177.60, score=0.92
- R_Shoulder: x=444.64, y=178.11, score=0.70
- L_Elbow: x=436.64, y=198.21, score=0.92
- R_Elbow: x=431.42, y=201.19, score=0.76
- L_Wrist: x=430.96, y=218.39, score=0.98
- R_Wrist: x=419.95, y=213.27, score=0.85
- L_Hip: x=445.33, y=222.93, score=0.77
- R_Hip: x=451.91, y=222.52, score=0.75
- L_Knee: x=443.31, y=255.61, score=0.83
- R_Knee: x=451.42, y=255.03, score=0.84
- L_Ankle: x=447.76, y=287.33, score=0.68
- R_Ankle: x=456.78, y=286.08, score=0.83
Person #1
- Nose: x=398.23, y=181.74, score=0.89
- L_Eye: x=398.31, y=179.77, score=0.84
- R_Eye: x=395.99, y=179.46, score=0.91
- R_Ear: x=388.95, y=180.24, score=0.86
- L_Shoulder: x=397.35, y=194.22, score=0.73
- R_Shoulder: x=384.50, y=190.86, score=0.58
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Dataset details. We use the MS COCO [28], AI Challenger [41], MPII [3], and CrowdPose [22] datasets for training and evaluation. The OCHuman [54] dataset is only involved in the evaluation stage to measure the models' performance in dealing with occluded people. The MS COCO dataset contains 118K images and 150K human instances with at most 17 keypoint annotations per instance for training; the dataset is under the CC-BY-4.0 license. The MPII dataset is under the BSD license and contains 15K images and 22K human instances for training, with at most 16 keypoints annotated per instance. AI Challenger is much bigger and contains over 200K training images and 350K human instances, with at most 14 keypoints annotated per instance. OCHuman contains human instances with heavy occlusion and is used only for the val and test sets; it includes 4K images and 8K instances.
#### Training Hyperparameters
- **Training regime:**
#### Speeds, Sizes, Times
## Evaluation
OCHuman val and test set. To evaluate the performance of human pose estimation models on human instances with heavy occlusion, we test the ViTPose variants and representative models on the OCHuman val and test sets with ground-truth bounding boxes. We do not adopt extra human detectors since not all human instances are annotated in the OCHuman datasets, where a human detector would produce many "false positive" bounding boxes and cannot reflect the true ability of pose estimation models. Specifically, the decoder head of ViTPose corresponding to the MS COCO dataset is used, as the keypoint definitions are the same in the MS COCO and OCHuman datasets.

MPII val set. We evaluate the performance of ViTPose and representative models on the MPII val set with ground-truth bounding boxes. Following the default settings of MPII, we use PCKh as the metric for performance evaluation.
### Results
### Model Architecture and Objective
#### Hardware
The models are trained on 8 A100 GPUs based on the mmpose codebase.
## Citation
**BibTeX:**
```bibtex
@article{xu2022vitposesimplevisiontransformer,
title={ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation},
author={Yufei Xu and Jing Zhang and Qiming Zhang and Dacheng Tao},
year={2022},
eprint={2204.12484},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2204.12484}
}
@misc{xu2023vitposevisiontransformergeneric,
title={ViTPose++: Vision Transformer for Generic Body Pose Estimation},
author={Yufei Xu and Jing Zhang and Qiming Zhang and Dacheng Tao},
year={2023},
eprint={2212.04246},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2212.04246},
}
```
|
duyphu/3bc22b23-d6db-f00b-c023-0fc40e39ee8a | duyphu | 2025-01-12T20:49:59Z | 12 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-01-12T20:37:55Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3bc22b23-d6db-f00b-c023-0fc40e39ee8a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1516d1ee6d08c7db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1516d1ee6d08c7db_train_data.json
type:
field_input: p
field_instruction: asks-for
field_output: explanation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/3bc22b23-d6db-f00b-c023-0fc40e39ee8a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/1516d1ee6d08c7db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1849022f-60a5-4fce-8dec-ce632a995207
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1849022f-60a5-4fce-8dec-ce632a995207
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3bc22b23-d6db-f00b-c023-0fc40e39ee8a
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 4.9763 |
| 4.6934 | 0.0088 | 10 | 4.6777 |
| 3.3654 | 0.0176 | 20 | 3.3385 |
| 2.9184 | 0.0265 | 30 | 2.6892 |
| 2.3493 | 0.0353 | 40 | 2.4670 |
| 2.3078 | 0.0441 | 50 | 2.4333 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
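For deployment it can be convenient to merge the LoRA weights into the base model; below is a minimal sketch using the standard PEFT merge API (the output path is an arbitrary example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "duyphu/3bc22b23-d6db-f00b-c023-0fc40e39ee8a").merge_and_unload()

# Save a standalone checkpoint that no longer needs the peft adapter at load time.
merged.save_pretrained("qwen2.5-coder-7b-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct").save_pretrained("qwen2.5-coder-7b-merged")
```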
|
vikyt2846/gulnara | vikyt2846 | 2025-01-12T20:48:31Z | 177 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-01-12T19:58:15Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: gulnara
---
# Gulnara
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `gulnara` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('vikyt2846/gulnara', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso04/bb5d0e76-4ff8-46ac-a7d3-08ba460717fc | lesso04 | 2025-01-12T20:48:13Z | 9 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-12T20:37:51Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bb5d0e76-4ff8-46ac-a7d3-08ba460717fc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: true
chat_template: llama3
datasets:
- data_files:
- d6064e0e61015da3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d6064e0e61015da3_train_data.json
type:
field_input: author
field_instruction: title
field_output: paragraph
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/bb5d0e76-4ff8-46ac-a7d3-08ba460717fc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/d6064e0e61015da3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56f07b4f-9ae5-4a2a-ac20-c9250ed57e82
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56f07b4f-9ae5-4a2a-ac20-c9250ed57e82
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bb5d0e76-4ff8-46ac-a7d3-08ba460717fc
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0007 | 1 | nan |
| 0.0 | 0.0034 | 5 | nan |
| 0.0 | 0.0068 | 10 | nan |
| 0.0 | 0.0103 | 15 | nan |
| 0.0 | 0.0137 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso07/d17e67a8-a09a-4baa-8ac1-0d46e5aa8886 | lesso07 | 2025-01-12T20:46:50Z | 17 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "base_model:adapter:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-12T20:12:48Z |
---
library_name: peft
license: apache-2.0
base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d17e67a8-a09a-4baa-8ac1-0d46e5aa8886
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
bf16: true
chat_template: llama3
datasets:
- data_files:
- be2ba7b03623a3f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be2ba7b03623a3f9_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso07/d17e67a8-a09a-4baa-8ac1-0d46e5aa8886
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/be2ba7b03623a3f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c0407220-bcdf-45f1-9319-5d220bd89166
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c0407220-bcdf-45f1-9319-5d220bd89166
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d17e67a8-a09a-4baa-8ac1-0d46e5aa8886
This model is a fine-tuned version of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.3544 | 0.0002 | 1 | 1.5151 |
| 6.0994 | 0.0009 | 5 | 1.4552 |
| 5.3102 | 0.0019 | 10 | 1.2249 |
| 4.9855 | 0.0028 | 15 | 1.0979 |
| 4.2284 | 0.0038 | 20 | 1.0216 |
| 3.8412 | 0.0047 | 25 | 1.0038 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik1987/63078e3b-e158-42d2-8c22-d99ce1d77b79 | dimasik1987 | 2025-01-12T20:45:43Z | 10 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/mistral-7b-v0.3", "base_model:adapter:unsloth/mistral-7b-v0.3", "license:apache-2.0", "region:us"] | null | 2025-01-12T20:28:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 63078e3b-e158-42d2-8c22-d99ce1d77b79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f37f4750e6ccfd17_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f37f4750e6ccfd17_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik1987/63078e3b-e158-42d2-8c22-d99ce1d77b79
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f37f4750e6ccfd17_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe496a38-12d8-455d-b139-0123bb7357f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fe496a38-12d8-455d-b139-0123bb7357f3
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 63078e3b-e158-42d2-8c22-d99ce1d77b79
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0034 | 8 | nan |
| 0.0 | 0.0068 | 16 | nan |
| 0.0 | 0.0102 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mdsalem17/opus-mt-en-ar-finetuned | mdsalem17 | 2025-01-12T20:45:06Z | 8 | 0 | transformers | ["transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-01-12T20:44:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
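A minimal sketch with the generic translation pipeline is shown below, assuming the fine-tune keeps the English-to-Arabic direction of the original Helsinki-NLP opus-mt-en-ar model; the sample sentence is only illustrative:

```python
from transformers import pipeline

# Assumption: the fine-tuned Marian checkpoint still translates English to Arabic.
translator = pipeline("translation", model="mdsalem17/opus-mt-en-ar-finetuned")
print(translator("Machine translation makes content accessible across languages.")[0]["translation_text"])
```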
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shruticic/coco_finetuned_modelv4 | shruticic | 2025-01-12T20:44:03Z | 172 | 0 | null | ["safetensors", "phi3", "custom_code", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-01-12T20:32:14Z |
---
license: apache-2.0
---
|
thalllsssss/822a776d-c9bb-4850-a280-9cd752f236c4 | thalllsssss | 2025-01-12T20:42:17Z | 10 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored", "base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored", "license:llama3", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-12T20:21:46Z |
---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 822a776d-c9bb-4850-a280-9cd752f236c4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4731ee7238473373_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4731ee7238473373_train_data.json
type:
field_instruction: query
field_output: ori_review
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/822a776d-c9bb-4850-a280-9cd752f236c4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4731ee7238473373_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 152f3551-46bc-4bdb-a1dc-a104e3faed55
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 152f3551-46bc-4bdb-a1dc-a104e3faed55
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 822a776d-c9bb-4850-a280-9cd752f236c4
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0532 | 0.0801 | 200 | 1.9310 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
chauhoang/62676781-25e7-5ce6-e09f-565bef2a6294 | chauhoang | 2025-01-12T20:39:26Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us"] | null | 2025-01-12T20:38:10Z |
---
library_name: peft
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 62676781-25e7-5ce6-e09f-565bef2a6294
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 120e2b58d59a1b2e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/120e2b58d59a1b2e_train_data.json
type:
field_input: original_code
field_instruction: update_snippet
field_output: final_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/62676781-25e7-5ce6-e09f-565bef2a6294
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/120e2b58d59a1b2e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 562f173b-b07d-4eb4-a59f-d230672ec843
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 562f173b-b07d-4eb4-a59f-d230672ec843
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 62676781-25e7-5ce6-e09f-565bef2a6294
This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 10.3742 |
| 10.3739 | 0.0148 | 10 | 10.3740 |
| 10.3749 | 0.0296 | 20 | 10.3736 |
| 10.3723 | 0.0444 | 30 | 10.3733 |
| 10.3734 | 0.0592 | 40 | 10.3732 |
| 10.3744 | 0.0740 | 50 | 10.3731 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
xiuyul/mamba-2.8b-zephyr
|
xiuyul
| 2025-01-12T20:38:57Z
| 22,599
| 18
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:xiuyul/mamba-2.8b-ultrachat",
"base_model:finetune:xiuyul/mamba-2.8b-ultrachat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-28T17:36:20Z
|
---
license: apache-2.0
base_model: xiuyul/mamba-2.8b-ultrachat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: mamba-2.8b-zephyr
results: []
---
# mamba-2.8b-zephyr
This model is a fine-tuned version of [xiuyul/mamba-2.8b-ultrachat](https://huggingface.co/xiuyul/mamba-2.8b-ultrachat) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset, trained using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
The base model, [xiuyul/mamba-2.8b-ultrachat](https://huggingface.co/xiuyul/mamba-2.8b-ultrachat), was instruction-tuned from [state-spaces/mamba-2.8b-slimpj](https://huggingface.co/state-spaces/mamba-2.8b-slimpj) on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4996
- Rewards/chosen: -0.4523
- Rewards/rejected: -1.6105
- Rewards/accuracies: 0.7857
- Rewards/margins: 1.1582
- Logps/rejected: -290.1885
- Logps/chosen: -359.0926
- Logits/rejected: 23.0423
- Logits/chosen: 23.1861
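For context, the reward columns above follow the standard DPO bookkeeping; this is the general formulation from the cited paper, stated here only as an aid to reading the table, not a detail specific to this run. The objective being minimized is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the chosen and rejected responses. In the usual convention, Rewards/chosen and Rewards/rejected are the implicit rewards $\beta\log\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}$ evaluated on the chosen and rejected responses, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs for which the chosen reward exceeds the rejected one.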
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6639 | 0.1 | 100 | 0.6593 | 0.1762 | 0.0957 | 0.6151 | 0.0805 | -273.1268 | -352.8086 | 23.5852 | 23.8356 |
| 0.5804 | 0.21 | 200 | 0.5836 | 0.0780 | -0.3396 | 0.6508 | 0.4176 | -277.4798 | -353.7904 | 23.5872 | 23.8302 |
| 0.5815 | 0.31 | 300 | 0.5510 | -0.1923 | -0.7857 | 0.7421 | 0.5934 | -281.9403 | -356.4929 | 23.5224 | 23.7498 |
| 0.5526 | 0.41 | 400 | 0.5361 | -0.1953 | -0.8928 | 0.7341 | 0.6975 | -283.0119 | -356.5235 | 23.5033 | 23.7264 |
| 0.5225 | 0.52 | 500 | 0.5262 | -0.1041 | -0.8809 | 0.7540 | 0.7768 | -282.8929 | -355.6114 | 23.4578 | 23.6718 |
| 0.5577 | 0.62 | 600 | 0.5156 | -0.1946 | -1.0285 | 0.7659 | 0.8339 | -284.3683 | -356.5158 | 23.4466 | 23.6618 |
| 0.5515 | 0.72 | 700 | 0.5163 | 0.0648 | -0.7650 | 0.7659 | 0.8298 | -281.7334 | -353.9220 | 23.4243 | 23.6343 |
| 0.5159 | 0.83 | 800 | 0.5113 | -0.1400 | -1.0595 | 0.7778 | 0.9195 | -284.6783 | -355.9698 | 23.4095 | 23.6179 |
| 0.5242 | 0.93 | 900 | 0.5089 | -0.0383 | -0.9148 | 0.7659 | 0.8766 | -283.2318 | -354.9529 | 23.4035 | 23.6145 |
| 0.4618 | 1.03 | 1000 | 0.5077 | -0.1223 | -1.0201 | 0.7778 | 0.8978 | -284.2841 | -355.7929 | 23.3805 | 23.5856 |
| 0.4484 | 1.14 | 1100 | 0.5019 | -0.3311 | -1.3299 | 0.7778 | 0.9989 | -287.3827 | -357.8807 | 23.3427 | 23.5381 |
| 0.4228 | 1.24 | 1200 | 0.5034 | -0.0617 | -1.0989 | 0.7619 | 1.0372 | -285.0726 | -355.1871 | 23.3191 | 23.5101 |
| 0.4306 | 1.34 | 1300 | 0.5032 | -0.1585 | -1.1849 | 0.7698 | 1.0264 | -285.9320 | -356.1549 | 23.2889 | 23.4787 |
| 0.4678 | 1.45 | 1400 | 0.5030 | -0.2351 | -1.1601 | 0.7817 | 0.9250 | -285.6841 | -356.9207 | 23.2661 | 23.4551 |
| 0.4317 | 1.55 | 1500 | 0.4997 | -0.1401 | -1.1458 | 0.7619 | 1.0057 | -285.5417 | -355.9716 | 23.2621 | 23.4524 |
| 0.4363 | 1.65 | 1600 | 0.5010 | -0.3313 | -1.3592 | 0.7738 | 1.0279 | -287.6752 | -357.8830 | 23.2320 | 23.4178 |
| 0.408 | 1.76 | 1700 | 0.4989 | -0.2456 | -1.3073 | 0.7778 | 1.0617 | -287.1568 | -357.0265 | 23.2135 | 23.3950 |
| 0.4076 | 1.86 | 1800 | 0.4996 | -0.3904 | -1.4365 | 0.7659 | 1.0461 | -288.4482 | -358.4738 | 23.1866 | 23.3617 |
| 0.4547 | 1.96 | 1900 | 0.5008 | -0.2516 | -1.2648 | 0.7857 | 1.0133 | -286.7317 | -357.0858 | 23.1605 | 23.3298 |
| 0.3469 | 2.07 | 2000 | 0.4977 | -0.2868 | -1.3916 | 0.7778 | 1.1048 | -287.9999 | -357.4383 | 23.1361 | 23.2990 |
| 0.3547 | 2.17 | 2100 | 0.4987 | -0.4251 | -1.5510 | 0.7619 | 1.1259 | -289.5935 | -358.8210 | 23.1142 | 23.2730 |
| 0.3468 | 2.27 | 2200 | 0.4979 | -0.2674 | -1.3945 | 0.7778 | 1.1271 | -288.0285 | -357.2443 | 23.0998 | 23.2561 |
| 0.3432 | 2.37 | 2300 | 0.5026 | -0.3792 | -1.4630 | 0.7738 | 1.0838 | -288.7130 | -358.3621 | 23.0726 | 23.2233 |
| 0.324 | 2.48 | 2400 | 0.5022 | -0.4892 | -1.6090 | 0.7698 | 1.1198 | -290.1737 | -359.4620 | 23.0543 | 23.2006 |
| 0.3556 | 2.58 | 2500 | 0.5010 | -0.5270 | -1.6576 | 0.7817 | 1.1306 | -290.6595 | -359.8404 | 23.0520 | 23.1981 |
| 0.3277 | 2.68 | 2600 | 0.4990 | -0.5401 | -1.6816 | 0.7778 | 1.1415 | -290.8996 | -359.9708 | 23.0449 | 23.1901 |
| 0.3262 | 2.79 | 2700 | 0.4993 | -0.4952 | -1.6410 | 0.7778 | 1.1458 | -290.4932 | -359.5220 | 23.0439 | 23.1878 |
| 0.3566 | 2.89 | 2800 | 0.4985 | -0.4474 | -1.5918 | 0.7778 | 1.1443 | -290.0010 | -359.0445 | 23.0433 | 23.1871 |
| 0.3386 | 2.99 | 2900 | 0.4983 | -0.4598 | -1.6040 | 0.7817 | 1.1442 | -290.1235 | -359.1679 | 23.0427 | 23.1866 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
chauhoang/baa96401-589f-3e68-5b54-6af808dc0c02
|
chauhoang
| 2025-01-12T20:37:34Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T17:19:30Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: baa96401-589f-3e68-5b54-6af808dc0c02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b1c907d61911f89_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b1c907d61911f89_train_data.json
type:
field_input: ''
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/baa96401-589f-3e68-5b54-6af808dc0c02
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b1c907d61911f89_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b69135f5-60c0-4b54-855e-44c16515f329
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b69135f5-60c0-4b54-855e-44c16515f329
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# baa96401-589f-3e68-5b54-6af808dc0c02
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.5157 |
| 2.4992 | 0.0002 | 10 | 2.2979 |
| 2.1464 | 0.0004 | 20 | 2.1912 |
| 2.161 | 0.0006 | 30 | 2.1511 |
| 2.1858 | 0.0009 | 40 | 2.1366 |
| 2.0831 | 0.0011 | 50 | 2.1329 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso10/cb151fbb-595a-464d-88dd-06303179e04f
|
lesso10
| 2025-01-12T20:37:05Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T20:15:00Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cb151fbb-595a-464d-88dd-06303179e04f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 2a533c64ec73f9ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a533c64ec73f9ac_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso10/cb151fbb-595a-464d-88dd-06303179e04f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 4
mlflow_experiment_name: /tmp/2a533c64ec73f9ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5729fc3b-3dbb-42fa-ba81-0ccef9a26a22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5729fc3b-3dbb-42fa-ba81-0ccef9a26a22
warmup_steps: 5
weight_decay: 0.0
xformers_attention: true
```
</details><br>
# cb151fbb-595a-464d-88dd-06303179e04f
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0012 | 1 | nan |
| 0.0 | 0.0059 | 5 | nan |
| 0.0 | 0.0117 | 10 | nan |
| 0.0 | 0.0176 | 15 | nan |
| 0.0 | 0.0235 | 20 | nan |
| 0.0 | 0.0293 | 25 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
great0001/37f01772-fa92-4fce-9ef0-39ec815e15a4
|
great0001
| 2025-01-12T20:34:36Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T20:29:46Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37f01772-fa92-4fce-9ef0-39ec815e15a4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f37f4750e6ccfd17_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f37f4750e6ccfd17_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/37f01772-fa92-4fce-9ef0-39ec815e15a4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f37f4750e6ccfd17_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe496a38-12d8-455d-b139-0123bb7357f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fe496a38-12d8-455d-b139-0123bb7357f3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 37f01772-fa92-4fce-9ef0-39ec815e15a4
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0013 | 3 | nan |
| 0.0 | 0.0026 | 6 | nan |
| 0.0 | 0.0038 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JayHyeon/Qwen_0.5-DPO_3e-7-3ep_0alp_0lam
|
JayHyeon
| 2025-01-12T20:33:33Z
| 21
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T14:22:17Z
|
---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-DPO_3e-7-3ep_0alp_0lam
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-DPO_3e-7-3ep_0alp_0lam
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-DPO_3e-7-3ep_0alp_0lam", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/bwvj3cq0)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
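The card does not publish the exact launch command, so the following is only a rough sketch of what DPO training with TRL on this dataset can look like; the learning rate and epoch count are inferred from the model name, and `beta` is an illustrative default rather than a confirmed setting:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs (prompt / chosen / rejected) used for DPO.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="Qwen_0.5-DPO_3e-7-3ep_0alp_0lam",
    learning_rate=3e-7,   # inferred from the model name; an assumption
    num_train_epochs=3,   # inferred from the model name; an assumption
    beta=0.1,             # illustrative default, not confirmed by the card
)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```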
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
JacksonBrune/d8e319fc-baad-45cb-882a-6442579b1913
|
JacksonBrune
| 2025-01-12T20:32:58Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-12T20:31:03Z
|
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d8e319fc-baad-45cb-882a-6442579b1913
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 24222e9e99f33788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/24222e9e99f33788_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/d8e319fc-baad-45cb-882a-6442579b1913
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/24222e9e99f33788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c89b072-43fd-4d9a-a986-8347ee9352a9
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c89b072-43fd-4d9a-a986-8347ee9352a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d8e319fc-baad-45cb-882a-6442579b1913
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0027 | 1 | nan |
| 0.0 | 0.0080 | 3 | nan |
| 0.0 | 0.0159 | 6 | nan |
| 0.0 | 0.0239 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
VERSIL91/1c89b072-43fd-4d9a-a986-8347ee9352a9
|
VERSIL91
| 2025-01-12T20:32:46Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-12T20:22:53Z
|
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c89b072-43fd-4d9a-a986-8347ee9352a9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 24222e9e99f33788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/24222e9e99f33788_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/1c89b072-43fd-4d9a-a986-8347ee9352a9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/24222e9e99f33788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c89b072-43fd-4d9a-a986-8347ee9352a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c89b072-43fd-4d9a-a986-8347ee9352a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c89b072-43fd-4d9a-a986-8347ee9352a9
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0106 | 1 | nan |
| 0.0 | 0.0531 | 5 | nan |
| 0.0 | 0.1062 | 10 | nan |
| 0.0 | 0.1593 | 15 | nan |
| 0.0 | 0.2123 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso11/704051da-f62a-414d-893d-0c20e81fe4d0
|
lesso11
| 2025-01-12T20:29:26Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:26:45Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 704051da-f62a-414d-893d-0c20e81fe4d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: true
chat_template: llama3
datasets:
- data_files:
- a05b72f12491e874_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a05b72f12491e874_train_data.json
type:
field_input: llama-generation
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/704051da-f62a-414d-893d-0c20e81fe4d0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/a05b72f12491e874_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8545d224-ec4d-4dfb-907a-6c5cad06d476
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8545d224-ec4d-4dfb-907a-6c5cad06d476
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 704051da-f62a-414d-893d-0c20e81fe4d0
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9069 | 0.0002 | 1 | 0.8982 |
| 3.2197 | 0.0011 | 5 | 0.8634 |
| 2.8171 | 0.0021 | 10 | 0.8040 |
| 5.2559 | 0.0032 | 15 | 0.7843 |
| 3.455 | 0.0043 | 20 | 0.7737 |
| 2.7499 | 0.0054 | 25 | 0.7693 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/6a16764b-cd2f-4f75-a6d2-57a1cade08ab
|
havinash-ai
| 2025-01-12T20:27:43Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T20:27:01Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a16764b-cd2f-4f75-a6d2-57a1cade08ab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 107ffab1dfbb4160_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/107ffab1dfbb4160_train_data.json
type:
field_input: URL
field_instruction: domain
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/6a16764b-cd2f-4f75-a6d2-57a1cade08ab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/107ffab1dfbb4160_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d22ca37-eb44-4813-87aa-fe209ff97a6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0d22ca37-eb44-4813-87aa-fe209ff97a6a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a16764b-cd2f-4f75-a6d2-57a1cade08ab
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0047 | 1 | nan |
| 0.0 | 0.0140 | 3 | nan |
| 0.0 | 0.0279 | 6 | nan |
| 0.0 | 0.0419 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rasta3050/lakh_rock_transfer_model
|
rasta3050
| 2025-01-12T20:26:54Z
| 108
| 0
| null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2025-01-12T20:08:52Z
|
# Lakh MIDI Model with Rock Transfer Learning
## Model Overview
This repository contains a model retrained on the **Lakh MIDI Dataset** with additional transfer learning applied to the **Rock MIDI Dataset**. The base model was trained from scratch on Lakh MIDI and fine-tuned using a smaller, curated dataset of rock compositions to enhance performance on rock music generation tasks.
### Training Details
1. **Base Training**:
- Dataset: Lakh MIDI Dataset (cleaned and filtered).
- Training: The model was trained from scratch on this dataset to learn general musical structures and styles.
2. **Transfer Learning**:
- Dataset: A subset of **500 rock MIDI compositions**.
- Epochs: **1 epoch**.
- Purpose: Fine-tuning the model to specialize in generating and understanding rock-specific musical patterns.
This two-step approach ensures that the model retains its general understanding of MIDI data while being optimized for rock music-specific tasks.
## Files in the Repository
The repository includes the following files:
1. **`config.json`**:
- Contains the configuration of the model architecture. This includes details such as the number of layers, hidden dimensions, attention heads, and other parameters used to define the model.
2. **`generation_config.json`**:
- Contains generation-specific settings, such as maximum sequence length, temperature, top-k, and top-p sampling parameters. These configurations are crucial for controlling the behavior of the MIDI sequence generation process.
3. **`model.safetensors`**:
- The model weights saved in the `safetensors` format for efficient and secure loading. This format ensures safe deserialization of model weights.
4. **`training_args.bin`**:
- Stores the training arguments and hyperparameters used during both base training and transfer learning. This file can be useful for reproducing the training setup or understanding the specifics of the training process.
## Dataset Details
### Lakh MIDI Dataset
- Focus: General MIDI compositions across various genres.
- Cleaning Process: Removed duplicates, ensured proper formatting, and filtered out noisy data.
### Rock MIDI Dataset (Transfer Learning)
- Focus: Rock genre-specific MIDI compositions.
- Size: 500 compositions.
- Epochs: 1 epoch.
- Purpose: Fine-tuning the model for improved rock music generation.
## Usage
This model is suitable for:
- General MIDI music generation.
- Specialized rock music generation tasks.
- Experimentation with transfer learning techniques in music AI.
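As a hedged illustration of the general MIDI generation use case, the checkpoint can be loaded as an ordinary GPT-2 causal LM. Note that this repository ships only weights and configs, so the event tokenizer and the conversion from generated tokens back to a MIDI file must come from the original MMM-JSB pipeline; the tokenizer file path and the `PIECE_START` priming token below are assumptions, not files or conventions confirmed by this repo:

```python
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

repo_id = "rasta3050/lakh_rock_transfer_model"
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The event tokenizer comes from the MMM-JSB preprocessing; "tokenizer.json" is an assumed local path.
tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")

# Prime with an MMM-style piece-start event (assumption) and sample a continuation.
inputs = tokenizer("PIECE_START", return_tensors="pt")
token_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.0, top_k=50)
print(tokenizer.decode(token_ids[0]))  # event tokens; convert back to MIDI with the MMM-JSB tooling
```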
## Original Code Base
The original model and architecture are based on the repository [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). The base training and transfer learning were performed to adapt this architecture for diverse and genre-specific tasks.
## License
This model follows the licensing terms of the original repository. Please review the license for more details.
|
rasta3050/lakh_pop_transfer_model
|
rasta3050
| 2025-01-12T20:26:19Z
| 8
| 0
| null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2025-01-11T19:39:18Z
|
# Lakh MIDI Model with Pop Transfer Learning
## Model Overview
This repository contains a model retrained on the **Lakh MIDI Dataset** with additional transfer learning applied to the **Pop MIDI Dataset**. The base model was trained from scratch on Lakh MIDI and fine-tuned using a smaller, curated dataset of pop music to improve performance on pop genre tasks.
### Training Details
1. **Base Training**:
- Dataset: Lakh MIDI Dataset (cleaned and filtered).
- Training: The model was trained from scratch on this dataset to learn general musical structures and styles.
2. **Transfer Learning**:
- Dataset: A subset of **512 pop MIDI compositions**.
- Epochs: **1 epoch**.
- Purpose: Fine-tuning the model to improve its ability to generate and understand pop-specific musical patterns.
This two-step approach ensures that the model retains its general understanding of MIDI data while being optimized for pop genre tasks.
## Files in the Repository
The repository includes the following files:
1. **`config.json`**:
- Contains the configuration of the model architecture. This includes details such as the number of layers, hidden dimensions, attention heads, and other parameters used to define the model.
2. **`generation_config.json`**:
- Contains generation-specific settings, such as maximum sequence length, temperature, top-k, and top-p sampling parameters. These configurations are crucial for controlling the behavior of the MIDI sequence generation process.
3. **`model.safetensors`**:
- The model weights saved in the `safetensors` format for efficient and secure loading. This format ensures safe deserialization of model weights.
4. **`training_args.bin`**:
- Stores the training arguments and hyperparameters used during both base training and transfer learning. This file can be useful for reproducing the training setup or understanding the specifics of the training process.
## Dataset Details
### Lakh MIDI Dataset
- Focus: General MIDI compositions across various genres.
- Cleaning Process: Removed duplicates, ensured proper formatting, and filtered out noisy data.
### Pop MIDI Dataset (Transfer Learning)
- Focus: Pop genre-specific MIDI compositions.
- Size: 512 compositions.
- Purpose: Fine-tuning the model for improved pop music generation.
## Usage
This model is suitable for:
- General MIDI music generation.
- Specialized pop music generation tasks.
- Experimentation with transfer learning techniques in music AI.
## Original Code Base
The original model and architecture are based on the repository [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). The base training and transfer learning were performed to adapt this architecture for diverse and genre-specific tasks.
## License
This model follows the licensing terms of the original repository. Please review the license for more details.
|
lesso02/34565c90-71a0-4bc0-b749-e53aa7ea776b
|
lesso02
| 2025-01-12T20:26:13Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T20:22:49Z
|
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 34565c90-71a0-4bc0-b749-e53aa7ea776b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 24222e9e99f33788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/24222e9e99f33788_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso02/34565c90-71a0-4bc0-b749-e53aa7ea776b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/24222e9e99f33788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c89b072-43fd-4d9a-a986-8347ee9352a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c89b072-43fd-4d9a-a986-8347ee9352a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 34565c90-71a0-4bc0-b749-e53aa7ea776b
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0027 | 1 | nan |
| 0.0 | 0.0133 | 5 | nan |
| 0.0 | 0.0265 | 10 | nan |
| 0.0 | 0.0398 | 15 | nan |
| 0.0 | 0.0531 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rasta3050/aiguru_lakh
|
rasta3050
| 2025-01-12T20:25:02Z
| 996
| 0
| null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2025-01-04T13:47:46Z
|
# Lakh MIDI Model
## Model Overview
This repository contains the retrained model based on the original code and architecture provided by [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). The model has been trained from scratch on the **Lakh MIDI Dataset**, which has been carefully cleaned and prepared for this task.
The model is suitable for generating MIDI sequences and offers enhanced performance due to the improved dataset and careful retraining. The training process took approximately **50 hours** on an **RTX 4080 Super** GPU, utilizing a dataset of about **6,000 MIDI compositions**. The files included in this repository are essential for loading and utilizing the model efficiently.
## Files in the Repository
The repository includes the following files:
1. **`config.json`**:
- Contains the configuration of the model architecture. This includes details such as the number of layers, hidden dimensions, attention heads, and other parameters used to define the model.
2. **`generation_config.json`**:
- Contains generation-specific settings, such as maximum sequence length, temperature, top-k, and top-p sampling parameters. These configurations are crucial for controlling the behavior of the MIDI sequence generation process.
3. **`model.safetensors`**:
- The model weights saved in the `safetensors` format for efficient and secure loading. This format ensures safe deserialization of model weights.
4. **`training_args.bin`**:
- Stores the training arguments and hyperparameters used during the training process. This file can be useful for reproducing the training setup or understanding the specifics of the training process.
## Dataset Details
The model was trained on the **Lakh MIDI Dataset**, which has undergone extensive cleaning to ensure high-quality training data. The cleaning process involved:
- Removing duplicates.
- Ensuring proper formatting of MIDI files.
- Filtering out noisy or incomplete data.
This dataset was chosen for its diverse range of MIDI sequences, providing the model with a rich set of training examples.
## Original Code Base
The original model and architecture are based on the repository [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). This implementation has been retrained from scratch to work with the Lakh MIDI Dataset for MIDI generation tasks.
## License
This model follows the licensing terms of the original repository. Please review the license for more details.
|
Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF
|
Triangle104
| 2025-01-12T20:24:56Z
| 23
| 0
|
transformers
|
[
"transformers",
"gguf",
"language",
"granite-3.1",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/granite-3.1-8b-instruct-abliterated",
"base_model:quantized:huihui-ai/granite-3.1-8b-instruct-abliterated",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-12-26T14:37:37Z
|
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.1
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
base_model: huihui-ai/granite-3.1-8b-instruct-abliterated
---
# Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/granite-3.1-8b-instruct-abliterated`](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q5_K_S-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q5_k_s.gguf -c 2048
```
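If you prefer to stay in Python, a minimal sketch using the `llama-cpp-python` bindings is shown below; it assumes the package is installed and the GGUF file has already been downloaded locally (for example with `huggingface-cli download`), and the local file path is an assumption:

```python
from llama_cpp import Llama

# Path to the locally downloaded GGUF file (assumed location).
llm = Llama(model_path="granite-3.1-8b-instruct-abliterated-q5_k_s.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```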
|
kostiantynk1205/cbb3ef21-ae49-43f8-974a-869ec00743ae
|
kostiantynk1205
| 2025-01-12T20:24:52Z
| 20
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:adapter:lmsys/vicuna-7b-v1.3",
"region:us"
] | null | 2025-01-12T20:05:00Z
|
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cbb3ef21-ae49-43f8-974a-869ec00743ae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c3f29cc94841d3ff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c3f29cc94841d3ff_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/cbb3ef21-ae49-43f8-974a-869ec00743ae
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c3f29cc94841d3ff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cbb3ef21-ae49-43f8-974a-869ec00743ae
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1216 | 0.0001 | 1 | 1.2380 |
| 1.3219 | 0.0002 | 3 | 1.2361 |
| 1.3853 | 0.0005 | 6 | 1.2195 |
| 1.1493 | 0.0007 | 9 | 1.1578 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rasta3050/aiguru
|
rasta3050
| 2025-01-12T20:24:29Z
| 9
| 0
| null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2024-12-21T10:20:38Z
|
# Lakh MIDI Model
## Model Overview
This repository contains the retrained model based on the original code and architecture provided by [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). The model has been trained from scratch on the **Lakh MIDI Dataset**.
The model is suitable for generating MIDI sequences and serves as a baseline implementation without additional optimizations to the training process or code. The files included in this repository are essential for loading and utilizing the model efficiently.
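For a quick start, below is a minimal loading sketch using the standard `transformers` GPT-2 classes. It assumes the checkpoint is compatible with `GPT2LMHeadModel` and that a tokenizer from the original MMM-JSB pipeline is available locally, since this repository ships only the model weights and configuration files.
```python
# Minimal loading sketch (assumptions: the standard GPT-2 causal LM class works with this
# checkpoint, and an MMM-JSB-style tokenizer is available at a local path).
from transformers import AutoTokenizer, GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("rasta3050/aiguru")  # reads config.json and model.safetensors
tokenizer = AutoTokenizer.from_pretrained("path/to/mmm-tokenizer")  # hypothetical local tokenizer path

# "PIECE_START" is the sequence-start token used by the MMM encoding; check the tokenizer vocabulary.
input_ids = tokenizer("PIECE_START", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0]))
```
The defaults stored in `generation_config.json` (maximum length, temperature, top-k/top-p) are picked up automatically by `generate()` unless overridden as above.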
## Files in the Repository
The repository includes the following files:
1. **`config.json`**:
- Contains the configuration of the model architecture. This includes details such as the number of layers, hidden dimensions, attention heads, and other parameters used to define the model.
2. **`generation_config.json`**:
- Contains generation-specific settings, such as maximum sequence length, temperature, top-k, and top-p sampling parameters. These configurations are crucial for controlling the behavior of the MIDI sequence generation process.
3. **`model.safetensors`**:
- The model weights saved in the `safetensors` format for efficient and secure loading. This format ensures safe deserialization of model weights.
4. **`training_args.bin`**:
- Stores the training arguments and hyperparameters used during the training process. This file can be useful for reproducing the training setup or understanding the specifics of the training process.
## Dataset Details
The model was trained on the **Lakh MIDI Dataset**, which has undergone cleaning to ensure high-quality training data. The cleaning process involved removing duplicates, ensuring proper formatting, and filtering out noisy or incomplete data.
## Original Code Base
The original model and architecture are based on the repository [AI-Guru/MMM-JSB](https://github.com/AI-Guru/MMM-JSB/). This implementation has been retrained from scratch to work with the Lakh MIDI Dataset for MIDI generation tasks.
Additionally, you can find an improved version of this model, trained on the same dataset but with modifications to the code for better training performance. You can access it here: [aiguru_lakh](https://huggingface.co/rasta3050/aiguru_lakh).
## License
This model follows the licensing terms of the original repository. Please review the license for more details.
|
filipesantoscv11/7d6b9e8f-9f22-4ead-bb86-e5b3e99ac1da
|
filipesantoscv11
| 2025-01-12T20:24:26Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-12T20:22:29Z
|
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7d6b9e8f-9f22-4ead-bb86-e5b3e99ac1da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 24222e9e99f33788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/24222e9e99f33788_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: filipesantoscv11/7d6b9e8f-9f22-4ead-bb86-e5b3e99ac1da
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/24222e9e99f33788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c89b072-43fd-4d9a-a986-8347ee9352a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c89b072-43fd-4d9a-a986-8347ee9352a9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7d6b9e8f-9f22-4ead-bb86-e5b3e99ac1da
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0027 | 1 | nan |
| 0.0 | 0.0212 | 8 | nan |
| 0.0 | 0.0425 | 16 | nan |
| 0.0 | 0.0637 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhoxinh/7f4ffc3c-c44f-4d43-812d-33911fd40425
|
nhoxinh
| 2025-01-12T20:24:08Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:39:20Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f4ffc3c-c44f-4d43-812d-33911fd40425
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 00408dd316cd9929_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/00408dd316cd9929_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/7f4ffc3c-c44f-4d43-812d-33911fd40425
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/00408dd316cd9929_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c5e8f9db-4386-4cd4-a076-74cc0ad8ee6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c5e8f9db-4386-4cd4-a076-74cc0ad8ee6a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7f4ffc3c-c44f-4d43-812d-33911fd40425
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3941 | 0.0112 | 200 | 0.4018 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ariannap22/collectiveaction_sft_annotated_only_v6_prompt_v6_p100_synthetic_balanced_more_layered
|
ariannap22
| 2025-01-12T20:23:44Z
| 35
| 0
| null |
[
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-01-12T16:47:09Z
|
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Collective Action Participation Detection Model - Fine-Tuned LLama3
**Note: this is the second step of a layered approach; see [this model](https://huggingface.co/ariannap22/collectiveaction_roberta_simplified_synthetic_weights) for the first step.**
This model detects the level of participation in collective action expressed in a text.
First, detect the binary presence of a participation expression with [this model](https://huggingface.co/ariannap22/collectiveaction_roberta_simplified_synthetic_weights).
Second, for the messages that do express participation, use this model to detect the participation level.
For details on the framework and useful code snippets, see the paper "Extracting Participation in Collective Action from Social Media", Pera and Aiello (2025).
## Usage Example
To use the model, follow the example below:
```python
from transformers import (AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
pipeline)
model_dir = "ariannap22/collectiveaction_sft_annotated_only_v6_prompt_v6_p100_synthetic_balanced_more_layered"
# Define the text you want to predict
texts = [
"We need to stand together for our rights!",
"I volunteer at the local food bank."
]
# Define levels of participation in collective action
dim_def = {'Problem-Solution': "The comment highlights an issue and possibly suggests a way to fix it, often naming those responsible.",
'Call-to-Action': "The comment asks readers to take part in a specific activity, effort, or movement.",
'Intention': "The commenter shares their own desire to do something or be involved in solving a particular issue.",
'Execution': "The commenter is describing their personal experience taking direct actions towards a common goal."}
# Define the prompt
def generate_test_prompt6(data_point):
return f"""
You have the following knowledge about levels of participation in collective action that can be expressed in social media comments: {dim_def}.
### Definitions and Criteria:
**Collective Action Problem:** A present issue caused by human actions or decisions that affects a group and can be addressed through individual or collective efforts.
**Participation in collective action**: A comment must clearly reference a collective action problem, social movement, or activism by meeting at least one of the levels in the list {dim_def.keys()}.
Classify the following social media comment into one of the levels within the list {list(dim_def.keys())}.
### Example of correct output format:
text: xyz
label: None
Return the answer as the corresponding participation in collective action level label.
text: {data_point}
label: """.strip()
texts_prompts = [generate_test_prompt6(text) for text in texts]
# Prepare datasets and load model
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=False,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16",
)
model = AutoModelForCausalLM.from_pretrained(
model_dir,
device_map="auto",
torch_dtype="float16",
quantization_config=bnb_config,
)
model.config.use_cache = False
model.config.pretraining_tp = 1
tokenizer = AutoTokenizer.from_pretrained(model_dir)
tokenizer.pad_token_id = tokenizer.eos_token_id
# Define prediction
def predict(texts, model, tokenizer):
y_pred = []
answers = []
categories = list(dim_def.keys())
for i in range(len(texts)):
prompt = texts[i]
pipe = pipeline(task="text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=20,
temperature=0.1)
result = pipe(prompt)
answer = result[0]['generated_text'].split("label:")[-1].strip()
answers.append(answer)
# Determine the predicted category
for category in categories:
if category.lower() in answer.lower():
y_pred.append(category)
break
else:
y_pred.append("error")
return y_pred, answers
y_pred, answer = predict(texts_prompts, model, tokenizer)
# Print results
for text, pred in zip(texts, y_pred):
print(f"Text: {text}")
print(f"Predicted Class: {pred}")
print("---")
|
nhung03/b3bf527c-03fb-415d-ba92-b90322018d1a
|
nhung03
| 2025-01-12T20:23:39Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:57:14Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b3bf527c-03fb-415d-ba92-b90322018d1a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac97fde3045e6c49_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac97fde3045e6c49_train_data.json
type:
field_instruction: title
field_output: abstract
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/b3bf527c-03fb-415d-ba92-b90322018d1a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac97fde3045e6c49_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b3bf527c-03fb-415d-ba92-b90322018d1a
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.7143 | 0.1471 | 200 | 2.1083 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
prxy5605/65b0335f-58a3-406d-a73b-93ae7b7b38ef
|
prxy5605
| 2025-01-12T20:22:37Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-12T20:11:31Z
|
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 65b0335f-58a3-406d-a73b-93ae7b7b38ef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b03261914fc5eea7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b03261914fc5eea7_train_data.json
type:
field_instruction: prompt
field_output: response-suggestion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: prxy5605/65b0335f-58a3-406d-a73b-93ae7b7b38ef
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/b03261914fc5eea7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 761b9917-3fec-41e1-81b6-128f7eff9b04
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 761b9917-3fec-41e1-81b6-128f7eff9b04
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 65b0335f-58a3-406d-a73b-93ae7b7b38ef
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 361
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 0.9357 |
| 0.7989 | 0.2521 | 91 | 0.7140 |
| 0.6568 | 0.5042 | 182 | 0.7127 |
| 0.6714 | 0.7562 | 273 | 0.6958 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF
|
Triangle104
| 2025-01-12T20:21:06Z
| 25
| 0
|
transformers
|
[
"transformers",
"gguf",
"language",
"granite-3.1",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/granite-3.1-8b-instruct-abliterated",
"base_model:quantized:huihui-ai/granite-3.1-8b-instruct-abliterated",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-12-26T14:32:17Z
|
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.1
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
base_model: huihui-ai/granite-3.1-8b-instruct-abliterated
---
# Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/granite-3.1-8b-instruct-abliterated`](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-8b-instruct-abliterated-Q4_K_M-GGUF --hf-file granite-3.1-8b-instruct-abliterated-q4_k_m.gguf -c 2048
```
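You can also load the GGUF file from Python with the `llama-cpp-python` bindings. The sketch below assumes the package is installed and that the quantized file has already been downloaded to the working directory.
```python
# Sketch using llama-cpp-python (pip install llama-cpp-python); assumes the file
# granite-3.1-8b-instruct-abliterated-q4_k_m.gguf has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3.1-8b-instruct-abliterated-q4_k_m.gguf",
    n_ctx=2048,  # context size, matching the -c 2048 server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```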
|
BallAd-15/llama-3-8b-instruct-task10-subtask3-v1
|
BallAd-15
| 2025-01-12T20:20:11Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T20:13:08Z
|
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BallAd-15
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nttx/d4dd798b-b5f2-42da-b2b1-1889bcec868c
|
nttx
| 2025-01-12T20:18:54Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-12T20:11:25Z
|
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4dd798b-b5f2-42da-b2b1-1889bcec868c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b03261914fc5eea7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b03261914fc5eea7_train_data.json
type:
field_instruction: prompt
field_output: response-suggestion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: nttx/d4dd798b-b5f2-42da-b2b1-1889bcec868c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b03261914fc5eea7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 761b9917-3fec-41e1-81b6-128f7eff9b04
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 761b9917-3fec-41e1-81b6-128f7eff9b04
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d4dd798b-b5f2-42da-b2b1-1889bcec868c
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 0.9357 |
| 1.2579 | 0.1385 | 50 | 0.7369 |
| 1.0147 | 0.2770 | 100 | 0.7201 |
| 1.0811 | 0.4155 | 150 | 0.7046 |
| 1.09 | 0.5540 | 200 | 0.7233 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hongngo/0f3ab3a0-02af-4632-bfb0-e49c15cbd075
|
hongngo
| 2025-01-12T20:18:29Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:39:51Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0f3ab3a0-02af-4632-bfb0-e49c15cbd075
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 35e42979deef2ace_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/35e42979deef2ace_train_data.json
type:
field_instruction: prompt
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/0f3ab3a0-02af-4632-bfb0-e49c15cbd075
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/35e42979deef2ace_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0f3ab3a0-02af-4632-bfb0-e49c15cbd075
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5872 | 0.7583 | 200 | 1.2444 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k15_task5_organization
|
MayBashendy
| 2025-01-12T20:18:16Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T20:11:02Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k15_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k15_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7568
- Qwk: 0.5446
- Mse: 0.7568
- Rmse: 0.8699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0513 | 2 | 4.6165 | -0.0179 | 4.6165 | 2.1486 |
| No log | 0.1026 | 4 | 2.8710 | -0.0231 | 2.8710 | 1.6944 |
| No log | 0.1538 | 6 | 2.1203 | -0.0647 | 2.1203 | 1.4561 |
| No log | 0.2051 | 8 | 1.4524 | 0.0279 | 1.4524 | 1.2051 |
| No log | 0.2564 | 10 | 1.6575 | 0.0300 | 1.6575 | 1.2874 |
| No log | 0.3077 | 12 | 1.5431 | 0.0371 | 1.5431 | 1.2422 |
| No log | 0.3590 | 14 | 1.3230 | -0.0511 | 1.3230 | 1.1502 |
| No log | 0.4103 | 16 | 1.1595 | 0.0882 | 1.1595 | 1.0768 |
| No log | 0.4615 | 18 | 1.2108 | 0.0909 | 1.2108 | 1.1004 |
| No log | 0.5128 | 20 | 1.1741 | 0.1154 | 1.1741 | 1.0836 |
| No log | 0.5641 | 22 | 1.1535 | 0.0792 | 1.1535 | 1.0740 |
| No log | 0.6154 | 24 | 1.1449 | 0.1408 | 1.1449 | 1.0700 |
| No log | 0.6667 | 26 | 1.1799 | 0.0970 | 1.1799 | 1.0862 |
| No log | 0.7179 | 28 | 1.2804 | 0.0232 | 1.2804 | 1.1315 |
| No log | 0.7692 | 30 | 1.3531 | 0.0 | 1.3531 | 1.1632 |
| No log | 0.8205 | 32 | 1.2932 | 0.0380 | 1.2932 | 1.1372 |
| No log | 0.8718 | 34 | 1.1336 | 0.2074 | 1.1336 | 1.0647 |
| No log | 0.9231 | 36 | 1.0738 | 0.1218 | 1.0738 | 1.0363 |
| No log | 0.9744 | 38 | 1.0661 | 0.1370 | 1.0661 | 1.0325 |
| No log | 1.0256 | 40 | 1.0737 | 0.1848 | 1.0737 | 1.0362 |
| No log | 1.0769 | 42 | 1.0947 | 0.2074 | 1.0947 | 1.0463 |
| No log | 1.1282 | 44 | 1.1012 | 0.2125 | 1.1012 | 1.0494 |
| No log | 1.1795 | 46 | 1.0272 | 0.1725 | 1.0272 | 1.0135 |
| No log | 1.2308 | 48 | 0.9670 | 0.2944 | 0.9670 | 0.9833 |
| No log | 1.2821 | 50 | 0.9680 | 0.3288 | 0.9680 | 0.9838 |
| No log | 1.3333 | 52 | 0.9473 | 0.2944 | 0.9473 | 0.9733 |
| No log | 1.3846 | 54 | 0.9451 | 0.2842 | 0.9451 | 0.9722 |
| No log | 1.4359 | 56 | 0.9414 | 0.2865 | 0.9414 | 0.9703 |
| No log | 1.4872 | 58 | 0.9470 | 0.3979 | 0.9470 | 0.9731 |
| No log | 1.5385 | 60 | 1.0563 | 0.2441 | 1.0563 | 1.0277 |
| No log | 1.5897 | 62 | 1.0513 | 0.2834 | 1.0513 | 1.0253 |
| No log | 1.6410 | 64 | 1.0556 | 0.3108 | 1.0556 | 1.0274 |
| No log | 1.6923 | 66 | 1.0785 | 0.3547 | 1.0785 | 1.0385 |
| No log | 1.7436 | 68 | 1.1076 | 0.2835 | 1.1076 | 1.0524 |
| No log | 1.7949 | 70 | 1.1437 | 0.2669 | 1.1437 | 1.0694 |
| No log | 1.8462 | 72 | 1.0474 | 0.2551 | 1.0474 | 1.0234 |
| No log | 1.8974 | 74 | 1.0008 | 0.3414 | 1.0008 | 1.0004 |
| No log | 1.9487 | 76 | 0.9496 | 0.3819 | 0.9496 | 0.9745 |
| No log | 2.0 | 78 | 0.9968 | 0.3063 | 0.9968 | 0.9984 |
| No log | 2.0513 | 80 | 1.0099 | 0.2474 | 1.0099 | 1.0050 |
| No log | 2.1026 | 82 | 0.9361 | 0.3153 | 0.9361 | 0.9675 |
| No log | 2.1538 | 84 | 0.9225 | 0.3304 | 0.9225 | 0.9605 |
| No log | 2.2051 | 86 | 0.8860 | 0.3214 | 0.8860 | 0.9413 |
| No log | 2.2564 | 88 | 0.8880 | 0.3175 | 0.8880 | 0.9423 |
| No log | 2.3077 | 90 | 0.8734 | 0.3744 | 0.8734 | 0.9346 |
| No log | 2.3590 | 92 | 0.8972 | 0.3976 | 0.8972 | 0.9472 |
| No log | 2.4103 | 94 | 0.9447 | 0.4466 | 0.9447 | 0.9719 |
| No log | 2.4615 | 96 | 0.9653 | 0.4231 | 0.9653 | 0.9825 |
| No log | 2.5128 | 98 | 0.9781 | 0.4231 | 0.9781 | 0.9890 |
| No log | 2.5641 | 100 | 0.9767 | 0.4404 | 0.9767 | 0.9883 |
| No log | 2.6154 | 102 | 1.0069 | 0.4711 | 1.0069 | 1.0034 |
| No log | 2.6667 | 104 | 0.9996 | 0.4662 | 0.9996 | 0.9998 |
| No log | 2.7179 | 106 | 1.1371 | 0.3437 | 1.1371 | 1.0663 |
| No log | 2.7692 | 108 | 1.0179 | 0.4211 | 1.0179 | 1.0089 |
| No log | 2.8205 | 110 | 0.9999 | 0.4211 | 0.9999 | 0.9999 |
| No log | 2.8718 | 112 | 1.0285 | 0.3787 | 1.0285 | 1.0142 |
| No log | 2.9231 | 114 | 1.0194 | 0.3787 | 1.0194 | 1.0096 |
| No log | 2.9744 | 116 | 0.9353 | 0.3335 | 0.9353 | 0.9671 |
| No log | 3.0256 | 118 | 0.9335 | 0.4606 | 0.9335 | 0.9662 |
| No log | 3.0769 | 120 | 1.0004 | 0.4278 | 1.0004 | 1.0002 |
| No log | 3.1282 | 122 | 0.8726 | 0.4879 | 0.8726 | 0.9341 |
| No log | 3.1795 | 124 | 1.0981 | 0.4515 | 1.0981 | 1.0479 |
| No log | 3.2308 | 126 | 1.1562 | 0.4471 | 1.1562 | 1.0753 |
| No log | 3.2821 | 128 | 0.9591 | 0.5106 | 0.9591 | 0.9794 |
| No log | 3.3333 | 130 | 0.8275 | 0.5618 | 0.8275 | 0.9097 |
| No log | 3.3846 | 132 | 1.0220 | 0.3878 | 1.0220 | 1.0109 |
| No log | 3.4359 | 134 | 1.1379 | 0.3666 | 1.1379 | 1.0667 |
| No log | 3.4872 | 136 | 0.9192 | 0.4268 | 0.9192 | 0.9587 |
| No log | 3.5385 | 138 | 0.8057 | 0.4792 | 0.8057 | 0.8976 |
| No log | 3.5897 | 140 | 0.9502 | 0.2543 | 0.9502 | 0.9748 |
| No log | 3.6410 | 142 | 0.9009 | 0.3622 | 0.9009 | 0.9492 |
| No log | 3.6923 | 144 | 0.7538 | 0.5510 | 0.7538 | 0.8682 |
| No log | 3.7436 | 146 | 0.8009 | 0.4943 | 0.8009 | 0.8949 |
| No log | 3.7949 | 148 | 0.9927 | 0.4493 | 0.9927 | 0.9963 |
| No log | 3.8462 | 150 | 0.9957 | 0.4579 | 0.9957 | 0.9979 |
| No log | 3.8974 | 152 | 0.8076 | 0.5291 | 0.8076 | 0.8986 |
| No log | 3.9487 | 154 | 0.7955 | 0.5920 | 0.7955 | 0.8919 |
| No log | 4.0 | 156 | 0.7960 | 0.6082 | 0.7960 | 0.8922 |
| No log | 4.0513 | 158 | 0.7556 | 0.5260 | 0.7556 | 0.8693 |
| No log | 4.1026 | 160 | 0.8811 | 0.3001 | 0.8811 | 0.9387 |
| No log | 4.1538 | 162 | 1.0139 | 0.1487 | 1.0139 | 1.0069 |
| No log | 4.2051 | 164 | 0.9353 | 0.3743 | 0.9353 | 0.9671 |
| No log | 4.2564 | 166 | 0.7945 | 0.4988 | 0.7945 | 0.8913 |
| No log | 4.3077 | 168 | 0.8683 | 0.3541 | 0.8683 | 0.9318 |
| No log | 4.3590 | 170 | 0.8541 | 0.3704 | 0.8541 | 0.9242 |
| No log | 4.4103 | 172 | 0.8820 | 0.5065 | 0.8820 | 0.9391 |
| No log | 4.4615 | 174 | 0.9687 | 0.4794 | 0.9687 | 0.9842 |
| No log | 4.5128 | 176 | 0.8793 | 0.4824 | 0.8793 | 0.9377 |
| No log | 4.5641 | 178 | 0.7916 | 0.4119 | 0.7916 | 0.8897 |
| No log | 4.6154 | 180 | 0.8567 | 0.4004 | 0.8567 | 0.9256 |
| No log | 4.6667 | 182 | 0.8216 | 0.5089 | 0.8216 | 0.9064 |
| No log | 4.7179 | 184 | 0.7702 | 0.5939 | 0.7702 | 0.8776 |
| No log | 4.7692 | 186 | 0.8413 | 0.5059 | 0.8413 | 0.9172 |
| No log | 4.8205 | 188 | 0.8852 | 0.4607 | 0.8852 | 0.9409 |
| No log | 4.8718 | 190 | 0.8603 | 0.5305 | 0.8603 | 0.9275 |
| No log | 4.9231 | 192 | 0.8574 | 0.4799 | 0.8574 | 0.9259 |
| No log | 4.9744 | 194 | 0.8373 | 0.4661 | 0.8373 | 0.9150 |
| No log | 5.0256 | 196 | 0.8090 | 0.4110 | 0.8090 | 0.8995 |
| No log | 5.0769 | 198 | 0.7960 | 0.4244 | 0.7960 | 0.8922 |
| No log | 5.1282 | 200 | 0.7876 | 0.4411 | 0.7876 | 0.8874 |
| No log | 5.1795 | 202 | 0.8073 | 0.3941 | 0.8073 | 0.8985 |
| No log | 5.2308 | 204 | 0.9128 | 0.5292 | 0.9128 | 0.9554 |
| No log | 5.2821 | 206 | 0.9012 | 0.4250 | 0.9012 | 0.9493 |
| No log | 5.3333 | 208 | 0.8633 | 0.4875 | 0.8633 | 0.9292 |
| No log | 5.3846 | 210 | 0.8718 | 0.4869 | 0.8718 | 0.9337 |
| No log | 5.4359 | 212 | 0.8762 | 0.5002 | 0.8762 | 0.9360 |
| No log | 5.4872 | 214 | 0.8698 | 0.5129 | 0.8698 | 0.9326 |
| No log | 5.5385 | 216 | 0.8721 | 0.4863 | 0.8721 | 0.9338 |
| No log | 5.5897 | 218 | 0.8440 | 0.5304 | 0.8440 | 0.9187 |
| No log | 5.6410 | 220 | 0.9335 | 0.4270 | 0.9335 | 0.9662 |
| No log | 5.6923 | 222 | 0.9326 | 0.4349 | 0.9326 | 0.9657 |
| No log | 5.7436 | 224 | 0.8294 | 0.4728 | 0.8294 | 0.9107 |
| No log | 5.7949 | 226 | 0.8017 | 0.4353 | 0.8017 | 0.8954 |
| No log | 5.8462 | 228 | 0.8143 | 0.3959 | 0.8143 | 0.9024 |
| No log | 5.8974 | 230 | 0.8392 | 0.4712 | 0.8392 | 0.9161 |
| No log | 5.9487 | 232 | 0.7845 | 0.5010 | 0.7845 | 0.8857 |
| No log | 6.0 | 234 | 0.7825 | 0.5370 | 0.7825 | 0.8846 |
| No log | 6.0513 | 236 | 0.8422 | 0.5279 | 0.8422 | 0.9177 |
| No log | 6.1026 | 238 | 0.8607 | 0.5057 | 0.8607 | 0.9278 |
| No log | 6.1538 | 240 | 0.8598 | 0.5057 | 0.8598 | 0.9273 |
| No log | 6.2051 | 242 | 0.8166 | 0.5463 | 0.8166 | 0.9036 |
| No log | 6.2564 | 244 | 0.8020 | 0.4918 | 0.8020 | 0.8955 |
| No log | 6.3077 | 246 | 0.8120 | 0.4012 | 0.8120 | 0.9011 |
| No log | 6.3590 | 248 | 0.8552 | 0.3922 | 0.8552 | 0.9248 |
| No log | 6.4103 | 250 | 0.8461 | 0.3922 | 0.8461 | 0.9198 |
| No log | 6.4615 | 252 | 0.7902 | 0.4223 | 0.7902 | 0.8889 |
| No log | 6.5128 | 254 | 0.7790 | 0.4692 | 0.7790 | 0.8826 |
| No log | 6.5641 | 256 | 0.7647 | 0.4804 | 0.7647 | 0.8745 |
| No log | 6.6154 | 258 | 0.7802 | 0.5074 | 0.7802 | 0.8833 |
| No log | 6.6667 | 260 | 0.8180 | 0.4845 | 0.8180 | 0.9044 |
| No log | 6.7179 | 262 | 0.8144 | 0.5370 | 0.8144 | 0.9025 |
| No log | 6.7692 | 264 | 0.8191 | 0.5669 | 0.8191 | 0.9050 |
| No log | 6.8205 | 266 | 0.8116 | 0.5370 | 0.8116 | 0.9009 |
| No log | 6.8718 | 268 | 0.8203 | 0.4706 | 0.8203 | 0.9057 |
| No log | 6.9231 | 270 | 0.8084 | 0.4706 | 0.8084 | 0.8991 |
| No log | 6.9744 | 272 | 0.8021 | 0.5275 | 0.8021 | 0.8956 |
| No log | 7.0256 | 274 | 0.7935 | 0.5580 | 0.7935 | 0.8908 |
| No log | 7.0769 | 276 | 0.7835 | 0.5545 | 0.7835 | 0.8852 |
| No log | 7.1282 | 278 | 0.8000 | 0.5494 | 0.8000 | 0.8944 |
| No log | 7.1795 | 280 | 0.8327 | 0.5366 | 0.8327 | 0.9125 |
| No log | 7.2308 | 282 | 0.7913 | 0.5331 | 0.7913 | 0.8895 |
| No log | 7.2821 | 284 | 0.7506 | 0.5570 | 0.7506 | 0.8664 |
| No log | 7.3333 | 286 | 0.7462 | 0.6076 | 0.7462 | 0.8638 |
| No log | 7.3846 | 288 | 0.7442 | 0.5582 | 0.7442 | 0.8627 |
| No log | 7.4359 | 290 | 0.7762 | 0.5558 | 0.7762 | 0.8810 |
| No log | 7.4872 | 292 | 0.7830 | 0.5331 | 0.7830 | 0.8849 |
| No log | 7.5385 | 294 | 0.7635 | 0.5121 | 0.7635 | 0.8738 |
| No log | 7.5897 | 296 | 0.7906 | 0.5234 | 0.7906 | 0.8892 |
| No log | 7.6410 | 298 | 0.8031 | 0.4645 | 0.8031 | 0.8962 |
| No log | 7.6923 | 300 | 0.7985 | 0.4645 | 0.7985 | 0.8936 |
| No log | 7.7436 | 302 | 0.8470 | 0.5291 | 0.8470 | 0.9203 |
| No log | 7.7949 | 304 | 0.9654 | 0.5222 | 0.9654 | 0.9826 |
| No log | 7.8462 | 306 | 1.0083 | 0.4354 | 1.0083 | 1.0041 |
| No log | 7.8974 | 308 | 0.9160 | 0.3523 | 0.9160 | 0.9571 |
| No log | 7.9487 | 310 | 0.8343 | 0.4251 | 0.8343 | 0.9134 |
| No log | 8.0 | 312 | 0.8437 | 0.4165 | 0.8437 | 0.9185 |
| No log | 8.0513 | 314 | 0.8416 | 0.4440 | 0.8416 | 0.9174 |
| No log | 8.1026 | 316 | 0.7828 | 0.4660 | 0.7828 | 0.8848 |
| No log | 8.1538 | 318 | 0.8145 | 0.4630 | 0.8145 | 0.9025 |
| No log | 8.2051 | 320 | 0.9292 | 0.5230 | 0.9292 | 0.9640 |
| No log | 8.2564 | 322 | 0.8687 | 0.5372 | 0.8687 | 0.9320 |
| No log | 8.3077 | 324 | 0.7532 | 0.4760 | 0.7532 | 0.8679 |
| No log | 8.3590 | 326 | 0.7863 | 0.4984 | 0.7863 | 0.8867 |
| No log | 8.4103 | 328 | 0.8325 | 0.5220 | 0.8325 | 0.9124 |
| No log | 8.4615 | 330 | 0.7714 | 0.5176 | 0.7714 | 0.8783 |
| No log | 8.5128 | 332 | 0.7843 | 0.5442 | 0.7843 | 0.8856 |
| No log | 8.5641 | 334 | 0.8575 | 0.5291 | 0.8575 | 0.9260 |
| No log | 8.6154 | 336 | 0.8189 | 0.5410 | 0.8189 | 0.9049 |
| No log | 8.6667 | 338 | 0.7704 | 0.5010 | 0.7704 | 0.8777 |
| No log | 8.7179 | 340 | 0.7651 | 0.5402 | 0.7651 | 0.8747 |
| No log | 8.7692 | 342 | 0.7648 | 0.4760 | 0.7648 | 0.8745 |
| No log | 8.8205 | 344 | 0.7866 | 0.4353 | 0.7866 | 0.8869 |
| No log | 8.8718 | 346 | 0.7921 | 0.4082 | 0.7921 | 0.8900 |
| No log | 8.9231 | 348 | 0.8006 | 0.4082 | 0.8006 | 0.8948 |
| No log | 8.9744 | 350 | 0.7787 | 0.4223 | 0.7787 | 0.8824 |
| No log | 9.0256 | 352 | 0.7718 | 0.4625 | 0.7718 | 0.8785 |
| No log | 9.0769 | 354 | 0.7708 | 0.5142 | 0.7708 | 0.8779 |
| No log | 9.1282 | 356 | 0.7802 | 0.4760 | 0.7802 | 0.8833 |
| No log | 9.1795 | 358 | 0.8296 | 0.4491 | 0.8296 | 0.9108 |
| No log | 9.2308 | 360 | 0.8122 | 0.4960 | 0.8122 | 0.9012 |
| No log | 9.2821 | 362 | 0.7961 | 0.5288 | 0.7961 | 0.8922 |
| No log | 9.3333 | 364 | 0.8481 | 0.4749 | 0.8481 | 0.9209 |
| No log | 9.3846 | 366 | 0.8183 | 0.4444 | 0.8183 | 0.9046 |
| No log | 9.4359 | 368 | 0.8061 | 0.4371 | 0.8061 | 0.8978 |
| No log | 9.4872 | 370 | 0.8997 | 0.5305 | 0.8997 | 0.9485 |
| No log | 9.5385 | 372 | 0.8998 | 0.4952 | 0.8998 | 0.9486 |
| No log | 9.5897 | 374 | 0.8120 | 0.4216 | 0.8120 | 0.9011 |
| No log | 9.6410 | 376 | 0.7773 | 0.4277 | 0.7773 | 0.8816 |
| No log | 9.6923 | 378 | 0.8506 | 0.4752 | 0.8506 | 0.9223 |
| No log | 9.7436 | 380 | 0.8252 | 0.5204 | 0.8252 | 0.9084 |
| No log | 9.7949 | 382 | 0.7663 | 0.5548 | 0.7663 | 0.8754 |
| No log | 9.8462 | 384 | 0.7358 | 0.5288 | 0.7358 | 0.8578 |
| No log | 9.8974 | 386 | 0.7881 | 0.5208 | 0.7881 | 0.8877 |
| No log | 9.9487 | 388 | 0.8637 | 0.5160 | 0.8637 | 0.9294 |
| No log | 10.0 | 390 | 0.8491 | 0.4946 | 0.8491 | 0.9215 |
| No log | 10.0513 | 392 | 0.8079 | 0.4494 | 0.8079 | 0.8988 |
| No log | 10.1026 | 394 | 0.7924 | 0.4507 | 0.7924 | 0.8902 |
| No log | 10.1538 | 396 | 0.7694 | 0.4405 | 0.7694 | 0.8772 |
| No log | 10.2051 | 398 | 0.7560 | 0.5142 | 0.7560 | 0.8695 |
| No log | 10.2564 | 400 | 0.7817 | 0.4858 | 0.7817 | 0.8842 |
| No log | 10.3077 | 402 | 0.8922 | 0.5458 | 0.8922 | 0.9445 |
| No log | 10.3590 | 404 | 0.9171 | 0.4916 | 0.9171 | 0.9577 |
| No log | 10.4103 | 406 | 0.8472 | 0.5306 | 0.8472 | 0.9205 |
| No log | 10.4615 | 408 | 0.7580 | 0.4628 | 0.7580 | 0.8706 |
| No log | 10.5128 | 410 | 0.7043 | 0.5399 | 0.7043 | 0.8392 |
| No log | 10.5641 | 412 | 0.6979 | 0.5399 | 0.6979 | 0.8354 |
| No log | 10.6154 | 414 | 0.6915 | 0.5498 | 0.6915 | 0.8316 |
| No log | 10.6667 | 416 | 0.7144 | 0.5346 | 0.7144 | 0.8452 |
| No log | 10.7179 | 418 | 0.7029 | 0.5492 | 0.7029 | 0.8384 |
| No log | 10.7692 | 420 | 0.7007 | 0.5831 | 0.7007 | 0.8371 |
| No log | 10.8205 | 422 | 0.7120 | 0.5409 | 0.7120 | 0.8438 |
| No log | 10.8718 | 424 | 0.7198 | 0.4778 | 0.7198 | 0.8484 |
| No log | 10.9231 | 426 | 0.7308 | 0.4659 | 0.7308 | 0.8549 |
| No log | 10.9744 | 428 | 0.7425 | 0.4659 | 0.7425 | 0.8617 |
| No log | 11.0256 | 430 | 0.7532 | 0.4540 | 0.7532 | 0.8679 |
| No log | 11.0769 | 432 | 0.7558 | 0.4908 | 0.7558 | 0.8694 |
| No log | 11.1282 | 434 | 0.7613 | 0.4628 | 0.7613 | 0.8725 |
| No log | 11.1795 | 436 | 0.7515 | 0.4628 | 0.7515 | 0.8669 |
| No log | 11.2308 | 438 | 0.7441 | 0.5017 | 0.7441 | 0.8626 |
| No log | 11.2821 | 440 | 0.7399 | 0.4888 | 0.7399 | 0.8602 |
| No log | 11.3333 | 442 | 0.7240 | 0.4644 | 0.7240 | 0.8509 |
| No log | 11.3846 | 444 | 0.7116 | 0.4988 | 0.7116 | 0.8436 |
| No log | 11.4359 | 446 | 0.7072 | 0.5654 | 0.7072 | 0.8410 |
| No log | 11.4872 | 448 | 0.6701 | 0.5905 | 0.6701 | 0.8186 |
| No log | 11.5385 | 450 | 0.6630 | 0.5988 | 0.6630 | 0.8142 |
| No log | 11.5897 | 452 | 0.6584 | 0.5988 | 0.6584 | 0.8114 |
| No log | 11.6410 | 454 | 0.6558 | 0.5988 | 0.6558 | 0.8098 |
| No log | 11.6923 | 456 | 0.6526 | 0.5988 | 0.6526 | 0.8079 |
| No log | 11.7436 | 458 | 0.6552 | 0.5988 | 0.6552 | 0.8095 |
| No log | 11.7949 | 460 | 0.6613 | 0.5988 | 0.6613 | 0.8132 |
| No log | 11.8462 | 462 | 0.6737 | 0.5988 | 0.6737 | 0.8208 |
| No log | 11.8974 | 464 | 0.6644 | 0.5988 | 0.6644 | 0.8151 |
| No log | 11.9487 | 466 | 0.6623 | 0.5961 | 0.6623 | 0.8138 |
| No log | 12.0 | 468 | 0.6670 | 0.5542 | 0.6670 | 0.8167 |
| No log | 12.0513 | 470 | 0.6739 | 0.5845 | 0.6739 | 0.8209 |
| No log | 12.1026 | 472 | 0.6654 | 0.5542 | 0.6654 | 0.8157 |
| No log | 12.1538 | 474 | 0.6615 | 0.5542 | 0.6615 | 0.8133 |
| No log | 12.2051 | 476 | 0.6597 | 0.5759 | 0.6597 | 0.8122 |
| No log | 12.2564 | 478 | 0.6686 | 0.5492 | 0.6686 | 0.8177 |
| No log | 12.3077 | 480 | 0.6927 | 0.5235 | 0.6927 | 0.8323 |
| No log | 12.3590 | 482 | 0.7280 | 0.5654 | 0.7280 | 0.8532 |
| No log | 12.4103 | 484 | 0.7181 | 0.5208 | 0.7181 | 0.8474 |
| No log | 12.4615 | 486 | 0.7171 | 0.5657 | 0.7171 | 0.8468 |
| No log | 12.5128 | 488 | 0.7037 | 0.6028 | 0.7037 | 0.8389 |
| No log | 12.5641 | 490 | 0.6923 | 0.5713 | 0.6923 | 0.8321 |
| No log | 12.6154 | 492 | 0.6854 | 0.5606 | 0.6854 | 0.8279 |
| No log | 12.6667 | 494 | 0.6867 | 0.4923 | 0.6867 | 0.8287 |
| No log | 12.7179 | 496 | 0.6851 | 0.5174 | 0.6851 | 0.8277 |
| No log | 12.7692 | 498 | 0.6821 | 0.5626 | 0.6821 | 0.8259 |
| 0.313 | 12.8205 | 500 | 0.6865 | 0.5736 | 0.6865 | 0.8286 |
| 0.313 | 12.8718 | 502 | 0.6896 | 0.5500 | 0.6896 | 0.8304 |
| 0.313 | 12.9231 | 504 | 0.6987 | 0.5945 | 0.6987 | 0.8359 |
| 0.313 | 12.9744 | 506 | 0.7213 | 0.6234 | 0.7213 | 0.8493 |
| 0.313 | 13.0256 | 508 | 0.7276 | 0.6256 | 0.7276 | 0.8530 |
| 0.313 | 13.0769 | 510 | 0.6891 | 0.6078 | 0.6891 | 0.8301 |
| 0.313 | 13.1282 | 512 | 0.7008 | 0.5010 | 0.7008 | 0.8372 |
| 0.313 | 13.1795 | 514 | 0.7365 | 0.5093 | 0.7365 | 0.8582 |
| 0.313 | 13.2308 | 516 | 0.7148 | 0.4988 | 0.7148 | 0.8454 |
| 0.313 | 13.2821 | 518 | 0.6964 | 0.5428 | 0.6964 | 0.8345 |
| 0.313 | 13.3333 | 520 | 0.7568 | 0.5446 | 0.7568 | 0.8699 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
phungkhaccuong/01b092b7-7e7a-ccbc-4011-c74e23a869d1
|
phungkhaccuong
| 2025-01-12T20:16:28Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:56:59Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 01b092b7-7e7a-ccbc-4011-c74e23a869d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac97fde3045e6c49_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac97fde3045e6c49_train_data.json
type:
field_instruction: title
field_output: abstract
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/01b092b7-7e7a-ccbc-4011-c74e23a869d1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac97fde3045e6c49_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 01b092b7-7e7a-ccbc-4011-c74e23a869d1
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 2.2312 |
| 8.8291 | 0.0074 | 10 | 2.1780 |
| 8.6161 | 0.0147 | 20 | 2.1303 |
| 8.1037 | 0.0221 | 30 | 2.1198 |
| 8.2349 | 0.0294 | 40 | 2.1164 |
| 8.4466 | 0.0368 | 50 | 2.1157 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/73ee70c3-e362-4fbc-bad0-95142e478684
|
nhung01
| 2025-01-12T20:15:57Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:39:47Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73ee70c3-e362-4fbc-bad0-95142e478684
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 35e42979deef2ace_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/35e42979deef2ace_train_data.json
type:
field_instruction: prompt
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/73ee70c3-e362-4fbc-bad0-95142e478684
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/35e42979deef2ace_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 73ee70c3-e362-4fbc-bad0-95142e478684
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2362
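The config above loads the base model in 8-bit (`load_in_8bit: true`); a minimal sketch of mirroring that at inference time with bitsandbytes and PEFT, assuming the adapter follows the standard PEFT layout:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "NousResearch/Nous-Hermes-2-SOLAR-10.7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
# 8-bit quantization, matching the load_in_8bit: true training setting
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nhung01/73ee70c3-e362-4fbc-bad0-95142e478684")
```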
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5874 | 0.7583 | 200 | 1.2362 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
vermoney/c07ec7b8-c7fb-4b72-97ba-27729a734d72
|
vermoney
| 2025-01-12T20:15:02Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-12T20:11:58Z
|
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c07ec7b8-c7fb-4b72-97ba-27729a734d72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b03261914fc5eea7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b03261914fc5eea7_train_data.json
type:
field_instruction: prompt
field_output: response-suggestion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vermoney/c07ec7b8-c7fb-4b72-97ba-27729a734d72
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/b03261914fc5eea7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 761b9917-3fec-41e1-81b6-128f7eff9b04
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 761b9917-3fec-41e1-81b6-128f7eff9b04
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c07ec7b8-c7fb-4b72-97ba-27729a734d72
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0593
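A rough sketch (not part of the original card) of loading the adapter and, optionally, merging the LoRA weights into the base model for standalone deployment; the output path is only an example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "vermoney/c07ec7b8-c7fb-4b72-97ba-27729a734d72")

# Optional: fold the LoRA weights into the base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("bllossom-8b-merged")  # example output directory
AutoTokenizer.from_pretrained(base_id).save_pretrained("bllossom-8b-merged")
```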
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 1.2117 |
| 1.1092 | 0.0222 | 8 | 1.1768 |
| 0.9801 | 0.0443 | 16 | 1.0897 |
| 1.0679 | 0.0665 | 24 | 1.0593 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ell-hol/zr-all-lr-fx-dv
|
ell-hol
| 2025-01-12T20:12:36Z
| 76
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-12T20:12:35Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Zara
---
# Zr All Lr Fx Dv
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Zara` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this LoRA adapter from the Hub
pipeline.load_lora_weights('ell-hol/zr-all-lr-fx-dv', weight_name='lora.safetensors')
# Generate an image; include the trigger word `Zara` in the prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
SenhorDasMoscas/acho-classification-06-01-2025-update
|
SenhorDasMoscas
| 2025-01-12T20:12:31Z
| 33
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-06T18:22:20Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
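In the absence of an official snippet, a minimal sketch is shown below. It assumes this checkpoint exposes a standard sequence-classification head, which the `bert`/`text-classification` tags suggest but the card does not confirm.
```python
from transformers import pipeline

# Assumes a standard text-classification head; the label set depends on the (undocumented) training data
classifier = pipeline(
    "text-classification",
    model="SenhorDasMoscas/acho-classification-06-01-2025-update",
)
print(classifier("example input text"))  # hypothetical input
```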
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k14_task5_organization
|
MayBashendy
| 2025-01-12T20:10:37Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T20:03:10Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k14_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k14_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6658
- Qwk: 0.4692
- Mse: 0.6658
- Rmse: 0.8160
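The Qwk / Mse / Rmse figures above can be reproduced with scikit-learn, assuming Qwk denotes quadratic weighted kappa over integer labels (the card does not spell this out):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical labels and predictions; the evaluation split itself is not published
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 1, 1, 0])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(qwk, mse, rmse)
```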
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0556 | 2 | 3.8567 | -0.0047 | 3.8567 | 1.9639 |
| No log | 0.1111 | 4 | 2.0105 | 0.0727 | 2.0105 | 1.4179 |
| No log | 0.1667 | 6 | 1.2549 | -0.0148 | 1.2549 | 1.1202 |
| No log | 0.2222 | 8 | 1.3524 | -0.0245 | 1.3524 | 1.1629 |
| No log | 0.2778 | 10 | 2.0829 | 0.0342 | 2.0829 | 1.4432 |
| No log | 0.3333 | 12 | 1.9277 | 0.0733 | 1.9277 | 1.3884 |
| No log | 0.3889 | 14 | 1.3579 | -0.0212 | 1.3579 | 1.1653 |
| No log | 0.4444 | 16 | 1.1292 | 0.2068 | 1.1292 | 1.0626 |
| No log | 0.5 | 18 | 1.0421 | 0.2140 | 1.0421 | 1.0209 |
| No log | 0.5556 | 20 | 1.0086 | 0.1504 | 1.0086 | 1.0043 |
| No log | 0.6111 | 22 | 1.0858 | 0.2873 | 1.0858 | 1.0420 |
| No log | 0.6667 | 24 | 1.0611 | 0.4051 | 1.0611 | 1.0301 |
| No log | 0.7222 | 26 | 1.0456 | 0.3014 | 1.0456 | 1.0226 |
| No log | 0.7778 | 28 | 1.1047 | 0.2004 | 1.1047 | 1.0510 |
| No log | 0.8333 | 30 | 1.1097 | 0.2030 | 1.1097 | 1.0534 |
| No log | 0.8889 | 32 | 1.0770 | 0.2100 | 1.0770 | 1.0378 |
| No log | 0.9444 | 34 | 1.0185 | 0.3540 | 1.0185 | 1.0092 |
| No log | 1.0 | 36 | 1.0004 | 0.2416 | 1.0004 | 1.0002 |
| No log | 1.0556 | 38 | 1.0030 | 0.2978 | 1.0030 | 1.0015 |
| No log | 1.1111 | 40 | 0.9584 | 0.2214 | 0.9584 | 0.9790 |
| No log | 1.1667 | 42 | 1.0020 | 0.2611 | 1.0020 | 1.0010 |
| No log | 1.2222 | 44 | 0.9943 | 0.2196 | 0.9943 | 0.9971 |
| No log | 1.2778 | 46 | 0.9484 | 0.2114 | 0.9484 | 0.9738 |
| No log | 1.3333 | 48 | 0.9981 | 0.2077 | 0.9981 | 0.9991 |
| No log | 1.3889 | 50 | 1.1638 | 0.2188 | 1.1638 | 1.0788 |
| No log | 1.4444 | 52 | 1.0401 | 0.2604 | 1.0401 | 1.0199 |
| No log | 1.5 | 54 | 0.8369 | 0.4 | 0.8369 | 0.9148 |
| No log | 1.5556 | 56 | 0.9467 | 0.3541 | 0.9467 | 0.9730 |
| No log | 1.6111 | 58 | 0.9212 | 0.3666 | 0.9212 | 0.9598 |
| No log | 1.6667 | 60 | 0.8263 | 0.3876 | 0.8263 | 0.9090 |
| No log | 1.7222 | 62 | 0.8329 | 0.3961 | 0.8329 | 0.9126 |
| No log | 1.7778 | 64 | 0.9074 | 0.4025 | 0.9074 | 0.9526 |
| No log | 1.8333 | 66 | 0.9100 | 0.4273 | 0.9100 | 0.9539 |
| No log | 1.8889 | 68 | 0.7956 | 0.4516 | 0.7956 | 0.8920 |
| No log | 1.9444 | 70 | 0.7761 | 0.4019 | 0.7761 | 0.8810 |
| No log | 2.0 | 72 | 0.7771 | 0.4086 | 0.7771 | 0.8816 |
| No log | 2.0556 | 74 | 0.8137 | 0.3071 | 0.8137 | 0.9021 |
| No log | 2.1111 | 76 | 0.8307 | 0.3476 | 0.8307 | 0.9114 |
| No log | 2.1667 | 78 | 0.8999 | 0.3569 | 0.8999 | 0.9487 |
| No log | 2.2222 | 80 | 0.9733 | 0.3308 | 0.9733 | 0.9865 |
| No log | 2.2778 | 82 | 0.9681 | 0.4533 | 0.9681 | 0.9839 |
| No log | 2.3333 | 84 | 0.8659 | 0.5176 | 0.8659 | 0.9305 |
| No log | 2.3889 | 86 | 0.8541 | 0.3721 | 0.8541 | 0.9242 |
| No log | 2.4444 | 88 | 0.9335 | 0.3654 | 0.9335 | 0.9662 |
| No log | 2.5 | 90 | 1.0241 | 0.2794 | 1.0241 | 1.0120 |
| No log | 2.5556 | 92 | 0.9529 | 0.2618 | 0.9529 | 0.9762 |
| No log | 2.6111 | 94 | 1.0164 | 0.2171 | 1.0164 | 1.0081 |
| No log | 2.6667 | 96 | 1.0313 | 0.2465 | 1.0313 | 1.0155 |
| No log | 2.7222 | 98 | 0.8804 | 0.3414 | 0.8804 | 0.9383 |
| No log | 2.7778 | 100 | 0.8379 | 0.4301 | 0.8379 | 0.9154 |
| No log | 2.8333 | 102 | 0.9872 | 0.3400 | 0.9872 | 0.9936 |
| No log | 2.8889 | 104 | 1.0138 | 0.3663 | 1.0138 | 1.0069 |
| No log | 2.9444 | 106 | 1.1835 | 0.3293 | 1.1835 | 1.0879 |
| No log | 3.0 | 108 | 1.0381 | 0.3744 | 1.0381 | 1.0189 |
| No log | 3.0556 | 110 | 0.7990 | 0.5680 | 0.7990 | 0.8938 |
| No log | 3.1111 | 112 | 0.8047 | 0.4734 | 0.8047 | 0.8970 |
| No log | 3.1667 | 114 | 0.7846 | 0.4644 | 0.7846 | 0.8858 |
| No log | 3.2222 | 116 | 0.8866 | 0.2873 | 0.8866 | 0.9416 |
| No log | 3.2778 | 118 | 0.8347 | 0.3658 | 0.8347 | 0.9136 |
| No log | 3.3333 | 120 | 0.7583 | 0.4822 | 0.7583 | 0.8708 |
| No log | 3.3889 | 122 | 0.7847 | 0.4472 | 0.7847 | 0.8859 |
| No log | 3.4444 | 124 | 0.7473 | 0.4914 | 0.7473 | 0.8645 |
| No log | 3.5 | 126 | 0.8041 | 0.4697 | 0.8041 | 0.8967 |
| No log | 3.5556 | 128 | 0.7461 | 0.5572 | 0.7461 | 0.8638 |
| No log | 3.6111 | 130 | 0.6856 | 0.5302 | 0.6856 | 0.8280 |
| No log | 3.6667 | 132 | 0.6942 | 0.5428 | 0.6942 | 0.8332 |
| No log | 3.7222 | 134 | 0.6679 | 0.5635 | 0.6679 | 0.8172 |
| No log | 3.7778 | 136 | 0.6672 | 0.5635 | 0.6672 | 0.8168 |
| No log | 3.8333 | 138 | 0.6548 | 0.5635 | 0.6548 | 0.8092 |
| No log | 3.8889 | 140 | 0.6604 | 0.6001 | 0.6604 | 0.8126 |
| No log | 3.9444 | 142 | 0.6668 | 0.5302 | 0.6668 | 0.8166 |
| No log | 4.0 | 144 | 0.8030 | 0.5359 | 0.8030 | 0.8961 |
| No log | 4.0556 | 146 | 0.9382 | 0.4764 | 0.9382 | 0.9686 |
| No log | 4.1111 | 148 | 0.8548 | 0.4752 | 0.8548 | 0.9246 |
| No log | 4.1667 | 150 | 0.6831 | 0.4960 | 0.6831 | 0.8265 |
| No log | 4.2222 | 152 | 0.7376 | 0.4560 | 0.7376 | 0.8589 |
| No log | 4.2778 | 154 | 0.7601 | 0.4162 | 0.7601 | 0.8719 |
| No log | 4.3333 | 156 | 0.7126 | 0.5108 | 0.7126 | 0.8442 |
| No log | 4.3889 | 158 | 0.6749 | 0.6127 | 0.6749 | 0.8215 |
| No log | 4.4444 | 160 | 0.7123 | 0.5083 | 0.7123 | 0.8440 |
| No log | 4.5 | 162 | 0.9315 | 0.3847 | 0.9315 | 0.9651 |
| No log | 4.5556 | 164 | 1.0390 | 0.3744 | 1.0390 | 1.0193 |
| No log | 4.6111 | 166 | 0.8702 | 0.4284 | 0.8702 | 0.9329 |
| No log | 4.6667 | 168 | 0.7008 | 0.5712 | 0.7008 | 0.8372 |
| No log | 4.7222 | 170 | 0.7060 | 0.5202 | 0.7060 | 0.8402 |
| No log | 4.7778 | 172 | 0.6833 | 0.5060 | 0.6833 | 0.8266 |
| No log | 4.8333 | 174 | 0.7466 | 0.4714 | 0.7466 | 0.8641 |
| No log | 4.8889 | 176 | 0.8303 | 0.3864 | 0.8303 | 0.9112 |
| No log | 4.9444 | 178 | 0.8768 | 0.4407 | 0.8768 | 0.9364 |
| No log | 5.0 | 180 | 0.7330 | 0.5400 | 0.7330 | 0.8562 |
| No log | 5.0556 | 182 | 0.6685 | 0.5432 | 0.6685 | 0.8176 |
| No log | 5.1111 | 184 | 0.6778 | 0.5441 | 0.6778 | 0.8233 |
| No log | 5.1667 | 186 | 0.6483 | 0.6475 | 0.6483 | 0.8052 |
| No log | 5.2222 | 188 | 0.6542 | 0.6465 | 0.6542 | 0.8089 |
| No log | 5.2778 | 190 | 0.6345 | 0.5432 | 0.6345 | 0.7966 |
| No log | 5.3333 | 192 | 0.7223 | 0.4893 | 0.7223 | 0.8499 |
| No log | 5.3889 | 194 | 0.8536 | 0.4841 | 0.8536 | 0.9239 |
| No log | 5.4444 | 196 | 0.7374 | 0.5459 | 0.7374 | 0.8587 |
| No log | 5.5 | 198 | 0.6579 | 0.5386 | 0.6579 | 0.8111 |
| No log | 5.5556 | 200 | 0.7636 | 0.5356 | 0.7636 | 0.8739 |
| No log | 5.6111 | 202 | 0.8157 | 0.4681 | 0.8157 | 0.9032 |
| No log | 5.6667 | 204 | 0.7589 | 0.4352 | 0.7589 | 0.8712 |
| No log | 5.7222 | 206 | 0.7304 | 0.5171 | 0.7304 | 0.8547 |
| No log | 5.7778 | 208 | 0.8545 | 0.4578 | 0.8545 | 0.9244 |
| No log | 5.8333 | 210 | 0.8673 | 0.4240 | 0.8673 | 0.9313 |
| No log | 5.8889 | 212 | 0.7512 | 0.4998 | 0.7512 | 0.8667 |
| No log | 5.9444 | 214 | 0.6793 | 0.5156 | 0.6793 | 0.8242 |
| No log | 6.0 | 216 | 0.7309 | 0.5433 | 0.7309 | 0.8549 |
| No log | 6.0556 | 218 | 0.7435 | 0.5618 | 0.7435 | 0.8622 |
| No log | 6.1111 | 220 | 0.6781 | 0.5605 | 0.6781 | 0.8235 |
| No log | 6.1667 | 222 | 0.6499 | 0.5523 | 0.6499 | 0.8062 |
| No log | 6.2222 | 224 | 0.6521 | 0.5647 | 0.6521 | 0.8075 |
| No log | 6.2778 | 226 | 0.6465 | 0.5626 | 0.6465 | 0.8041 |
| No log | 6.3333 | 228 | 0.6457 | 0.5274 | 0.6457 | 0.8036 |
| No log | 6.3889 | 230 | 0.6560 | 0.6073 | 0.6560 | 0.8099 |
| No log | 6.4444 | 232 | 0.6533 | 0.5505 | 0.6533 | 0.8083 |
| No log | 6.5 | 234 | 0.6527 | 0.5523 | 0.6527 | 0.8079 |
| No log | 6.5556 | 236 | 0.6646 | 0.6154 | 0.6646 | 0.8153 |
| No log | 6.6111 | 238 | 0.6942 | 0.6415 | 0.6942 | 0.8332 |
| No log | 6.6667 | 240 | 0.6928 | 0.6051 | 0.6928 | 0.8323 |
| No log | 6.7222 | 242 | 0.6591 | 0.6311 | 0.6591 | 0.8118 |
| No log | 6.7778 | 244 | 0.6321 | 0.6076 | 0.6321 | 0.7951 |
| No log | 6.8333 | 246 | 0.6480 | 0.5830 | 0.6480 | 0.8050 |
| No log | 6.8889 | 248 | 0.6462 | 0.6144 | 0.6462 | 0.8039 |
| No log | 6.9444 | 250 | 0.6119 | 0.6046 | 0.6119 | 0.7823 |
| No log | 7.0 | 252 | 0.6487 | 0.6479 | 0.6487 | 0.8054 |
| No log | 7.0556 | 254 | 0.7613 | 0.5120 | 0.7613 | 0.8725 |
| No log | 7.1111 | 256 | 0.7467 | 0.5120 | 0.7467 | 0.8641 |
| No log | 7.1667 | 258 | 0.6497 | 0.5774 | 0.6497 | 0.8060 |
| No log | 7.2222 | 260 | 0.6021 | 0.6606 | 0.6021 | 0.7760 |
| No log | 7.2778 | 262 | 0.6476 | 0.5536 | 0.6476 | 0.8047 |
| No log | 7.3333 | 264 | 0.6541 | 0.5635 | 0.6541 | 0.8088 |
| No log | 7.3889 | 266 | 0.6790 | 0.5558 | 0.6790 | 0.8240 |
| No log | 7.4444 | 268 | 0.7201 | 0.5400 | 0.7201 | 0.8486 |
| No log | 7.5 | 270 | 0.7019 | 0.5688 | 0.7019 | 0.8378 |
| No log | 7.5556 | 272 | 0.6896 | 0.5415 | 0.6896 | 0.8304 |
| No log | 7.6111 | 274 | 0.7182 | 0.5312 | 0.7182 | 0.8475 |
| No log | 7.6667 | 276 | 0.7048 | 0.5300 | 0.7048 | 0.8395 |
| No log | 7.7222 | 278 | 0.7254 | 0.5748 | 0.7254 | 0.8517 |
| No log | 7.7778 | 280 | 0.7108 | 0.6035 | 0.7108 | 0.8431 |
| No log | 7.8333 | 282 | 0.7227 | 0.5748 | 0.7227 | 0.8501 |
| No log | 7.8889 | 284 | 0.6794 | 0.5936 | 0.6794 | 0.8242 |
| No log | 7.9444 | 286 | 0.6504 | 0.5887 | 0.6504 | 0.8065 |
| No log | 8.0 | 288 | 0.6483 | 0.6154 | 0.6483 | 0.8052 |
| No log | 8.0556 | 290 | 0.6534 | 0.6262 | 0.6534 | 0.8083 |
| No log | 8.1111 | 292 | 0.6549 | 0.5932 | 0.6549 | 0.8093 |
| No log | 8.1667 | 294 | 0.6574 | 0.6325 | 0.6574 | 0.8108 |
| No log | 8.2222 | 296 | 0.6522 | 0.6113 | 0.6522 | 0.8076 |
| No log | 8.2778 | 298 | 0.6485 | 0.6335 | 0.6485 | 0.8053 |
| No log | 8.3333 | 300 | 0.6521 | 0.5722 | 0.6521 | 0.8075 |
| No log | 8.3889 | 302 | 0.6817 | 0.5640 | 0.6817 | 0.8256 |
| No log | 8.4444 | 304 | 0.6939 | 0.4473 | 0.6939 | 0.8330 |
| No log | 8.5 | 306 | 0.6712 | 0.5432 | 0.6712 | 0.8193 |
| No log | 8.5556 | 308 | 0.6979 | 0.4510 | 0.6979 | 0.8354 |
| No log | 8.6111 | 310 | 0.6809 | 0.4868 | 0.6809 | 0.8252 |
| No log | 8.6667 | 312 | 0.6536 | 0.5856 | 0.6536 | 0.8084 |
| No log | 8.7222 | 314 | 0.6560 | 0.6165 | 0.6560 | 0.8100 |
| No log | 8.7778 | 316 | 0.6687 | 0.6043 | 0.6687 | 0.8177 |
| No log | 8.8333 | 318 | 0.6792 | 0.5375 | 0.6792 | 0.8241 |
| No log | 8.8889 | 320 | 0.6884 | 0.4778 | 0.6884 | 0.8297 |
| No log | 8.9444 | 322 | 0.7004 | 0.4888 | 0.7004 | 0.8369 |
| No log | 9.0 | 324 | 0.7251 | 0.4981 | 0.7251 | 0.8515 |
| No log | 9.0556 | 326 | 0.7389 | 0.5446 | 0.7389 | 0.8596 |
| No log | 9.1111 | 328 | 0.7136 | 0.4858 | 0.7136 | 0.8447 |
| No log | 9.1667 | 330 | 0.6789 | 0.4995 | 0.6789 | 0.8240 |
| No log | 9.2222 | 332 | 0.6778 | 0.4858 | 0.6778 | 0.8233 |
| No log | 9.2778 | 334 | 0.7040 | 0.4966 | 0.7040 | 0.8391 |
| No log | 9.3333 | 336 | 0.7311 | 0.5385 | 0.7311 | 0.8550 |
| No log | 9.3889 | 338 | 0.7625 | 0.5672 | 0.7625 | 0.8732 |
| No log | 9.4444 | 340 | 0.7041 | 0.5292 | 0.7041 | 0.8391 |
| No log | 9.5 | 342 | 0.6118 | 0.6096 | 0.6118 | 0.7822 |
| No log | 9.5556 | 344 | 0.5940 | 0.6096 | 0.5940 | 0.7707 |
| No log | 9.6111 | 346 | 0.6146 | 0.5712 | 0.6146 | 0.7840 |
| No log | 9.6667 | 348 | 0.6602 | 0.6053 | 0.6602 | 0.8125 |
| No log | 9.7222 | 350 | 0.6863 | 0.5905 | 0.6863 | 0.8284 |
| No log | 9.7778 | 352 | 0.6276 | 0.5932 | 0.6276 | 0.7922 |
| No log | 9.8333 | 354 | 0.6096 | 0.5647 | 0.6096 | 0.7808 |
| No log | 9.8889 | 356 | 0.6545 | 0.5242 | 0.6545 | 0.8090 |
| No log | 9.9444 | 358 | 0.6652 | 0.5242 | 0.6652 | 0.8156 |
| No log | 10.0 | 360 | 0.6239 | 0.5199 | 0.6239 | 0.7898 |
| No log | 10.0556 | 362 | 0.6163 | 0.6096 | 0.6163 | 0.7851 |
| No log | 10.1111 | 364 | 0.6531 | 0.6054 | 0.6531 | 0.8082 |
| No log | 10.1667 | 366 | 0.6405 | 0.6065 | 0.6405 | 0.8003 |
| No log | 10.2222 | 368 | 0.6097 | 0.5724 | 0.6097 | 0.7808 |
| No log | 10.2778 | 370 | 0.6081 | 0.5505 | 0.6081 | 0.7798 |
| No log | 10.3333 | 372 | 0.6033 | 0.5724 | 0.6033 | 0.7767 |
| No log | 10.3889 | 374 | 0.5995 | 0.6065 | 0.5995 | 0.7743 |
| No log | 10.4444 | 376 | 0.6005 | 0.6065 | 0.6005 | 0.7749 |
| No log | 10.5 | 378 | 0.6027 | 0.6407 | 0.6027 | 0.7763 |
| No log | 10.5556 | 380 | 0.5776 | 0.6186 | 0.5776 | 0.7600 |
| No log | 10.6111 | 382 | 0.5751 | 0.5988 | 0.5751 | 0.7583 |
| No log | 10.6667 | 384 | 0.5997 | 0.6119 | 0.5997 | 0.7744 |
| No log | 10.7222 | 386 | 0.5899 | 0.6119 | 0.5899 | 0.7680 |
| No log | 10.7778 | 388 | 0.5658 | 0.6796 | 0.5658 | 0.7522 |
| No log | 10.8333 | 390 | 0.5950 | 0.6597 | 0.5950 | 0.7713 |
| No log | 10.8889 | 392 | 0.5903 | 0.6639 | 0.5903 | 0.7683 |
| No log | 10.9444 | 394 | 0.5907 | 0.5882 | 0.5907 | 0.7685 |
| No log | 11.0 | 396 | 0.5926 | 0.5659 | 0.5926 | 0.7698 |
| No log | 11.0556 | 398 | 0.5859 | 0.6427 | 0.5859 | 0.7655 |
| No log | 11.1111 | 400 | 0.6109 | 0.6479 | 0.6109 | 0.7816 |
| No log | 11.1667 | 402 | 0.6601 | 0.5846 | 0.6601 | 0.8124 |
| No log | 11.2222 | 404 | 0.6575 | 0.5521 | 0.6575 | 0.8108 |
| No log | 11.2778 | 406 | 0.6268 | 0.5640 | 0.6268 | 0.7917 |
| No log | 11.3333 | 408 | 0.6207 | 0.5644 | 0.6207 | 0.7878 |
| No log | 11.3889 | 410 | 0.6283 | 0.4554 | 0.6283 | 0.7927 |
| No log | 11.4444 | 412 | 0.6256 | 0.4554 | 0.6256 | 0.7910 |
| No log | 11.5 | 414 | 0.6096 | 0.5288 | 0.6096 | 0.7808 |
| No log | 11.5556 | 416 | 0.5969 | 0.5939 | 0.5969 | 0.7726 |
| No log | 11.6111 | 418 | 0.5971 | 0.6133 | 0.5971 | 0.7727 |
| No log | 11.6667 | 420 | 0.6110 | 0.6500 | 0.6110 | 0.7817 |
| No log | 11.7222 | 422 | 0.6229 | 0.6564 | 0.6229 | 0.7892 |
| No log | 11.7778 | 424 | 0.6050 | 0.6133 | 0.6050 | 0.7778 |
| No log | 11.8333 | 426 | 0.6056 | 0.5505 | 0.6056 | 0.7782 |
| No log | 11.8889 | 428 | 0.6218 | 0.5428 | 0.6218 | 0.7885 |
| No log | 11.9444 | 430 | 0.6260 | 0.5555 | 0.6260 | 0.7912 |
| No log | 12.0 | 432 | 0.6217 | 0.6013 | 0.6217 | 0.7885 |
| No log | 12.0556 | 434 | 0.6157 | 0.6219 | 0.6157 | 0.7846 |
| No log | 12.1111 | 436 | 0.6028 | 0.6606 | 0.6028 | 0.7764 |
| No log | 12.1667 | 438 | 0.6124 | 0.6632 | 0.6124 | 0.7825 |
| No log | 12.2222 | 440 | 0.7129 | 0.5572 | 0.7129 | 0.8443 |
| No log | 12.2778 | 442 | 0.7703 | 0.5417 | 0.7703 | 0.8777 |
| No log | 12.3333 | 444 | 0.7334 | 0.5543 | 0.7334 | 0.8564 |
| No log | 12.3889 | 446 | 0.6798 | 0.5888 | 0.6798 | 0.8245 |
| No log | 12.4444 | 448 | 0.6413 | 0.5949 | 0.6413 | 0.8008 |
| No log | 12.5 | 450 | 0.6256 | 0.5784 | 0.6256 | 0.7910 |
| No log | 12.5556 | 452 | 0.6255 | 0.5784 | 0.6255 | 0.7909 |
| No log | 12.6111 | 454 | 0.6456 | 0.5823 | 0.6456 | 0.8035 |
| No log | 12.6667 | 456 | 0.6693 | 0.5833 | 0.6693 | 0.8181 |
| No log | 12.7222 | 458 | 0.6561 | 0.6177 | 0.6561 | 0.8100 |
| No log | 12.7778 | 460 | 0.6258 | 0.6335 | 0.6258 | 0.7911 |
| No log | 12.8333 | 462 | 0.6302 | 0.6133 | 0.6302 | 0.7939 |
| No log | 12.8889 | 464 | 0.6594 | 0.6133 | 0.6594 | 0.8120 |
| No log | 12.9444 | 466 | 0.6565 | 0.6133 | 0.6565 | 0.8102 |
| No log | 13.0 | 468 | 0.6298 | 0.5712 | 0.6298 | 0.7936 |
| No log | 13.0556 | 470 | 0.6144 | 0.5498 | 0.6144 | 0.7839 |
| No log | 13.1111 | 472 | 0.6108 | 0.5939 | 0.6108 | 0.7815 |
| No log | 13.1667 | 474 | 0.6064 | 0.5939 | 0.6064 | 0.7787 |
| No log | 13.2222 | 476 | 0.6052 | 0.5939 | 0.6052 | 0.7779 |
| No log | 13.2778 | 478 | 0.6047 | 0.5939 | 0.6047 | 0.7776 |
| No log | 13.3333 | 480 | 0.6035 | 0.5939 | 0.6035 | 0.7769 |
| No log | 13.3889 | 482 | 0.6271 | 0.5329 | 0.6271 | 0.7919 |
| No log | 13.4444 | 484 | 0.6543 | 0.5343 | 0.6543 | 0.8089 |
| No log | 13.5 | 486 | 0.6666 | 0.5228 | 0.6666 | 0.8164 |
| No log | 13.5556 | 488 | 0.6569 | 0.5228 | 0.6569 | 0.8105 |
| No log | 13.6111 | 490 | 0.6600 | 0.5112 | 0.6600 | 0.8124 |
| No log | 13.6667 | 492 | 0.6355 | 0.5328 | 0.6355 | 0.7972 |
| No log | 13.7222 | 494 | 0.6101 | 0.5626 | 0.6101 | 0.7811 |
| No log | 13.7778 | 496 | 0.6057 | 0.5939 | 0.6057 | 0.7783 |
| No log | 13.8333 | 498 | 0.6061 | 0.6133 | 0.6061 | 0.7785 |
| 0.2855 | 13.8889 | 500 | 0.6159 | 0.5412 | 0.6159 | 0.7848 |
| 0.2855 | 13.9444 | 502 | 0.6816 | 0.5112 | 0.6816 | 0.8256 |
| 0.2855 | 14.0 | 504 | 0.8116 | 0.4686 | 0.8116 | 0.9009 |
| 0.2855 | 14.0556 | 506 | 0.8213 | 0.4670 | 0.8213 | 0.9063 |
| 0.2855 | 14.1111 | 508 | 0.7411 | 0.4755 | 0.7411 | 0.8609 |
| 0.2855 | 14.1667 | 510 | 0.6658 | 0.4692 | 0.6658 | 0.8160 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
chchen/Llama-3.1-8B-Instruct-SFT-500
|
chchen
| 2025-01-12T20:08:32Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-12T20:01:34Z
|
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-SFT-500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-SFT-500
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_sft_500 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0781
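The card does not include usage code; below is a minimal sketch of attaching the adapter to the instruct base model and prompting it through the chat template. The bct_non_cot_sft_500 prompt format is not documented here, so the message is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "chchen/Llama-3.1-8B-Instruct-SFT-500")

# Placeholder message; replace with a prompt matching the training data
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```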
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8493 | 1.7778 | 50 | 0.8185 |
| 0.1595 | 3.5556 | 100 | 0.1123 |
| 0.0797 | 5.3333 | 150 | 0.0811 |
| 0.0997 | 7.1111 | 200 | 0.0789 |
| 0.0896 | 8.8889 | 250 | 0.0781 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
marialvsantiago/001e5a2b-be1d-4d67-af2c-1a4f49b19281
|
marialvsantiago
| 2025-01-12T20:06:14Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:57:20Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 001e5a2b-be1d-4d67-af2c-1a4f49b19281
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac97fde3045e6c49_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac97fde3045e6c49_train_data.json
type:
field_instruction: title
field_output: abstract
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: marialvsantiago/001e5a2b-be1d-4d67-af2c-1a4f49b19281
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac97fde3045e6c49_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c937bed-41f3-4a3f-afb4-4db0e61eff26
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 001e5a2b-be1d-4d67-af2c-1a4f49b19281
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | nan |
| 0.0 | 0.0059 | 8 | nan |
| 0.0 | 0.0118 | 16 | nan |
| 0.0 | 0.0176 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk-out/377d5b04-4449-464f-9d43-f479c577f5f0
|
kostiantynk-out
| 2025-01-12T20:01:11Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:59:37Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 377d5b04-4449-464f-9d43-f479c577f5f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1516d1ee6d08c7db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1516d1ee6d08c7db_train_data.json
type:
field_input: p
field_instruction: asks-for
field_output: explanation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/377d5b04-4449-464f-9d43-f479c577f5f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/1516d1ee6d08c7db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1849022f-60a5-4fce-8dec-ce632a995207
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1849022f-60a5-4fce-8dec-ce632a995207
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 377d5b04-4449-464f-9d43-f479c577f5f0
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3282
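The axolotl config above concatenates the `asks-for` and `p` fields as `'{instruction} {input}'`; a minimal inference sketch that mirrors that format (the example strings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "kostiantynk-out/377d5b04-4449-464f-9d43-f479c577f5f0")

instruction = "Explain what the following code asks for"  # stands in for the `asks-for` field
p = "def add(a, b): return a + b"                         # stands in for the `p` field
prompt = f"{instruction} {p}"                             # mirrors format: '{instruction} {input}'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```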
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5635 | 0.0009 | 1 | 4.9762 |
| 5.135 | 0.0026 | 3 | 4.9728 |
| 5.2364 | 0.0053 | 6 | 4.8602 |
| 5.4171 | 0.0079 | 9 | 4.3282 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AriKu999/autotrain-09geq-q069u
|
AriKu999
| 2025-01-12T20:01:08Z
| 9
| 0
| null |
[
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"region:us"
] |
text-classification
| 2025-01-12T19:10:48Z
|
---
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-multilingual-cased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.1588923931121826
f1_macro: 0.47992976001676585
f1_micro: 0.62
f1_weighted: 0.5779788692093073
precision_macro: 0.5334613415258577
precision_micro: 0.62
precision_weighted: 0.5919508448540707
recall_macro: 0.5009906477566362
recall_micro: 0.62
recall_weighted: 0.62
accuracy: 0.62
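The macro / micro / weighted variants reported above are standard scikit-learn averaging modes; a small sketch on hypothetical labels shows how they relate:
```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical labels; the actual validation split is not published with this card
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 1, 1, 0]

for avg in ("macro", "micro", "weighted"):
    print(
        avg,
        f1_score(y_true, y_pred, average=avg),
        precision_score(y_true, y_pred, average=avg, zero_division=0),
        recall_score(y_true, y_pred, average=avg, zero_division=0),
    )
print("accuracy", accuracy_score(y_true, y_pred))
```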
|
saifkabeer/scottrsg
|
saifkabeer
| 2025-01-12T20:00:50Z
| 23
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-12T19:13:35Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: scottrsg
---
# Scottrsg
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `scottrsg` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this LoRA adapter from the Hub
pipeline.load_lora_weights('saifkabeer/scottrsg', weight_name='lora.safetensors')
# Generate an image; include the trigger word `scottrsg` in the prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
thaffggg/dd9efce6-2343-4f78-a69f-6dccfad4eea2
|
thaffggg
| 2025-01-12T20:00:28Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T18:35:10Z
|
---
library_name: peft
license: apache-2.0
base_model: beomi/polyglot-ko-12.8b-safetensors
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd9efce6-2343-4f78-a69f-6dccfad4eea2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: beomi/polyglot-ko-12.8b-safetensors
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b614758bc251daf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b614758bc251daf_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/dd9efce6-2343-4f78-a69f-6dccfad4eea2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9b614758bc251daf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc82f8eb-5dea-492d-bdd6-fe8377922ab6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc82f8eb-5dea-492d-bdd6-fe8377922ab6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dd9efce6-2343-4f78-a69f-6dccfad4eea2
This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.009 | 0.0282 | 200 | 0.7661 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LucileFavero/AM_model_AAEC_1
|
LucileFavero
| 2025-01-12T19:58:23Z
| 25
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-12T19:57:21Z
|
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LucileFavero
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lesso08/549cb4b2-b770-4326-9eaa-113ac962c8bd
|
lesso08
| 2025-01-12T19:57:59Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T18:34:27Z
|
---
library_name: peft
license: apache-2.0
base_model: beomi/polyglot-ko-12.8b-safetensors
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 549cb4b2-b770-4326-9eaa-113ac962c8bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: beomi/polyglot-ko-12.8b-safetensors
bf16: true
chat_template: llama3
datasets:
- data_files:
- 9b614758bc251daf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b614758bc251daf_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso08/549cb4b2-b770-4326-9eaa-113ac962c8bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/9b614758bc251daf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc82f8eb-5dea-492d-bdd6-fe8377922ab6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc82f8eb-5dea-492d-bdd6-fe8377922ab6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 549cb4b2-b770-4326-9eaa-113ac962c8bd
This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.2829 | 0.0001 | 1 | 1.1717 |
| 4.1689 | 0.0007 | 5 | 1.1591 |
| 4.8329 | 0.0014 | 10 | 1.0454 |
| 4.1252 | 0.0021 | 15 | 0.9561 |
| 3.6456 | 0.0028 | 20 | 0.9251 |
| 3.7173 | 0.0035 | 25 | 0.9208 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
3004skylar/robin_lora_xl
|
3004skylar
| 2025-01-12T19:57:45Z
| 33
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] |
text-to-image
| 2025-01-12T19:56:12Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: None
output:
url: >-
images/692a9c0f277de7dd6e5eb8f722286c48ac309d3c12948c5bde679158234ba185.png
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---
# robin
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/3004skylar/robin_lora_xl/tree/main) them in the Files & versions tab.
|
duyphu/6a675f5b-c4e9-4aa2-ea2c-5d906bf3bf4e
|
duyphu
| 2025-01-12T19:55:04Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"region:us"
] | null | 2025-01-12T19:46:19Z
|
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a675f5b-c4e9-4aa2-ea2c-5d906bf3bf4e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb0d93ffd295c2a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb0d93ffd295c2a8_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/6a675f5b-c4e9-4aa2-ea2c-5d906bf3bf4e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb0d93ffd295c2a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a675f5b-c4e9-4aa2-ea2c-5d906bf3bf4e
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1013
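A minimal inference sketch (not part of the original card) showing one way to attach this LoRA adapter to its base model with PEFT; the prompt is arbitrary:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "jhflow/mistral7b-lora-multi-turn-v2"
adapter_id = "duyphu/6a675f5b-c4e9-4aa2-ea2c-5d906bf3bf4e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Write a short haiku about the sea.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```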
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0092 | 1 | 2.4488 |
| 8.1093 | 0.0924 | 10 | 2.2831 |
| 8.3922 | 0.1848 | 20 | 2.1757 |
| 8.6908 | 0.2771 | 30 | 2.1262 |
| 7.8807 | 0.3695 | 40 | 2.1058 |
| 7.9822 | 0.4619 | 50 | 2.1013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF
|
Triangle104
| 2025-01-12T19:54:09Z
| 35
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"base_model:quantized:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-12T19:52:58Z
|
---
base_model: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated`](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) for more details on the model.
---
Model details:
-
Small but smart.
Fine-tuned on a vast dataset of conversations.
Able to generate human-like text with high performance for its size.
Very versatile for its size and parameter count, offering capability almost as good as Llama 3.1 8B Instruct.
Feel free to check it out!
[This model was trained for 5 hours on a T4 GPU with 15 GB VRAM.]
Developed by: Meta AI
Fine-Tuned by: Devarui379
Model type: Transformers
Language(s) (NLP): English
License: cc-by-4.0
Model Sources [optional]
Base model: meta-llama/Llama-3.2-3B-Instruct
Repository: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
Demo: Use LM Studio with the Quantized version
Uses
Use the desired system prompt when running the model in LM Studio. The optimal chat template seems to be Jinja, but feel free to experiment.
Technical Specifications
Model Architecture and Objective
Llama 3.2
Hardware
NVIDIA TESLA T4 GPU 15GB VRAM
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q8_0-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q8_0.gguf -c 2048
```
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k12_task5_organization
|
MayBashendy
| 2025-01-12T19:53:09Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T19:43:59Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k12_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k12_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Qwk: 0.5028
- Mse: 0.6880
- Rmse: 0.8294
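The card does not define Qwk; assuming it denotes the quadratic weighted kappa commonly reported for ordinal classification, the three metric columns can be reproduced with scikit-learn (the labels below are made up purely to illustrate the calls):
```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Made-up ordinal labels, only to show how the reported columns are computed.
y_true = [0, 1, 2, 2, 3, 1]
y_pred = [0, 1, 1, 2, 3, 2]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={mse ** 0.5:.4f}")
```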
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0625 | 2 | 3.8998 | 0.0124 | 3.8998 | 1.9748 |
| No log | 0.125 | 4 | 1.8693 | 0.0318 | 1.8693 | 1.3672 |
| No log | 0.1875 | 6 | 1.1996 | -0.0627 | 1.1996 | 1.0953 |
| No log | 0.25 | 8 | 1.0795 | 0.2441 | 1.0795 | 1.0390 |
| No log | 0.3125 | 10 | 1.0976 | 0.1418 | 1.0976 | 1.0476 |
| No log | 0.375 | 12 | 1.2351 | 0.0249 | 1.2351 | 1.1114 |
| No log | 0.4375 | 14 | 1.4945 | -0.0858 | 1.4945 | 1.2225 |
| No log | 0.5 | 16 | 1.6856 | -0.0411 | 1.6856 | 1.2983 |
| No log | 0.5625 | 18 | 1.5115 | -0.0560 | 1.5115 | 1.2294 |
| No log | 0.625 | 20 | 1.2983 | -0.0328 | 1.2983 | 1.1394 |
| No log | 0.6875 | 22 | 1.1399 | 0.1268 | 1.1399 | 1.0676 |
| No log | 0.75 | 24 | 1.0546 | 0.2416 | 1.0546 | 1.0270 |
| No log | 0.8125 | 26 | 1.0514 | 0.0762 | 1.0514 | 1.0254 |
| No log | 0.875 | 28 | 1.0289 | 0.1076 | 1.0289 | 1.0143 |
| No log | 0.9375 | 30 | 1.0153 | 0.4051 | 1.0153 | 1.0076 |
| No log | 1.0 | 32 | 1.0239 | 0.2343 | 1.0239 | 1.0119 |
| No log | 1.0625 | 34 | 1.1216 | 0.1142 | 1.1216 | 1.0591 |
| No log | 1.125 | 36 | 1.1869 | 0.0 | 1.1869 | 1.0894 |
| No log | 1.1875 | 38 | 1.1328 | 0.0996 | 1.1328 | 1.0643 |
| No log | 1.25 | 40 | 0.9713 | 0.4167 | 0.9713 | 0.9855 |
| No log | 1.3125 | 42 | 0.9117 | 0.4031 | 0.9117 | 0.9548 |
| No log | 1.375 | 44 | 0.9185 | 0.4218 | 0.9185 | 0.9584 |
| No log | 1.4375 | 46 | 0.9131 | 0.4512 | 0.9131 | 0.9556 |
| No log | 1.5 | 48 | 0.9881 | 0.3790 | 0.9881 | 0.9940 |
| No log | 1.5625 | 50 | 1.1039 | 0.2513 | 1.1039 | 1.0507 |
| No log | 1.625 | 52 | 1.1056 | 0.2850 | 1.1056 | 1.0515 |
| No log | 1.6875 | 54 | 0.9419 | 0.375 | 0.9419 | 0.9705 |
| No log | 1.75 | 56 | 0.9125 | 0.2314 | 0.9125 | 0.9553 |
| No log | 1.8125 | 58 | 1.0117 | 0.1799 | 1.0117 | 1.0058 |
| No log | 1.875 | 60 | 1.0029 | 0.1545 | 1.0029 | 1.0014 |
| No log | 1.9375 | 62 | 0.9630 | 0.1783 | 0.9630 | 0.9813 |
| No log | 2.0 | 64 | 0.9759 | 0.3310 | 0.9759 | 0.9879 |
| No log | 2.0625 | 66 | 0.9466 | 0.4167 | 0.9466 | 0.9730 |
| No log | 2.125 | 68 | 0.8062 | 0.3435 | 0.8062 | 0.8979 |
| No log | 2.1875 | 70 | 0.7766 | 0.3652 | 0.7766 | 0.8813 |
| No log | 2.25 | 72 | 0.8180 | 0.3164 | 0.8180 | 0.9044 |
| No log | 2.3125 | 74 | 0.7902 | 0.3603 | 0.7902 | 0.8889 |
| No log | 2.375 | 76 | 0.7130 | 0.4831 | 0.7130 | 0.8444 |
| No log | 2.4375 | 78 | 0.7098 | 0.5763 | 0.7098 | 0.8425 |
| No log | 2.5 | 80 | 0.7035 | 0.5559 | 0.7035 | 0.8387 |
| No log | 2.5625 | 82 | 0.6607 | 0.5153 | 0.6607 | 0.8128 |
| No log | 2.625 | 84 | 0.6674 | 0.5562 | 0.6674 | 0.8170 |
| No log | 2.6875 | 86 | 0.6472 | 0.6272 | 0.6472 | 0.8045 |
| No log | 2.75 | 88 | 0.6978 | 0.6015 | 0.6978 | 0.8353 |
| No log | 2.8125 | 90 | 0.8254 | 0.5614 | 0.8254 | 0.9085 |
| No log | 2.875 | 92 | 1.0173 | 0.3942 | 1.0173 | 1.0086 |
| No log | 2.9375 | 94 | 1.0334 | 0.4073 | 1.0334 | 1.0166 |
| No log | 3.0 | 96 | 0.9522 | 0.4668 | 0.9522 | 0.9758 |
| No log | 3.0625 | 98 | 0.8172 | 0.6035 | 0.8172 | 0.9040 |
| No log | 3.125 | 100 | 0.7584 | 0.5902 | 0.7584 | 0.8708 |
| No log | 3.1875 | 102 | 0.7434 | 0.5675 | 0.7434 | 0.8622 |
| No log | 3.25 | 104 | 0.7553 | 0.5521 | 0.7553 | 0.8691 |
| No log | 3.3125 | 106 | 0.6617 | 0.6071 | 0.6617 | 0.8134 |
| No log | 3.375 | 108 | 0.6579 | 0.6445 | 0.6579 | 0.8111 |
| No log | 3.4375 | 110 | 0.7094 | 0.6529 | 0.7094 | 0.8423 |
| No log | 3.5 | 112 | 0.7746 | 0.5275 | 0.7746 | 0.8801 |
| No log | 3.5625 | 114 | 0.8172 | 0.5485 | 0.8172 | 0.9040 |
| No log | 3.625 | 116 | 0.7899 | 0.5239 | 0.7899 | 0.8887 |
| No log | 3.6875 | 118 | 0.8011 | 0.5968 | 0.8011 | 0.8950 |
| No log | 3.75 | 120 | 0.8089 | 0.6141 | 0.8089 | 0.8994 |
| No log | 3.8125 | 122 | 0.7102 | 0.6147 | 0.7102 | 0.8427 |
| No log | 3.875 | 124 | 0.6779 | 0.5495 | 0.6779 | 0.8234 |
| No log | 3.9375 | 126 | 0.6320 | 0.5603 | 0.6320 | 0.7950 |
| No log | 4.0 | 128 | 0.6061 | 0.5934 | 0.6061 | 0.7785 |
| No log | 4.0625 | 130 | 0.5689 | 0.6886 | 0.5689 | 0.7543 |
| No log | 4.125 | 132 | 0.5828 | 0.6719 | 0.5828 | 0.7634 |
| No log | 4.1875 | 134 | 0.5386 | 0.6878 | 0.5386 | 0.7339 |
| No log | 4.25 | 136 | 0.4913 | 0.7231 | 0.4913 | 0.7009 |
| No log | 4.3125 | 138 | 0.4835 | 0.7182 | 0.4835 | 0.6954 |
| No log | 4.375 | 140 | 0.5285 | 0.7483 | 0.5285 | 0.7270 |
| No log | 4.4375 | 142 | 0.6167 | 0.7469 | 0.6167 | 0.7853 |
| No log | 4.5 | 144 | 0.5436 | 0.7437 | 0.5436 | 0.7373 |
| No log | 4.5625 | 146 | 0.4737 | 0.7544 | 0.4737 | 0.6883 |
| No log | 4.625 | 148 | 0.4855 | 0.7449 | 0.4855 | 0.6967 |
| No log | 4.6875 | 150 | 0.5315 | 0.7437 | 0.5315 | 0.7291 |
| No log | 4.75 | 152 | 0.6915 | 0.6653 | 0.6915 | 0.8315 |
| No log | 4.8125 | 154 | 0.7098 | 0.6061 | 0.7098 | 0.8425 |
| No log | 4.875 | 156 | 0.6298 | 0.6053 | 0.6298 | 0.7936 |
| No log | 4.9375 | 158 | 0.6191 | 0.6301 | 0.6191 | 0.7868 |
| No log | 5.0 | 160 | 0.6033 | 0.6311 | 0.6033 | 0.7767 |
| No log | 5.0625 | 162 | 0.6026 | 0.5798 | 0.6026 | 0.7763 |
| No log | 5.125 | 164 | 0.7041 | 0.6170 | 0.7041 | 0.8391 |
| No log | 5.1875 | 166 | 0.9354 | 0.4854 | 0.9354 | 0.9671 |
| No log | 5.25 | 168 | 0.9265 | 0.5404 | 0.9265 | 0.9626 |
| No log | 5.3125 | 170 | 0.7382 | 0.6071 | 0.7382 | 0.8592 |
| No log | 5.375 | 172 | 0.6831 | 0.6743 | 0.6831 | 0.8265 |
| No log | 5.4375 | 174 | 0.7235 | 0.5995 | 0.7235 | 0.8506 |
| No log | 5.5 | 176 | 0.7447 | 0.5800 | 0.7447 | 0.8629 |
| No log | 5.5625 | 178 | 0.6538 | 0.6362 | 0.6538 | 0.8086 |
| No log | 5.625 | 180 | 0.5523 | 0.6973 | 0.5523 | 0.7432 |
| No log | 5.6875 | 182 | 0.5142 | 0.6788 | 0.5142 | 0.7171 |
| No log | 5.75 | 184 | 0.5317 | 0.6748 | 0.5317 | 0.7292 |
| No log | 5.8125 | 186 | 0.6390 | 0.6079 | 0.6390 | 0.7994 |
| No log | 5.875 | 188 | 0.6886 | 0.5943 | 0.6886 | 0.8298 |
| No log | 5.9375 | 190 | 0.6433 | 0.6229 | 0.6433 | 0.8021 |
| No log | 6.0 | 192 | 0.5722 | 0.6746 | 0.5722 | 0.7564 |
| No log | 6.0625 | 194 | 0.5545 | 0.7436 | 0.5545 | 0.7447 |
| No log | 6.125 | 196 | 0.5597 | 0.7436 | 0.5597 | 0.7481 |
| No log | 6.1875 | 198 | 0.5615 | 0.7079 | 0.5615 | 0.7493 |
| No log | 6.25 | 200 | 0.7072 | 0.6563 | 0.7072 | 0.8410 |
| No log | 6.3125 | 202 | 0.7178 | 0.6466 | 0.7178 | 0.8472 |
| No log | 6.375 | 204 | 0.5644 | 0.7368 | 0.5644 | 0.7513 |
| No log | 6.4375 | 206 | 0.4260 | 0.6980 | 0.4260 | 0.6527 |
| No log | 6.5 | 208 | 0.5349 | 0.6974 | 0.5349 | 0.7314 |
| No log | 6.5625 | 210 | 0.5654 | 0.6974 | 0.5654 | 0.7520 |
| No log | 6.625 | 212 | 0.4946 | 0.6087 | 0.4946 | 0.7033 |
| No log | 6.6875 | 214 | 0.5122 | 0.6296 | 0.5122 | 0.7157 |
| No log | 6.75 | 216 | 0.5672 | 0.5811 | 0.5672 | 0.7531 |
| No log | 6.8125 | 218 | 0.5607 | 0.6301 | 0.5607 | 0.7488 |
| No log | 6.875 | 220 | 0.5755 | 0.6014 | 0.5755 | 0.7586 |
| No log | 6.9375 | 222 | 0.6424 | 0.6015 | 0.6424 | 0.8015 |
| No log | 7.0 | 224 | 0.7807 | 0.6029 | 0.7807 | 0.8836 |
| No log | 7.0625 | 226 | 0.9065 | 0.5123 | 0.9065 | 0.9521 |
| No log | 7.125 | 228 | 0.8704 | 0.5145 | 0.8704 | 0.9330 |
| No log | 7.1875 | 230 | 0.7475 | 0.5147 | 0.7475 | 0.8646 |
| No log | 7.25 | 232 | 0.6928 | 0.4809 | 0.6928 | 0.8324 |
| No log | 7.3125 | 234 | 0.6443 | 0.5232 | 0.6443 | 0.8027 |
| No log | 7.375 | 236 | 0.6312 | 0.5949 | 0.6312 | 0.7945 |
| No log | 7.4375 | 238 | 0.6608 | 0.4937 | 0.6608 | 0.8129 |
| No log | 7.5 | 240 | 0.7649 | 0.5405 | 0.7649 | 0.8746 |
| No log | 7.5625 | 242 | 0.7997 | 0.6110 | 0.7997 | 0.8942 |
| No log | 7.625 | 244 | 0.6729 | 0.6275 | 0.6729 | 0.8203 |
| No log | 7.6875 | 246 | 0.5453 | 0.7477 | 0.5453 | 0.7384 |
| No log | 7.75 | 248 | 0.4845 | 0.7283 | 0.4845 | 0.6961 |
| No log | 7.8125 | 250 | 0.5035 | 0.7283 | 0.5035 | 0.7096 |
| No log | 7.875 | 252 | 0.5655 | 0.7477 | 0.5655 | 0.7520 |
| No log | 7.9375 | 254 | 0.5849 | 0.7531 | 0.5849 | 0.7648 |
| No log | 8.0 | 256 | 0.5314 | 0.7217 | 0.5314 | 0.7290 |
| No log | 8.0625 | 258 | 0.4647 | 0.7171 | 0.4647 | 0.6817 |
| No log | 8.125 | 260 | 0.4610 | 0.7179 | 0.4610 | 0.6790 |
| No log | 8.1875 | 262 | 0.4578 | 0.7066 | 0.4578 | 0.6766 |
| No log | 8.25 | 264 | 0.5117 | 0.7492 | 0.5117 | 0.7153 |
| No log | 8.3125 | 266 | 0.6155 | 0.6401 | 0.6155 | 0.7846 |
| No log | 8.375 | 268 | 0.6728 | 0.6151 | 0.6728 | 0.8203 |
| No log | 8.4375 | 270 | 0.6915 | 0.5734 | 0.6915 | 0.8316 |
| No log | 8.5 | 272 | 0.6398 | 0.6102 | 0.6398 | 0.7998 |
| No log | 8.5625 | 274 | 0.6145 | 0.6065 | 0.6145 | 0.7839 |
| No log | 8.625 | 276 | 0.6022 | 0.5747 | 0.6022 | 0.7760 |
| No log | 8.6875 | 278 | 0.5641 | 0.6198 | 0.5641 | 0.7511 |
| No log | 8.75 | 280 | 0.6241 | 0.5579 | 0.6241 | 0.7900 |
| No log | 8.8125 | 282 | 0.6690 | 0.5346 | 0.6690 | 0.8179 |
| No log | 8.875 | 284 | 0.6612 | 0.5463 | 0.6612 | 0.8131 |
| No log | 8.9375 | 286 | 0.6295 | 0.5663 | 0.6295 | 0.7934 |
| No log | 9.0 | 288 | 0.6285 | 0.5856 | 0.6285 | 0.7928 |
| No log | 9.0625 | 290 | 0.5589 | 0.6310 | 0.5589 | 0.7476 |
| No log | 9.125 | 292 | 0.5466 | 0.6420 | 0.5466 | 0.7393 |
| No log | 9.1875 | 294 | 0.6236 | 0.5833 | 0.6236 | 0.7897 |
| No log | 9.25 | 296 | 0.7751 | 0.5920 | 0.7751 | 0.8804 |
| No log | 9.3125 | 298 | 0.8750 | 0.5668 | 0.8750 | 0.9354 |
| No log | 9.375 | 300 | 0.8819 | 0.5330 | 0.8819 | 0.9391 |
| No log | 9.4375 | 302 | 0.7336 | 0.5320 | 0.7336 | 0.8565 |
| No log | 9.5 | 304 | 0.6952 | 0.5644 | 0.6952 | 0.8338 |
| No log | 9.5625 | 306 | 0.6473 | 0.5663 | 0.6473 | 0.8045 |
| No log | 9.625 | 308 | 0.6357 | 0.6151 | 0.6357 | 0.7973 |
| No log | 9.6875 | 310 | 0.7411 | 0.5614 | 0.7411 | 0.8609 |
| No log | 9.75 | 312 | 0.9103 | 0.5943 | 0.9103 | 0.9541 |
| No log | 9.8125 | 314 | 0.9162 | 0.5943 | 0.9162 | 0.9572 |
| No log | 9.875 | 316 | 0.7175 | 0.5631 | 0.7175 | 0.8470 |
| No log | 9.9375 | 318 | 0.5872 | 0.5927 | 0.5872 | 0.7663 |
| No log | 10.0 | 320 | 0.5730 | 0.6301 | 0.5730 | 0.7570 |
| No log | 10.0625 | 322 | 0.5982 | 0.5733 | 0.5982 | 0.7734 |
| No log | 10.125 | 324 | 0.6469 | 0.5437 | 0.6469 | 0.8043 |
| No log | 10.1875 | 326 | 0.7588 | 0.5320 | 0.7588 | 0.8711 |
| No log | 10.25 | 328 | 0.8064 | 0.5272 | 0.8064 | 0.8980 |
| No log | 10.3125 | 330 | 0.7763 | 0.5272 | 0.7763 | 0.8811 |
| No log | 10.375 | 332 | 0.7256 | 0.5750 | 0.7256 | 0.8518 |
| No log | 10.4375 | 334 | 0.7657 | 0.5562 | 0.7657 | 0.8750 |
| No log | 10.5 | 336 | 0.8493 | 0.4969 | 0.8493 | 0.9216 |
| No log | 10.5625 | 338 | 0.8219 | 0.4775 | 0.8219 | 0.9066 |
| No log | 10.625 | 340 | 0.8065 | 0.4775 | 0.8065 | 0.8981 |
| No log | 10.6875 | 342 | 0.6869 | 0.5265 | 0.6869 | 0.8288 |
| No log | 10.75 | 344 | 0.5937 | 0.6004 | 0.5937 | 0.7705 |
| No log | 10.8125 | 346 | 0.5117 | 0.7277 | 0.5117 | 0.7153 |
| No log | 10.875 | 348 | 0.4934 | 0.7171 | 0.4934 | 0.7024 |
| No log | 10.9375 | 350 | 0.5352 | 0.6719 | 0.5352 | 0.7316 |
| No log | 11.0 | 352 | 0.6561 | 0.5636 | 0.6561 | 0.8100 |
| No log | 11.0625 | 354 | 0.6766 | 0.5543 | 0.6766 | 0.8225 |
| No log | 11.125 | 356 | 0.6126 | 0.5491 | 0.6126 | 0.7827 |
| No log | 11.1875 | 358 | 0.5561 | 0.6413 | 0.5561 | 0.7457 |
| No log | 11.25 | 360 | 0.5310 | 0.7051 | 0.5310 | 0.7287 |
| No log | 11.3125 | 362 | 0.5524 | 0.6639 | 0.5524 | 0.7432 |
| No log | 11.375 | 364 | 0.6326 | 0.6226 | 0.6326 | 0.7954 |
| No log | 11.4375 | 366 | 0.6798 | 0.6385 | 0.6798 | 0.8245 |
| No log | 11.5 | 368 | 0.6136 | 0.6247 | 0.6136 | 0.7833 |
| No log | 11.5625 | 370 | 0.5218 | 0.6946 | 0.5218 | 0.7224 |
| No log | 11.625 | 372 | 0.4883 | 0.6597 | 0.4883 | 0.6988 |
| No log | 11.6875 | 374 | 0.4914 | 0.6175 | 0.4914 | 0.7010 |
| No log | 11.75 | 376 | 0.5096 | 0.6499 | 0.5096 | 0.7139 |
| No log | 11.8125 | 378 | 0.5218 | 0.6392 | 0.5218 | 0.7223 |
| No log | 11.875 | 380 | 0.5099 | 0.6764 | 0.5099 | 0.7141 |
| No log | 11.9375 | 382 | 0.5140 | 0.7012 | 0.5140 | 0.7169 |
| No log | 12.0 | 384 | 0.5074 | 0.7012 | 0.5074 | 0.7123 |
| No log | 12.0625 | 386 | 0.4942 | 0.7012 | 0.4942 | 0.7030 |
| No log | 12.125 | 388 | 0.4984 | 0.7213 | 0.4984 | 0.7059 |
| No log | 12.1875 | 390 | 0.5431 | 0.6940 | 0.5431 | 0.7370 |
| No log | 12.25 | 392 | 0.6403 | 0.7149 | 0.6403 | 0.8002 |
| No log | 12.3125 | 394 | 0.6484 | 0.6878 | 0.6484 | 0.8052 |
| No log | 12.375 | 396 | 0.5750 | 0.7036 | 0.5750 | 0.7583 |
| No log | 12.4375 | 398 | 0.4893 | 0.7341 | 0.4893 | 0.6995 |
| No log | 12.5 | 400 | 0.4747 | 0.7035 | 0.4747 | 0.6890 |
| No log | 12.5625 | 402 | 0.4704 | 0.7101 | 0.4704 | 0.6858 |
| No log | 12.625 | 404 | 0.4819 | 0.7141 | 0.4819 | 0.6942 |
| No log | 12.6875 | 406 | 0.5444 | 0.6815 | 0.5444 | 0.7378 |
| No log | 12.75 | 408 | 0.7225 | 0.6020 | 0.7225 | 0.8500 |
| No log | 12.8125 | 410 | 0.8058 | 0.5546 | 0.8058 | 0.8977 |
| No log | 12.875 | 412 | 0.7505 | 0.5177 | 0.7505 | 0.8663 |
| No log | 12.9375 | 414 | 0.6641 | 0.5463 | 0.6641 | 0.8149 |
| No log | 13.0 | 416 | 0.6724 | 0.5515 | 0.6724 | 0.8200 |
| No log | 13.0625 | 418 | 0.7042 | 0.5045 | 0.7042 | 0.8392 |
| No log | 13.125 | 420 | 0.7629 | 0.4157 | 0.7629 | 0.8735 |
| No log | 13.1875 | 422 | 0.7825 | 0.4157 | 0.7825 | 0.8846 |
| No log | 13.25 | 424 | 0.8108 | 0.4175 | 0.8108 | 0.9004 |
| No log | 13.3125 | 426 | 0.8044 | 0.4197 | 0.8044 | 0.8969 |
| No log | 13.375 | 428 | 0.8090 | 0.4326 | 0.8090 | 0.8995 |
| No log | 13.4375 | 430 | 0.7798 | 0.4667 | 0.7798 | 0.8831 |
| No log | 13.5 | 432 | 0.7228 | 0.5360 | 0.7228 | 0.8502 |
| No log | 13.5625 | 434 | 0.6733 | 0.5824 | 0.6733 | 0.8205 |
| No log | 13.625 | 436 | 0.6322 | 0.5875 | 0.6322 | 0.7951 |
| No log | 13.6875 | 438 | 0.6288 | 0.5875 | 0.6288 | 0.7930 |
| No log | 13.75 | 440 | 0.6844 | 0.5390 | 0.6844 | 0.8273 |
| No log | 13.8125 | 442 | 0.7568 | 0.5320 | 0.7568 | 0.8699 |
| No log | 13.875 | 444 | 0.7395 | 0.5420 | 0.7395 | 0.8599 |
| No log | 13.9375 | 446 | 0.6493 | 0.5567 | 0.6493 | 0.8058 |
| No log | 14.0 | 448 | 0.6001 | 0.6021 | 0.6001 | 0.7746 |
| No log | 14.0625 | 450 | 0.5753 | 0.6121 | 0.5753 | 0.7585 |
| No log | 14.125 | 452 | 0.5772 | 0.6121 | 0.5772 | 0.7598 |
| No log | 14.1875 | 454 | 0.5761 | 0.5970 | 0.5761 | 0.7590 |
| No log | 14.25 | 456 | 0.5841 | 0.6220 | 0.5841 | 0.7643 |
| No log | 14.3125 | 458 | 0.6320 | 0.6336 | 0.6320 | 0.7950 |
| No log | 14.375 | 460 | 0.6716 | 0.6053 | 0.6716 | 0.8195 |
| No log | 14.4375 | 462 | 0.6388 | 0.6154 | 0.6388 | 0.7992 |
| No log | 14.5 | 464 | 0.5529 | 0.6601 | 0.5529 | 0.7435 |
| No log | 14.5625 | 466 | 0.4954 | 0.6728 | 0.4954 | 0.7038 |
| No log | 14.625 | 468 | 0.4680 | 0.7402 | 0.4680 | 0.6841 |
| No log | 14.6875 | 470 | 0.4710 | 0.7285 | 0.4710 | 0.6863 |
| No log | 14.75 | 472 | 0.5060 | 0.6871 | 0.5060 | 0.7113 |
| No log | 14.8125 | 474 | 0.5924 | 0.6290 | 0.5924 | 0.7697 |
| No log | 14.875 | 476 | 0.5959 | 0.6489 | 0.5959 | 0.7720 |
| No log | 14.9375 | 478 | 0.5627 | 0.6422 | 0.5627 | 0.7502 |
| No log | 15.0 | 480 | 0.5402 | 0.6983 | 0.5402 | 0.7349 |
| No log | 15.0625 | 482 | 0.4998 | 0.7193 | 0.4998 | 0.7070 |
| No log | 15.125 | 484 | 0.4732 | 0.6659 | 0.4732 | 0.6879 |
| No log | 15.1875 | 486 | 0.4898 | 0.6779 | 0.4898 | 0.6998 |
| No log | 15.25 | 488 | 0.4895 | 0.7012 | 0.4895 | 0.6996 |
| No log | 15.3125 | 490 | 0.5249 | 0.7388 | 0.5249 | 0.7245 |
| No log | 15.375 | 492 | 0.5612 | 0.6885 | 0.5612 | 0.7491 |
| No log | 15.4375 | 494 | 0.5642 | 0.6619 | 0.5642 | 0.7511 |
| No log | 15.5 | 496 | 0.6264 | 0.6640 | 0.6264 | 0.7914 |
| No log | 15.5625 | 498 | 0.6910 | 0.6589 | 0.6910 | 0.8313 |
| 0.2924 | 15.625 | 500 | 0.6767 | 0.6094 | 0.6767 | 0.8226 |
| 0.2924 | 15.6875 | 502 | 0.6553 | 0.5952 | 0.6553 | 0.8095 |
| 0.2924 | 15.75 | 504 | 0.6032 | 0.6021 | 0.6032 | 0.7767 |
| 0.2924 | 15.8125 | 506 | 0.5653 | 0.6290 | 0.5653 | 0.7519 |
| 0.2924 | 15.875 | 508 | 0.5607 | 0.6582 | 0.5607 | 0.7488 |
| 0.2924 | 15.9375 | 510 | 0.5676 | 0.6132 | 0.5676 | 0.7534 |
| 0.2924 | 16.0 | 512 | 0.6098 | 0.6275 | 0.6098 | 0.7809 |
| 0.2924 | 16.0625 | 514 | 0.6297 | 0.6281 | 0.6297 | 0.7935 |
| 0.2924 | 16.125 | 516 | 0.6173 | 0.6608 | 0.6173 | 0.7857 |
| 0.2924 | 16.1875 | 518 | 0.5480 | 0.7013 | 0.5480 | 0.7403 |
| 0.2924 | 16.25 | 520 | 0.4986 | 0.7198 | 0.4986 | 0.7061 |
| 0.2924 | 16.3125 | 522 | 0.5003 | 0.7198 | 0.5003 | 0.7073 |
| 0.2924 | 16.375 | 524 | 0.5152 | 0.7191 | 0.5152 | 0.7177 |
| 0.2924 | 16.4375 | 526 | 0.5144 | 0.7348 | 0.5144 | 0.7172 |
| 0.2924 | 16.5 | 528 | 0.5287 | 0.6821 | 0.5287 | 0.7271 |
| 0.2924 | 16.5625 | 530 | 0.5562 | 0.6791 | 0.5562 | 0.7458 |
| 0.2924 | 16.625 | 532 | 0.5832 | 0.6529 | 0.5832 | 0.7637 |
| 0.2924 | 16.6875 | 534 | 0.5558 | 0.6993 | 0.5558 | 0.7455 |
| 0.2924 | 16.75 | 536 | 0.5247 | 0.6842 | 0.5247 | 0.7244 |
| 0.2924 | 16.8125 | 538 | 0.5110 | 0.6995 | 0.5110 | 0.7149 |
| 0.2924 | 16.875 | 540 | 0.5042 | 0.7131 | 0.5042 | 0.7100 |
| 0.2924 | 16.9375 | 542 | 0.4960 | 0.6886 | 0.4960 | 0.7043 |
| 0.2924 | 17.0 | 544 | 0.5230 | 0.6871 | 0.5230 | 0.7232 |
| 0.2924 | 17.0625 | 546 | 0.5488 | 0.6556 | 0.5488 | 0.7408 |
| 0.2924 | 17.125 | 548 | 0.5529 | 0.6278 | 0.5529 | 0.7436 |
| 0.2924 | 17.1875 | 550 | 0.5724 | 0.6450 | 0.5724 | 0.7566 |
| 0.2924 | 17.25 | 552 | 0.5595 | 0.6417 | 0.5595 | 0.7480 |
| 0.2924 | 17.3125 | 554 | 0.5368 | 0.7050 | 0.5368 | 0.7326 |
| 0.2924 | 17.375 | 556 | 0.5374 | 0.7015 | 0.5374 | 0.7331 |
| 0.2924 | 17.4375 | 558 | 0.5410 | 0.6906 | 0.5410 | 0.7355 |
| 0.2924 | 17.5 | 560 | 0.5600 | 0.6411 | 0.5600 | 0.7483 |
| 0.2924 | 17.5625 | 562 | 0.5271 | 0.7191 | 0.5271 | 0.7260 |
| 0.2924 | 17.625 | 564 | 0.4938 | 0.6916 | 0.4938 | 0.7027 |
| 0.2924 | 17.6875 | 566 | 0.4951 | 0.6916 | 0.4951 | 0.7036 |
| 0.2924 | 17.75 | 568 | 0.5220 | 0.7059 | 0.5220 | 0.7225 |
| 0.2924 | 17.8125 | 570 | 0.5961 | 0.6413 | 0.5961 | 0.7721 |
| 0.2924 | 17.875 | 572 | 0.6279 | 0.5902 | 0.6279 | 0.7924 |
| 0.2924 | 17.9375 | 574 | 0.6013 | 0.6596 | 0.6013 | 0.7754 |
| 0.2924 | 18.0 | 576 | 0.5338 | 0.6584 | 0.5338 | 0.7306 |
| 0.2924 | 18.0625 | 578 | 0.5031 | 0.6806 | 0.5031 | 0.7093 |
| 0.2924 | 18.125 | 580 | 0.4860 | 0.7081 | 0.4860 | 0.6971 |
| 0.2924 | 18.1875 | 582 | 0.4981 | 0.6929 | 0.4981 | 0.7058 |
| 0.2924 | 18.25 | 584 | 0.5627 | 0.7203 | 0.5627 | 0.7501 |
| 0.2924 | 18.3125 | 586 | 0.6312 | 0.6878 | 0.6312 | 0.7945 |
| 0.2924 | 18.375 | 588 | 0.6485 | 0.6909 | 0.6485 | 0.8053 |
| 0.2924 | 18.4375 | 590 | 0.5977 | 0.6738 | 0.5977 | 0.7731 |
| 0.2924 | 18.5 | 592 | 0.5620 | 0.6520 | 0.5620 | 0.7496 |
| 0.2924 | 18.5625 | 594 | 0.5461 | 0.6753 | 0.5461 | 0.7390 |
| 0.2924 | 18.625 | 596 | 0.5394 | 0.6925 | 0.5394 | 0.7344 |
| 0.2924 | 18.6875 | 598 | 0.5242 | 0.7109 | 0.5242 | 0.7240 |
| 0.2924 | 18.75 | 600 | 0.5296 | 0.6946 | 0.5296 | 0.7277 |
| 0.2924 | 18.8125 | 602 | 0.5359 | 0.6983 | 0.5359 | 0.7321 |
| 0.2924 | 18.875 | 604 | 0.5611 | 0.6871 | 0.5611 | 0.7491 |
| 0.2924 | 18.9375 | 606 | 0.5999 | 0.6035 | 0.5999 | 0.7745 |
| 0.2924 | 19.0 | 608 | 0.6188 | 0.6035 | 0.6188 | 0.7867 |
| 0.2924 | 19.0625 | 610 | 0.6217 | 0.6181 | 0.6217 | 0.7885 |
| 0.2924 | 19.125 | 612 | 0.5918 | 0.7030 | 0.5918 | 0.7693 |
| 0.2924 | 19.1875 | 614 | 0.5289 | 0.6766 | 0.5289 | 0.7273 |
| 0.2924 | 19.25 | 616 | 0.4887 | 0.7291 | 0.4887 | 0.6990 |
| 0.2924 | 19.3125 | 618 | 0.4783 | 0.7253 | 0.4783 | 0.6916 |
| 0.2924 | 19.375 | 620 | 0.4849 | 0.7049 | 0.4849 | 0.6963 |
| 0.2924 | 19.4375 | 622 | 0.4865 | 0.7253 | 0.4865 | 0.6975 |
| 0.2924 | 19.5 | 624 | 0.5522 | 0.6869 | 0.5522 | 0.7431 |
| 0.2924 | 19.5625 | 626 | 0.6427 | 0.6071 | 0.6427 | 0.8017 |
| 0.2924 | 19.625 | 628 | 0.6374 | 0.6071 | 0.6374 | 0.7984 |
| 0.2924 | 19.6875 | 630 | 0.5602 | 0.6869 | 0.5602 | 0.7485 |
| 0.2924 | 19.75 | 632 | 0.4917 | 0.6753 | 0.4917 | 0.7012 |
| 0.2924 | 19.8125 | 634 | 0.4830 | 0.6750 | 0.4830 | 0.6950 |
| 0.2924 | 19.875 | 636 | 0.4962 | 0.6566 | 0.4962 | 0.7044 |
| 0.2924 | 19.9375 | 638 | 0.4991 | 0.6598 | 0.4991 | 0.7065 |
| 0.2924 | 20.0 | 640 | 0.5182 | 0.6805 | 0.5182 | 0.7198 |
| 0.2924 | 20.0625 | 642 | 0.5493 | 0.6878 | 0.5493 | 0.7412 |
| 0.2924 | 20.125 | 644 | 0.6080 | 0.6605 | 0.6080 | 0.7797 |
| 0.2924 | 20.1875 | 646 | 0.6353 | 0.6136 | 0.6353 | 0.7970 |
| 0.2924 | 20.25 | 648 | 0.6259 | 0.6296 | 0.6259 | 0.7912 |
| 0.2924 | 20.3125 | 650 | 0.6217 | 0.6263 | 0.6217 | 0.7885 |
| 0.2924 | 20.375 | 652 | 0.5919 | 0.6740 | 0.5919 | 0.7694 |
| 0.2924 | 20.4375 | 654 | 0.5643 | 0.6938 | 0.5643 | 0.7512 |
| 0.2924 | 20.5 | 656 | 0.5456 | 0.7109 | 0.5456 | 0.7386 |
| 0.2924 | 20.5625 | 658 | 0.5330 | 0.7223 | 0.5330 | 0.7300 |
| 0.2924 | 20.625 | 660 | 0.5325 | 0.7335 | 0.5325 | 0.7297 |
| 0.2924 | 20.6875 | 662 | 0.5473 | 0.7385 | 0.5473 | 0.7398 |
| 0.2924 | 20.75 | 664 | 0.5997 | 0.7001 | 0.5997 | 0.7744 |
| 0.2924 | 20.8125 | 666 | 0.6200 | 0.6455 | 0.6200 | 0.7874 |
| 0.2924 | 20.875 | 668 | 0.6244 | 0.6385 | 0.6244 | 0.7902 |
| 0.2924 | 20.9375 | 670 | 0.6483 | 0.5953 | 0.6483 | 0.8052 |
| 0.2924 | 21.0 | 672 | 0.6735 | 0.5463 | 0.6735 | 0.8207 |
| 0.2924 | 21.0625 | 674 | 0.6887 | 0.5229 | 0.6887 | 0.8299 |
| 0.2924 | 21.125 | 676 | 0.6850 | 0.5028 | 0.6850 | 0.8277 |
| 0.2924 | 21.1875 | 678 | 0.6880 | 0.5028 | 0.6880 | 0.8294 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
VERSIL91/a58ac106-6bc2-4e63-bb6f-30052f9b9185
|
VERSIL91
| 2025-01-12T19:53:04Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:51:15Z
|
---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a58ac106-6bc2-4e63-bb6f-30052f9b9185
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cc55b427d5fc1cea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cc55b427d5fc1cea_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/a58ac106-6bc2-4e63-bb6f-30052f9b9185
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/cc55b427d5fc1cea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a58ac106-6bc2-4e63-bb6f-30052f9b9185
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a58ac106-6bc2-4e63-bb6f-30052f9b9185
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a58ac106-6bc2-4e63-bb6f-30052f9b9185
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.1517 | 0.0182 | 1 | 7.0191 |
| 7.2267 | 0.0912 | 5 | 6.9810 |
| 6.9111 | 0.1824 | 10 | 6.8554 |
| 6.9094 | 0.2737 | 15 | 6.7058 |
| 6.7143 | 0.3649 | 20 | 6.6674 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Viscoke/qx25
|
Viscoke
| 2025-01-12T19:53:04Z
| 82
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T18:57:34Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prxy5605/619a503b-e15e-48f6-970e-7f84e37b7bf0
|
prxy5605
| 2025-01-12T19:52:21Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:39:26Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 619a503b-e15e-48f6-970e-7f84e37b7bf0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 35e42979deef2ace_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/35e42979deef2ace_train_data.json
type:
field_instruction: prompt
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: prxy5605/619a503b-e15e-48f6-970e-7f84e37b7bf0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/35e42979deef2ace_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a25c2d0-3f47-4475-82ef-74ba7cd1fcaa
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 619a503b-e15e-48f6-970e-7f84e37b7bf0
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 264
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0038 | 1 | 2.0063 |
| 1.0091 | 0.2502 | 66 | 1.3693 |
| 1.0864 | 0.5005 | 132 | 1.2408 |
| 1.4613 | 0.7507 | 198 | 1.1725 |
| 1.0496 | 1.0028 | 264 | 1.1479 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
DebopamC/Text-to-SQL__Qwen2.5-Coder-3B-FineTuned
|
DebopamC
| 2025-01-12T19:51:37Z
| 25
| 0
| null |
[
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-05T17:50:16Z
|
---
license: apache-2.0
---
|
AmberYifan/Gemma-7B-sft-gen-dpo-10k
|
AmberYifan
| 2025-01-12T19:50:11Z
| 17
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T18:15:07Z
|
---
base_model: AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Gemma-7B-sft-gen-dpo-10k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Gemma-7B-sft-gen-dpo-10k
This model is a fine-tuned version of [AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Gemma-7B-sft-gen-dpo-10k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/k8ru4b4w)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
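For context, here is a heavily simplified sketch of what DPO training with TRL looks like; the preference pairs and config values are illustrative only, not the data or settings used for this model, and it assumes a recent TRL release that provides `DPOConfig`:
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "AmberYifan/Gemma-7b-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO consumes preference pairs: a prompt plus a chosen and a rejected completion.
pairs = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO fine-tunes the policy directly on preference pairs."],
    "rejected": ["DPO is a kind of tokenizer."],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="gemma-dpo-sketch", per_device_train_batch_size=1, max_steps=1),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```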
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu118
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF
|
Triangle104
| 2025-01-12T19:49:30Z
| 56
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"base_model:quantized:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-12T19:48:20Z
|
---
base_model: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated`](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) for more details on the model.
---
Model details:
-
Small but smart.
Fine-tuned on a vast dataset of conversations.
Able to generate human-like text with high performance for its size.
Very versatile for its size and parameter count, offering capability almost as good as Llama 3.1 8B Instruct.
Feel free to check it out!
[This model was trained for 5 hours on a T4 GPU with 15 GB VRAM.]
Developed by: Meta AI
Fine-Tuned by: Devarui379
Model type: Transformers
Language(s) (NLP): English
License: cc-by-4.0
Model Sources [optional]
Base model: meta-llama/Llama-3.2-3B-Instruct
Repository: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
Demo: Use LM Studio with the Quantized version
Uses
Use the desired system prompt when running the model in LM Studio. The optimal chat template seems to be Jinja, but feel free to experiment.
Technical Specifications
Model Architecture and Objective
Llama 3.2
Hardware
NVIDIA TESLA T4 GPU 15GB VRAM
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q5_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -c 2048
```
|
chchen/Llama-3.1-8B-Instruct-SFT-200
|
chchen
| 2025-01-12T19:48:07Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-12T19:45:03Z
|
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-SFT-200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-SFT-200
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_sft_200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2912
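One possible way to merge this LoRA adapter into the base weights for standalone use (a sketch, not from the card; it requires access to the gated Llama 3.1 base model):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "chchen/Llama-3.1-8B-Instruct-SFT-200"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype="auto")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("llama31-8b-sft-200-merged")
AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct").save_pretrained(
    "llama31-8b-sft-200-merged"
)
```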
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7576 | 4.4444 | 50 | 0.6173 |
| 0.3664 | 8.8889 | 100 | 0.2912 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
lesso01/0300f9a2-cfe5-41bb-9f8c-d50f48298b0d
|
lesso01
| 2025-01-12T19:47:59Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:46:33Z
|
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0300f9a2-cfe5-41bb-9f8c-d50f48298b0d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
datasets:
- data_files:
- fb0d93ffd295c2a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb0d93ffd295c2a8_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/0300f9a2-cfe5-41bb-9f8c-d50f48298b0d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb0d93ffd295c2a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0300f9a2-cfe5-41bb-9f8c-d50f48298b0d
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0092 | 1 | nan |
| 0.0 | 0.0462 | 5 | nan |
| 0.0 | 0.0924 | 10 | nan |
| 0.0 | 0.1386 | 15 | nan |
| 0.0 | 0.1848 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
filipesantoscv11/f92ca67b-efbc-4d17-b065-c095de7e2b56
|
filipesantoscv11
| 2025-01-12T19:47:32Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"region:us"
] | null | 2025-01-12T19:46:15Z
|
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f92ca67b-efbc-4d17-b065-c095de7e2b56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb0d93ffd295c2a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb0d93ffd295c2a8_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: filipesantoscv11/f92ca67b-efbc-4d17-b065-c095de7e2b56
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb0d93ffd295c2a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a5a77d7-23c7-4fa5-91b5-25fb954aebc0
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f92ca67b-efbc-4d17-b065-c095de7e2b56
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_HF (Hugging Face AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0092 | 1 | nan |
| 0.0 | 0.0739 | 8 | nan |
| 0.0 | 0.1478 | 16 | nan |
| 0.0 | 0.2217 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF
|
Triangle104
| 2025-01-12T19:46:49Z
| 33
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"base_model:quantized:Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-12T19:45:47Z
|
---
base_model: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated`](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) for more details on the model.
---
Model details:
-
Small but smart: fine-tuned on a large dataset of conversations.
Able to generate human-like text with strong performance for its size.
It is very versatile for its size and parameter count, offering capability close to Llama 3.1 8B Instruct.
Feel free to check it out!
[This model was trained for 5 hours on a T4 GPU with 15 GB VRAM.]
Developed by: Meta AI
Fine-tuned by: Devarui379
Model type: Transformers
Language(s) (NLP): English
License: cc-by-4.0
Model Sources [optional]
Base model: meta-llama/Llama-3.2-3B-Instruct
Repository: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
Demo: Use LM Studio with the quantized version
Uses
Use your desired system prompt in LM Studio. The optimal chat template seems to be Jinja, but feel free to experiment.
Technical Specifications
Model Architecture and Objective
Llama 3.2
Hardware
NVIDIA Tesla T4 GPU, 15 GB VRAM
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-Q4_K_M-GGUF --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q4_k_m.gguf -c 2048
```
|
0x1202/e9bfbbca-67a1-4bcc-ab50-fdfe6d558cbf
|
0x1202
| 2025-01-12T19:45:09Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-12T19:44:21Z
|
---
library_name: peft
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e9bfbbca-67a1-4bcc-ab50-fdfe6d558cbf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 120e2b58d59a1b2e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/120e2b58d59a1b2e_train_data.json
type:
field_input: original_code
field_instruction: update_snippet
field_output: final_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: 0x1202/e9bfbbca-67a1-4bcc-ab50-fdfe6d558cbf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/120e2b58d59a1b2e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 562f173b-b07d-4eb4-a59f-d230672ec843
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 562f173b-b07d-4eb4-a59f-d230672ec843
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e9bfbbca-67a1-4bcc-ab50-fdfe6d558cbf
This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 10.3748 |
| 10.3663 | 0.1480 | 100 | 10.3656 |
| 10.3467 | 0.2961 | 200 | 10.3506 |
| 10.3463 | 0.4441 | 300 | 10.3490 |
| 10.3444 | 0.5922 | 400 | 10.3488 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hgutjh/VJ4
|
hgutjh
| 2025-01-12T19:44:13Z
| 550
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-01-12T19:44:03Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/big15-31-21_00001_.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# VJ4
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/hgutjh/VJ4/tree/main) them in the Files & versions tab.
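## Use with 🧨 diffusers
A minimal text-to-image sketch, assuming the LoRA in this repo loads via `load_lora_weights` (the prompt and generation settings below are illustrative; no trigger word is declared in the metadata):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and apply the VJ4 LoRA from this repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("hgutjh/VJ4")  # assumes a single LoRA .safetensors in the repo
pipe.to("cuda")

image = pipe(
    "a cinematic portrait photo",  # illustrative prompt; no instance prompt is specified
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("vj4_example.png")
```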
|
mergekit-community/mergekit-task_arithmetic-abcjxga
|
mergekit-community
| 2025-01-12T19:41:12Z
| 21
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:merge:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:KR-X-AI/gemma-2-9b-untied",
"base_model:merge:KR-X-AI/gemma-2-9b-untied",
"base_model:sam-paech/Darkest-muse-v1",
"base_model:merge:sam-paech/Darkest-muse-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T19:33:20Z
|
---
base_model:
- KR-X-AI/gemma-2-9b-untied
- sam-paech/Darkest-muse-v1
- IlyaGusev/gemma-2-9b-it-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [KR-X-AI/gemma-2-9b-untied](https://huggingface.co/KR-X-AI/gemma-2-9b-untied) as a base.
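For intuition, task arithmetic forms the merged weights as the base weights plus a weighted sum of per-model task vectors (each donor's weights minus the base). The toy sketch below illustrates that rule on raw state dicts; it is not the actual mergekit implementation, which is driven by the YAML configuration further down.
```python
import torch

def task_arithmetic_merge(base_sd, donor_sds, weights):
    """Toy illustration: merged = base + sum_i w_i * (donor_i - base), tensor by tensor."""
    merged = {}
    for name, base_t in base_sd.items():
        delta = torch.zeros_like(base_t)
        for sd, w in zip(donor_sds, weights):
            delta += w * (sd[name].to(base_t.dtype) - base_t)
        merged[name] = base_t + delta
    return merged
```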
### Models Merged
The following models were included in the merge:
* [sam-paech/Darkest-muse-v1](https://huggingface.co/sam-paech/Darkest-muse-v1)
* [IlyaGusev/gemma-2-9b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: KR-X-AI/gemma-2-9b-untied
dtype: float32
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 42]
model: sam-paech/Darkest-muse-v1
parameters:
weight: 1.0
- layer_range: [0, 42]
model: IlyaGusev/gemma-2-9b-it-abliterated
parameters:
weight: 1.0
- layer_range: [0, 42]
model: KR-X-AI/gemma-2-9b-untied
```
|
chchen/Llama-3.1-8B-Instruct-SAA-1000
|
chchen
| 2025-01-12T19:40:55Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-12T19:20:06Z
|
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-SAA-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-SAA-1000
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_dpo_1000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1041
- Rewards/chosen: -0.0071
- Rewards/rejected: -0.0574
- Rewards/accuracies: 0.8700
- Rewards/margins: 0.0503
- Logps/rejected: -0.5741
- Logps/chosen: -0.0707
- Logits/rejected: -0.3997
- Logits/chosen: -0.3439
- Sft Loss: 0.0083
- Odds Ratio Loss: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 1.61 | 0.8889 | 50 | 1.4462 | -0.1395 | -0.1818 | 0.7900 | 0.0423 | -1.8179 | -1.3950 | -0.4872 | -0.4121 | 0.1643 | 12.8185 |
| 0.3241 | 1.7778 | 100 | 0.2648 | -0.0222 | -0.0659 | 0.8200 | 0.0438 | -0.6595 | -0.2217 | -0.4637 | -0.3875 | 0.0232 | 2.4164 |
| 0.1509 | 2.6667 | 150 | 0.1238 | -0.0084 | -0.0490 | 0.8600 | 0.0406 | -0.4900 | -0.0840 | -0.4176 | -0.3601 | 0.0101 | 1.1374 |
| 0.1335 | 3.5556 | 200 | 0.1089 | -0.0074 | -0.0505 | 0.8600 | 0.0432 | -0.5055 | -0.0738 | -0.4038 | -0.3492 | 0.0087 | 1.0023 |
| 0.1253 | 4.4444 | 250 | 0.1136 | -0.0078 | -0.0536 | 0.8800 | 0.0458 | -0.5355 | -0.0776 | -0.3998 | -0.3449 | 0.0097 | 1.0396 |
| 0.0851 | 5.3333 | 300 | 0.1041 | -0.0071 | -0.0574 | 0.8700 | 0.0503 | -0.5741 | -0.0707 | -0.3997 | -0.3439 | 0.0083 | 0.9577 |
| 0.0824 | 6.2222 | 350 | 0.1065 | -0.0073 | -0.0587 | 0.8700 | 0.0514 | -0.5869 | -0.0728 | -0.3969 | -0.3419 | 0.0088 | 0.9767 |
| 0.0869 | 7.1111 | 400 | 0.1160 | -0.0080 | -0.0625 | 0.8800 | 0.0545 | -0.6250 | -0.0801 | -0.3942 | -0.3392 | 0.0102 | 1.0581 |
| 0.0715 | 8.0 | 450 | 0.1095 | -0.0075 | -0.0618 | 0.8800 | 0.0543 | -0.6184 | -0.0750 | -0.3933 | -0.3379 | 0.0092 | 1.0028 |
| 0.0751 | 8.8889 | 500 | 0.1095 | -0.0075 | -0.0618 | 0.8800 | 0.0543 | -0.6181 | -0.0752 | -0.3939 | -0.3386 | 0.0093 | 1.0026 |
| 0.0784 | 9.7778 | 550 | 0.1089 | -0.0075 | -0.0622 | 0.8700 | 0.0547 | -0.6221 | -0.0747 | -0.3937 | -0.3381 | 0.0091 | 0.9983 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
thakkkkkk/7254df0c-d7e5-45c1-8650-89c330831582
|
thakkkkkk
| 2025-01-12T19:38:53Z
| 16
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:07:49Z
|
---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7254df0c-d7e5-45c1-8650-89c330831582
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 597f64d3ad401cba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/597f64d3ad401cba_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/7254df0c-d7e5-45c1-8650-89c330831582
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/597f64d3ad401cba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b050f9b1-cf69-4630-ae14-4b41180a7aa7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b050f9b1-cf69-4630-ae14-4b41180a7aa7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7254df0c-d7e5-45c1-8650-89c330831582
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4491 | 0.1375 | 200 | 0.4470 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AIR-hl/Mistral-7B-Base-WPO-bf16
|
AIR-hl
| 2025-01-12T19:36:25Z
| 16
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"wpo",
"alignment",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T19:16:40Z
|
---
license: apache-2.0
base_model:
- wzhouad/zephyr-7B-WPO-FP
- HuggingFaceH4/mistral-7b-sft-beta
tags:
- wpo
- mistral
- alignment
datasets:
- HuggingFaceH4/ultrafeedback_binarized
pipeline_tag: text-generation
library_name: transformers
---
Following [wzhouad/zephyr-7B-WPO-FP](https://huggingface.co/wzhouad/zephyr-7B-WPO-FP), this repository provides the original weights converted from `float32` to `bfloat16`.
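A minimal sketch of that conversion, assuming it was done with plain `transformers` (the exact script is not part of this card; the output path is hypothetical):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "wzhouad/zephyr-7B-WPO-FP"     # original float32 checkpoint
dst = "./Mistral-7B-Base-WPO-bf16"   # hypothetical local output directory

# Loading with torch_dtype=torch.bfloat16 casts the float32 weights to bfloat16.
model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(src)

model.save_pretrained(dst, safe_serialization=True)
tokenizer.save_pretrained(dst)
```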
|
Best000/27516f2d-a92f-4252-bcfc-15e88cb6bd87
|
Best000
| 2025-01-12T19:36:05Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:27:14Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27516f2d-a92f-4252-bcfc-15e88cb6bd87
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a05b72f12491e874_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a05b72f12491e874_train_data.json
type:
field_input: llama-generation
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/27516f2d-a92f-4252-bcfc-15e88cb6bd87
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a05b72f12491e874_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8545d224-ec4d-4dfb-907a-6c5cad06d476
wandb_project: birthday-sn56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8545d224-ec4d-4dfb-907a-6c5cad06d476
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 27516f2d-a92f-4252-bcfc-15e88cb6bd87
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.887 | 0.0002 | 1 | 0.8960 |
| 3.6236 | 0.0006 | 3 | 0.8897 |
| 3.8499 | 0.0013 | 6 | 0.8413 |
| 3.6246 | 0.0019 | 9 | 0.8072 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
timsek/MobileCLIP-B-OpenCLIP
|
timsek
| 2025-01-12T19:34:44Z
| 19
| 0
|
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:other",
"region:us"
] |
zero-shot-image-classification
| 2025-01-12T19:33:47Z
|
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: other
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-B (LT)** checkpoint for OpenCLIP.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
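## Usage with OpenCLIP
A minimal zero-shot classification sketch, assuming this repo follows the standard OpenCLIP hf-hub layout (config plus weights) as the upstream MobileCLIP OpenCLIP releases do; the image path and labels are illustrative:
```python
import torch
from PIL import Image
import open_clip

repo = "hf-hub:timsek/MobileCLIP-B-OpenCLIP"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical local image
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```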
|
mradermacher/Qwenstein2.5-32B-Instruct-GGUF
|
mradermacher
| 2025-01-12T19:34:21Z
| 360
| 0
|
transformers
|
[
"transformers",
"gguf",
"chat",
"conversational",
"en",
"base_model:maldv/Qwenstein2.5-32B-Instruct",
"base_model:quantized:maldv/Qwenstein2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-12T11:57:41Z
|
---
base_model: maldv/Qwenstein2.5-32B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/maldv/Qwenstein2.5-32B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
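As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings to pull one of the quants listed below; the binding choice and parameters are an assumption, not something this card prescribes:
```python
from llama_cpp import Llama

# Downloads the Q4_K_M quant ("fast, recommended" in the table below) from this repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwenstein2.5-32B-Instruct-GGUF",
    filename="Qwenstein2.5-32B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of quantization."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```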
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwenstein2.5-32B-Instruct-GGUF/resolve/main/Qwenstein2.5-32B-Instruct.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso04/b2a9636b-31a5-43a5-9f52-f4d644fd6de6
|
lesso04
| 2025-01-12T19:32:56Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:07:56Z
|
---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2a9636b-31a5-43a5-9f52-f4d644fd6de6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 597f64d3ad401cba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/597f64d3ad401cba_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/b2a9636b-31a5-43a5-9f52-f4d644fd6de6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/597f64d3ad401cba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b050f9b1-cf69-4630-ae14-4b41180a7aa7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b050f9b1-cf69-4630-ae14-4b41180a7aa7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b2a9636b-31a5-43a5-9f52-f4d644fd6de6
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.881 | 0.0003 | 1 | 1.9236 |
| 1.9244 | 0.0017 | 5 | 1.8103 |
| 1.2319 | 0.0034 | 10 | 1.1934 |
| 0.9751 | 0.0052 | 15 | 0.9587 |
| 0.9049 | 0.0069 | 20 | 0.8750 |
| 0.8734 | 0.0086 | 25 | 0.8535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
laquythang/c6fa6f8d-fa00-4edd-9b4c-5f9f10c7362c
|
laquythang
| 2025-01-12T19:32:06Z
| 13
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:adapter:lmsys/vicuna-7b-v1.3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T18:40:21Z
|
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c6fa6f8d-fa00-4edd-9b4c-5f9f10c7362c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c3f29cc94841d3ff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c3f29cc94841d3ff_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/c6fa6f8d-fa00-4edd-9b4c-5f9f10c7362c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c3f29cc94841d3ff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c6fa6f8d-fa00-4edd-9b4c-5f9f10c7362c
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9448 | 0.0153 | 200 | 0.9257 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cunghoctienganh/46937069-1569-4e83-a475-ab349fea3b45
|
cunghoctienganh
| 2025-01-12T19:31:31Z
| 15
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:adapter:lmsys/vicuna-7b-v1.3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T18:39:49Z
|
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46937069-1569-4e83-a475-ab349fea3b45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c3f29cc94841d3ff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c3f29cc94841d3ff_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/46937069-1569-4e83-a475-ab349fea3b45
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c3f29cc94841d3ff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 98d503ad-cb5d-4e0c-9f8c-67ed3226c6ee
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 46937069-1569-4e83-a475-ab349fea3b45
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9458 | 0.0153 | 200 | 0.9254 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
adammandic87/6f982321-b0d0-4bfe-a17a-bcb77ad53fc7
|
adammandic87
| 2025-01-12T19:31:00Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-12T19:30:39Z
|
---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6f982321-b0d0-4bfe-a17a-bcb77ad53fc7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1fa7e81da1420fca_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fa7e81da1420fca_train_data.json
type:
field_input: "\uB2F5\uBCC0"
field_instruction: "\uC81C\uBAA9"
field_output: "\uC9C8\uBB38"
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/6f982321-b0d0-4bfe-a17a-bcb77ad53fc7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/1fa7e81da1420fca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db75554a-637d-46d9-a6c4-15d5e4dc4e7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: db75554a-637d-46d9-a6c4-15d5e4dc4e7d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6f982321-b0d0-4bfe-a17a-bcb77ad53fc7
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7582 | 0.0037 | 1 | 11.7559 |
| 11.7552 | 0.0112 | 3 | 11.7559 |
| 11.7532 | 0.0224 | 6 | 11.7557 |
| 11.7538 | 0.0336 | 9 | 11.7555 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhoxinh/eb3d52f5-52c6-4697-8d58-8b34c827634a
|
nhoxinh
| 2025-01-12T19:28:57Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:07:52Z
|
---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb3d52f5-52c6-4697-8d58-8b34c827634a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 597f64d3ad401cba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/597f64d3ad401cba_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/eb3d52f5-52c6-4697-8d58-8b34c827634a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/597f64d3ad401cba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b050f9b1-cf69-4630-ae14-4b41180a7aa7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b050f9b1-cf69-4630-ae14-4b41180a7aa7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eb3d52f5-52c6-4697-8d58-8b34c827634a
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4794 | 0.0687 | 200 | 0.4965 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/07c8c42d-ee47-4a56-bd54-e146c6500ad1
|
nhung01
| 2025-01-12T19:28:46Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T19:07:49Z
|
---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07c8c42d-ee47-4a56-bd54-e146c6500ad1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 597f64d3ad401cba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/597f64d3ad401cba_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/07c8c42d-ee47-4a56-bd54-e146c6500ad1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/597f64d3ad401cba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b050f9b1-cf69-4630-ae14-4b41180a7aa7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b050f9b1-cf69-4630-ae14-4b41180a7aa7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 07c8c42d-ee47-4a56-bd54-e146c6500ad1
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4758 | 0.0687 | 200 | 0.4961 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ReasoningMila/ver_gen_partial_ft_model_meta-llama_Llama-32-1B_checkpoint-5634
|
ReasoningMila
| 2025-01-12T19:25:26Z
| 10
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T19:23:29Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
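A minimal sketch, assuming the standard 🤗 Transformers text-generation pipeline works for this checkpoint (the card itself leaves this section blank):
```python
# Sketch: generate text with this checkpoint via the transformers pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ReasoningMila/ver_gen_partial_ft_model_meta-llama_Llama-32-1B_checkpoint-5634",
)
print(generator("Prove that the sum of two even numbers is even.", max_new_tokens=64)[0]["generated_text"])
```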
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF
|
Triangle104
| 2025-01-12T19:23:51Z
| 31
| 0
| null |
[
"gguf",
"axolotl",
"dpo",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:HumanLLMs/Human-Like-DPO-Dataset",
"base_model:HumanLLMs/Human-Like-Qwen2.5-7B-Instruct",
"base_model:quantized:HumanLLMs/Human-Like-Qwen2.5-7B-Instruct",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-12T19:21:56Z
|
---
license: apache-2.0
tags:
- axolotl
- dpo
- trl
- llama-cpp
- gguf-my-repo
base_model: HumanLLMs/Human-Like-Qwen2.5-7B-Instruct
datasets:
- HumanLLMs/Human-Like-DPO-Dataset
language:
- en
model-index:
- name: Humanish-Qwen2.5-7B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.84
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.42
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.76
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Qwen2.5-7B-Instruct
name: Open LLM Leaderboard
---
# Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`HumanLLMs/Human-Like-Qwen2.5-7B-Instruct`](https://huggingface.co/HumanLLMs/Human-Like-Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HumanLLMs/Human-Like-Qwen2.5-7B-Instruct) for more details on the model.
---
Model details:
-
This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct, specifically optimized to generate more human-like and conversational responses.
The fine-tuning process employed both Low-Rank Adaptation (LoRA) and Direct Preference Optimization (DPO) to enhance natural language understanding, conversational coherence, and emotional intelligence in interactions.
The process of creating these models is detailed in the research paper “Enhancing Human-Like Responses in Large Language Models”.
🛠️ Training Configuration
Base Model: Qwen2.5-7B-Instruct
Framework: Axolotl v0.4.1
Hardware: 2x NVIDIA A100 (80 GB) GPUs
Training Time: ~2 hours 15 minutes
Dataset: Synthetic dataset with ≈11,000 samples across 256 diverse topics
See axolotl config
axolotl version: 0.4.1
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: true
load_in_4bit: false
strict: false
chat_template: chatml
rl: dpo
datasets:
- path: HumanLLMs/humanish-dpo-project
type: chatml.prompt_pairs
chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./humanish-qwen2.5-7b-instruct
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 8
lora_alpha: 4
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: Humanish-DPO
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: HumanLLMs/Humanish-Qwen2.5-7B-Instruct
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
save_safetensors: true
```
💬 Prompt Template
You can use the ChatML prompt template when using the model:
ChatML
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
This prompt template is available as a chat template, which means you can format messages using the tokenizer.apply_chat_template() method:
```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF --hf-file human-like-qwen2.5-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF --hf-file human-like-qwen2.5-7b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF --hf-file human-like-qwen2.5-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF --hf-file human-like-qwen2.5-7b-instruct-q5_k_s.gguf -c 2048
```
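Alternatively, a hedged sketch (not covered in the original card) that loads the same GGUF file through the llama-cpp-python bindings:
```python
# Sketch: load the GGUF quant directly from the Hub with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/Human-Like-Qwen2.5-7B-Instruct-Q5_K_S-GGUF",
    filename="human-like-qwen2.5-7b-instruct-q5_k_s.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}]
)
print(out["choices"][0]["message"]["content"])
```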
|
outlookAi/nAXELZbqSM
|
outlookAi
| 2025-01-12T19:20:24Z
| 12
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-12T18:49:06Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SquidGame
---
# Naxelzbqsm
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SquidGame` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/nAXELZbqSM', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
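As a hedged follow-up (not in the original card), the trigger word from the section above can simply be placed in the prompt:
```py
# Assumes the `pipeline` object created in the snippet above; `SquidGame` is the trigger word.
image = pipeline('SquidGame, cinematic portrait, dramatic lighting').images[0]
image.save('squidgame_lora.png')
```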
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k10_task5_organization
|
MayBashendy
| 2025-01-12T19:20:13Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T19:12:56Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k10_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k10_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5311
- Qwk: 0.5959
- Mse: 0.5311
- Rmse: 0.7287
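(Here the Rmse is simply the square root of the Mse, e.g. √0.5311 ≈ 0.7287.)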
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0741 | 2 | 4.0880 | 0.0024 | 4.0880 | 2.0219 |
| No log | 0.1481 | 4 | 2.3282 | 0.0541 | 2.3282 | 1.5258 |
| No log | 0.2222 | 6 | 2.0560 | -0.0450 | 2.0560 | 1.4339 |
| No log | 0.2963 | 8 | 1.4911 | 0.0294 | 1.4911 | 1.2211 |
| No log | 0.3704 | 10 | 1.1092 | 0.3003 | 1.1092 | 1.0532 |
| No log | 0.4444 | 12 | 1.0452 | 0.3625 | 1.0452 | 1.0223 |
| No log | 0.5185 | 14 | 1.0255 | 0.3521 | 1.0255 | 1.0127 |
| No log | 0.5926 | 16 | 1.0632 | 0.1764 | 1.0632 | 1.0311 |
| No log | 0.6667 | 18 | 1.1130 | 0.1764 | 1.1130 | 1.0550 |
| No log | 0.7407 | 20 | 1.0794 | 0.2981 | 1.0794 | 1.0389 |
| No log | 0.8148 | 22 | 1.0039 | 0.2108 | 1.0039 | 1.0019 |
| No log | 0.8889 | 24 | 1.0397 | 0.1516 | 1.0397 | 1.0197 |
| No log | 0.9630 | 26 | 1.0547 | 0.1137 | 1.0547 | 1.0270 |
| No log | 1.0370 | 28 | 1.2706 | 0.0814 | 1.2706 | 1.1272 |
| No log | 1.1111 | 30 | 1.3902 | 0.1487 | 1.3902 | 1.1791 |
| No log | 1.1852 | 32 | 1.0989 | 0.2441 | 1.0989 | 1.0483 |
| No log | 1.2593 | 34 | 1.0044 | 0.2265 | 1.0044 | 1.0022 |
| No log | 1.3333 | 36 | 1.1569 | 0.2293 | 1.1569 | 1.0756 |
| No log | 1.4074 | 38 | 1.3568 | -0.0296 | 1.3568 | 1.1648 |
| No log | 1.4815 | 40 | 1.4769 | -0.0148 | 1.4769 | 1.2153 |
| No log | 1.5556 | 42 | 1.3959 | -0.0148 | 1.3959 | 1.1815 |
| No log | 1.6296 | 44 | 1.3592 | 0.0 | 1.3592 | 1.1658 |
| No log | 1.7037 | 46 | 1.1966 | 0.1024 | 1.1966 | 1.0939 |
| No log | 1.7778 | 48 | 1.0175 | 0.3003 | 1.0175 | 1.0087 |
| No log | 1.8519 | 50 | 0.9568 | 0.2566 | 0.9568 | 0.9782 |
| No log | 1.9259 | 52 | 0.9354 | 0.2849 | 0.9354 | 0.9672 |
| No log | 2.0 | 54 | 0.9409 | 0.1389 | 0.9409 | 0.9700 |
| No log | 2.0741 | 56 | 0.9543 | 0.1601 | 0.9543 | 0.9769 |
| No log | 2.1481 | 58 | 0.9327 | 0.2818 | 0.9327 | 0.9658 |
| No log | 2.2222 | 60 | 0.9016 | 0.4402 | 0.9016 | 0.9495 |
| No log | 2.2963 | 62 | 0.8892 | 0.4312 | 0.8892 | 0.9430 |
| No log | 2.3704 | 64 | 0.8344 | 0.4022 | 0.8344 | 0.9135 |
| No log | 2.4444 | 66 | 0.8468 | 0.3288 | 0.8468 | 0.9202 |
| No log | 2.5185 | 68 | 0.9091 | 0.2262 | 0.9091 | 0.9535 |
| No log | 2.5926 | 70 | 0.9734 | 0.1998 | 0.9734 | 0.9866 |
| No log | 2.6667 | 72 | 0.9566 | 0.1799 | 0.9566 | 0.9781 |
| No log | 2.7407 | 74 | 0.8942 | 0.3094 | 0.8942 | 0.9456 |
| No log | 2.8148 | 76 | 0.8805 | 0.4275 | 0.8805 | 0.9384 |
| No log | 2.8889 | 78 | 0.8467 | 0.4710 | 0.8467 | 0.9201 |
| No log | 2.9630 | 80 | 0.8043 | 0.4727 | 0.8043 | 0.8968 |
| No log | 3.0370 | 82 | 0.7295 | 0.4932 | 0.7295 | 0.8541 |
| No log | 3.1111 | 84 | 0.7177 | 0.5146 | 0.7177 | 0.8472 |
| No log | 3.1852 | 86 | 0.8133 | 0.3844 | 0.8133 | 0.9019 |
| No log | 3.2593 | 88 | 0.8868 | 0.4004 | 0.8868 | 0.9417 |
| No log | 3.3333 | 90 | 0.9298 | 0.2960 | 0.9298 | 0.9642 |
| No log | 3.4074 | 92 | 0.9505 | 0.3283 | 0.9505 | 0.9749 |
| No log | 3.4815 | 94 | 0.7993 | 0.4650 | 0.7993 | 0.8941 |
| No log | 3.5556 | 96 | 0.7235 | 0.5403 | 0.7235 | 0.8506 |
| No log | 3.6296 | 98 | 0.7522 | 0.5435 | 0.7522 | 0.8673 |
| No log | 3.7037 | 100 | 0.7399 | 0.5994 | 0.7399 | 0.8602 |
| No log | 3.7778 | 102 | 0.7164 | 0.6079 | 0.7164 | 0.8464 |
| No log | 3.8519 | 104 | 0.6414 | 0.6209 | 0.6414 | 0.8009 |
| No log | 3.9259 | 106 | 0.5822 | 0.6252 | 0.5822 | 0.7630 |
| No log | 4.0 | 108 | 0.5971 | 0.6032 | 0.5971 | 0.7727 |
| No log | 4.0741 | 110 | 0.7344 | 0.5916 | 0.7344 | 0.8570 |
| No log | 4.1481 | 112 | 0.8790 | 0.4681 | 0.8790 | 0.9375 |
| No log | 4.2222 | 114 | 0.8021 | 0.4902 | 0.8021 | 0.8956 |
| No log | 4.2963 | 116 | 0.6624 | 0.5923 | 0.6624 | 0.8139 |
| No log | 4.3704 | 118 | 0.5633 | 0.7049 | 0.5633 | 0.7505 |
| No log | 4.4444 | 120 | 0.5428 | 0.7018 | 0.5428 | 0.7367 |
| No log | 4.5185 | 122 | 0.5315 | 0.6931 | 0.5315 | 0.7291 |
| No log | 4.5926 | 124 | 0.5686 | 0.6324 | 0.5686 | 0.7540 |
| No log | 4.6667 | 126 | 0.5711 | 0.5840 | 0.5711 | 0.7557 |
| No log | 4.7407 | 128 | 0.5483 | 0.6301 | 0.5483 | 0.7404 |
| No log | 4.8148 | 130 | 0.5646 | 0.6634 | 0.5646 | 0.7514 |
| No log | 4.8889 | 132 | 0.5655 | 0.6419 | 0.5655 | 0.7520 |
| No log | 4.9630 | 134 | 0.5288 | 0.6324 | 0.5288 | 0.7272 |
| No log | 5.0370 | 136 | 0.7335 | 0.6539 | 0.7335 | 0.8565 |
| No log | 5.1111 | 138 | 0.7308 | 0.6539 | 0.7308 | 0.8549 |
| No log | 5.1852 | 140 | 0.5509 | 0.6324 | 0.5509 | 0.7422 |
| No log | 5.2593 | 142 | 0.6552 | 0.6080 | 0.6552 | 0.8095 |
| No log | 5.3333 | 144 | 0.6951 | 0.6275 | 0.6951 | 0.8337 |
| No log | 5.4074 | 146 | 0.5921 | 0.6215 | 0.5921 | 0.7695 |
| No log | 5.4815 | 148 | 0.6195 | 0.6314 | 0.6195 | 0.7871 |
| No log | 5.5556 | 150 | 0.6243 | 0.6700 | 0.6243 | 0.7902 |
| No log | 5.6296 | 152 | 0.5638 | 0.6796 | 0.5638 | 0.7509 |
| No log | 5.7037 | 154 | 0.5513 | 0.6690 | 0.5513 | 0.7425 |
| No log | 5.7778 | 156 | 0.5388 | 0.6164 | 0.5388 | 0.7341 |
| No log | 5.8519 | 158 | 0.5407 | 0.6455 | 0.5407 | 0.7353 |
| No log | 5.9259 | 160 | 0.6336 | 0.6160 | 0.6336 | 0.7960 |
| No log | 6.0 | 162 | 0.6444 | 0.5867 | 0.6444 | 0.8028 |
| No log | 6.0741 | 164 | 0.5529 | 0.6584 | 0.5529 | 0.7436 |
| No log | 6.1481 | 166 | 0.5504 | 0.5679 | 0.5504 | 0.7419 |
| No log | 6.2222 | 168 | 0.5494 | 0.5549 | 0.5494 | 0.7412 |
| No log | 6.2963 | 170 | 0.5432 | 0.5972 | 0.5432 | 0.7370 |
| No log | 6.3704 | 172 | 0.5579 | 0.6688 | 0.5579 | 0.7469 |
| No log | 6.4444 | 174 | 0.5326 | 0.6445 | 0.5326 | 0.7298 |
| No log | 6.5185 | 176 | 0.5077 | 0.6363 | 0.5077 | 0.7126 |
| No log | 6.5926 | 178 | 0.4858 | 0.6897 | 0.4858 | 0.6970 |
| No log | 6.6667 | 180 | 0.4975 | 0.6479 | 0.4975 | 0.7053 |
| No log | 6.7407 | 182 | 0.4967 | 0.6833 | 0.4967 | 0.7047 |
| No log | 6.8148 | 184 | 0.5164 | 0.6822 | 0.5164 | 0.7186 |
| No log | 6.8889 | 186 | 0.5533 | 0.6675 | 0.5533 | 0.7438 |
| No log | 6.9630 | 188 | 0.5325 | 0.6667 | 0.5325 | 0.7297 |
| No log | 7.0370 | 190 | 0.5851 | 0.6128 | 0.5851 | 0.7649 |
| No log | 7.1111 | 192 | 0.6682 | 0.6170 | 0.6682 | 0.8174 |
| No log | 7.1852 | 194 | 0.6646 | 0.5756 | 0.6646 | 0.8152 |
| No log | 7.2593 | 196 | 0.6077 | 0.6396 | 0.6077 | 0.7795 |
| No log | 7.3333 | 198 | 0.5901 | 0.6296 | 0.5901 | 0.7682 |
| No log | 7.4074 | 200 | 0.6094 | 0.6209 | 0.6094 | 0.7806 |
| No log | 7.4815 | 202 | 0.5836 | 0.5534 | 0.5836 | 0.7639 |
| No log | 7.5556 | 204 | 0.5686 | 0.5534 | 0.5686 | 0.7541 |
| No log | 7.6296 | 206 | 0.5745 | 0.5607 | 0.5745 | 0.7580 |
| No log | 7.7037 | 208 | 0.5442 | 0.6157 | 0.5442 | 0.7377 |
| No log | 7.7778 | 210 | 0.5340 | 0.6756 | 0.5340 | 0.7307 |
| No log | 7.8519 | 212 | 0.5313 | 0.6756 | 0.5313 | 0.7289 |
| No log | 7.9259 | 214 | 0.5340 | 0.6936 | 0.5340 | 0.7307 |
| No log | 8.0 | 216 | 0.6477 | 0.5938 | 0.6477 | 0.8048 |
| No log | 8.0741 | 218 | 0.7247 | 0.5905 | 0.7247 | 0.8513 |
| No log | 8.1481 | 220 | 0.6574 | 0.6209 | 0.6574 | 0.8108 |
| No log | 8.2222 | 222 | 0.5352 | 0.6528 | 0.5352 | 0.7315 |
| No log | 8.2963 | 224 | 0.5347 | 0.7175 | 0.5347 | 0.7313 |
| No log | 8.3704 | 226 | 0.5377 | 0.6572 | 0.5377 | 0.7333 |
| No log | 8.4444 | 228 | 0.5858 | 0.6227 | 0.5858 | 0.7654 |
| No log | 8.5185 | 230 | 0.6664 | 0.5745 | 0.6664 | 0.8163 |
| No log | 8.5926 | 232 | 0.5966 | 0.6455 | 0.5966 | 0.7724 |
| No log | 8.6667 | 234 | 0.5653 | 0.6974 | 0.5653 | 0.7518 |
| No log | 8.7407 | 236 | 0.5805 | 0.6010 | 0.5805 | 0.7619 |
| No log | 8.8148 | 238 | 0.5363 | 0.6833 | 0.5363 | 0.7323 |
| No log | 8.8889 | 240 | 0.6265 | 0.5318 | 0.6265 | 0.7915 |
| No log | 8.9630 | 242 | 0.6642 | 0.5589 | 0.6642 | 0.8150 |
| No log | 9.0370 | 244 | 0.5817 | 0.6751 | 0.5817 | 0.7627 |
| No log | 9.1111 | 246 | 0.5652 | 0.6814 | 0.5652 | 0.7518 |
| No log | 9.1852 | 248 | 0.5895 | 0.6865 | 0.5895 | 0.7678 |
| No log | 9.2593 | 250 | 0.6564 | 0.5414 | 0.6564 | 0.8102 |
| No log | 9.3333 | 252 | 0.6371 | 0.5777 | 0.6371 | 0.7982 |
| No log | 9.4074 | 254 | 0.5711 | 0.6003 | 0.5711 | 0.7557 |
| No log | 9.4815 | 256 | 0.5697 | 0.6157 | 0.5697 | 0.7548 |
| No log | 9.5556 | 258 | 0.5762 | 0.6445 | 0.5762 | 0.7591 |
| No log | 9.6296 | 260 | 0.5783 | 0.6410 | 0.5783 | 0.7604 |
| No log | 9.7037 | 262 | 0.5325 | 0.6310 | 0.5325 | 0.7297 |
| No log | 9.7778 | 264 | 0.4879 | 0.6602 | 0.4879 | 0.6985 |
| No log | 9.8519 | 266 | 0.4888 | 0.6736 | 0.4888 | 0.6991 |
| No log | 9.9259 | 268 | 0.5136 | 0.6639 | 0.5136 | 0.7166 |
| No log | 10.0 | 270 | 0.5441 | 0.6841 | 0.5441 | 0.7376 |
| No log | 10.0741 | 272 | 0.5635 | 0.6731 | 0.5635 | 0.7507 |
| No log | 10.1481 | 274 | 0.5521 | 0.6950 | 0.5521 | 0.7430 |
| No log | 10.2222 | 276 | 0.4993 | 0.7338 | 0.4993 | 0.7066 |
| No log | 10.2963 | 278 | 0.5022 | 0.7141 | 0.5022 | 0.7087 |
| No log | 10.3704 | 280 | 0.5143 | 0.6838 | 0.5143 | 0.7172 |
| No log | 10.4444 | 282 | 0.5331 | 0.7444 | 0.5331 | 0.7302 |
| No log | 10.5185 | 284 | 0.5707 | 0.6748 | 0.5707 | 0.7554 |
| No log | 10.5926 | 286 | 0.6048 | 0.6558 | 0.6048 | 0.7777 |
| No log | 10.6667 | 288 | 0.5719 | 0.6231 | 0.5719 | 0.7563 |
| No log | 10.7407 | 290 | 0.5712 | 0.6231 | 0.5712 | 0.7558 |
| No log | 10.8148 | 292 | 0.5789 | 0.6422 | 0.5789 | 0.7609 |
| No log | 10.8889 | 294 | 0.5946 | 0.6455 | 0.5946 | 0.7711 |
| No log | 10.9630 | 296 | 0.5765 | 0.6639 | 0.5765 | 0.7593 |
| No log | 11.0370 | 298 | 0.5718 | 0.6584 | 0.5718 | 0.7562 |
| No log | 11.1111 | 300 | 0.5694 | 0.6584 | 0.5694 | 0.7546 |
| No log | 11.1852 | 302 | 0.5607 | 0.6330 | 0.5607 | 0.7488 |
| No log | 11.2593 | 304 | 0.5809 | 0.6227 | 0.5809 | 0.7622 |
| No log | 11.3333 | 306 | 0.6187 | 0.6544 | 0.6187 | 0.7866 |
| No log | 11.4074 | 308 | 0.6037 | 0.6215 | 0.6037 | 0.7770 |
| No log | 11.4815 | 310 | 0.6234 | 0.6179 | 0.6234 | 0.7896 |
| No log | 11.5556 | 312 | 0.6970 | 0.5443 | 0.6970 | 0.8349 |
| No log | 11.6296 | 314 | 0.6613 | 0.5788 | 0.6613 | 0.8132 |
| No log | 11.7037 | 316 | 0.5711 | 0.6387 | 0.5711 | 0.7557 |
| No log | 11.7778 | 318 | 0.5437 | 0.6942 | 0.5437 | 0.7373 |
| No log | 11.8519 | 320 | 0.5270 | 0.6796 | 0.5270 | 0.7259 |
| No log | 11.9259 | 322 | 0.5476 | 0.6404 | 0.5476 | 0.7400 |
| No log | 12.0 | 324 | 0.5625 | 0.6573 | 0.5625 | 0.7500 |
| No log | 12.0741 | 326 | 0.5356 | 0.6581 | 0.5356 | 0.7318 |
| No log | 12.1481 | 328 | 0.4956 | 0.7095 | 0.4956 | 0.7040 |
| No log | 12.2222 | 330 | 0.4938 | 0.7132 | 0.4938 | 0.7027 |
| No log | 12.2963 | 332 | 0.5227 | 0.6841 | 0.5227 | 0.7230 |
| No log | 12.3704 | 334 | 0.5512 | 0.6500 | 0.5512 | 0.7425 |
| No log | 12.4444 | 336 | 0.5774 | 0.6670 | 0.5774 | 0.7599 |
| No log | 12.5185 | 338 | 0.5477 | 0.6623 | 0.5477 | 0.7400 |
| No log | 12.5926 | 340 | 0.5283 | 0.6690 | 0.5283 | 0.7268 |
| No log | 12.6667 | 342 | 0.5300 | 0.6805 | 0.5300 | 0.7280 |
| No log | 12.7407 | 344 | 0.5051 | 0.6519 | 0.5051 | 0.7107 |
| No log | 12.8148 | 346 | 0.5502 | 0.6914 | 0.5502 | 0.7418 |
| No log | 12.8889 | 348 | 0.5823 | 0.6521 | 0.5823 | 0.7631 |
| No log | 12.9630 | 350 | 0.5666 | 0.6735 | 0.5666 | 0.7527 |
| No log | 13.0370 | 352 | 0.5225 | 0.7005 | 0.5225 | 0.7228 |
| No log | 13.1111 | 354 | 0.5225 | 0.6813 | 0.5225 | 0.7228 |
| No log | 13.1852 | 356 | 0.5474 | 0.6732 | 0.5474 | 0.7399 |
| No log | 13.2593 | 358 | 0.6483 | 0.6099 | 0.6483 | 0.8052 |
| No log | 13.3333 | 360 | 0.7749 | 0.5408 | 0.7749 | 0.8803 |
| No log | 13.4074 | 362 | 0.7527 | 0.5111 | 0.7527 | 0.8676 |
| No log | 13.4815 | 364 | 0.6315 | 0.6637 | 0.6315 | 0.7946 |
| No log | 13.5556 | 366 | 0.5859 | 0.6032 | 0.5859 | 0.7654 |
| No log | 13.6296 | 368 | 0.5913 | 0.6161 | 0.5913 | 0.7690 |
| No log | 13.7037 | 370 | 0.5856 | 0.6435 | 0.5856 | 0.7652 |
| No log | 13.7778 | 372 | 0.5938 | 0.6655 | 0.5938 | 0.7706 |
| No log | 13.8519 | 374 | 0.6597 | 0.5555 | 0.6597 | 0.8122 |
| No log | 13.9259 | 376 | 0.6756 | 0.5745 | 0.6756 | 0.8220 |
| No log | 14.0 | 378 | 0.5992 | 0.5677 | 0.5992 | 0.7741 |
| No log | 14.0741 | 380 | 0.5271 | 0.6857 | 0.5271 | 0.7260 |
| No log | 14.1481 | 382 | 0.5575 | 0.6775 | 0.5575 | 0.7466 |
| No log | 14.2222 | 384 | 0.5765 | 0.6569 | 0.5765 | 0.7592 |
| No log | 14.2963 | 386 | 0.5664 | 0.6209 | 0.5664 | 0.7526 |
| No log | 14.3704 | 388 | 0.6267 | 0.5356 | 0.6267 | 0.7917 |
| No log | 14.4444 | 390 | 0.6765 | 0.5745 | 0.6765 | 0.8225 |
| No log | 14.5185 | 392 | 0.6159 | 0.6015 | 0.6159 | 0.7848 |
| No log | 14.5926 | 394 | 0.5461 | 0.6593 | 0.5461 | 0.7390 |
| No log | 14.6667 | 396 | 0.5308 | 0.6632 | 0.5308 | 0.7286 |
| No log | 14.7407 | 398 | 0.5280 | 0.6528 | 0.5280 | 0.7266 |
| No log | 14.8148 | 400 | 0.5510 | 0.6656 | 0.5510 | 0.7423 |
| No log | 14.8889 | 402 | 0.5783 | 0.6218 | 0.5783 | 0.7605 |
| No log | 14.9630 | 404 | 0.5767 | 0.6137 | 0.5767 | 0.7594 |
| No log | 15.0370 | 406 | 0.5758 | 0.6361 | 0.5758 | 0.7588 |
| No log | 15.1111 | 408 | 0.5616 | 0.6584 | 0.5616 | 0.7494 |
| No log | 15.1852 | 410 | 0.5804 | 0.6473 | 0.5804 | 0.7618 |
| No log | 15.2593 | 412 | 0.5608 | 0.6584 | 0.5608 | 0.7488 |
| No log | 15.3333 | 414 | 0.5657 | 0.6584 | 0.5657 | 0.7521 |
| No log | 15.4074 | 416 | 0.5641 | 0.6695 | 0.5641 | 0.7511 |
| No log | 15.4815 | 418 | 0.5746 | 0.6445 | 0.5746 | 0.7580 |
| No log | 15.5556 | 420 | 0.5863 | 0.6243 | 0.5863 | 0.7657 |
| No log | 15.6296 | 422 | 0.6218 | 0.5654 | 0.6218 | 0.7885 |
| No log | 15.7037 | 424 | 0.6302 | 0.5279 | 0.6302 | 0.7938 |
| No log | 15.7778 | 426 | 0.5712 | 0.6073 | 0.5712 | 0.7558 |
| No log | 15.8519 | 428 | 0.5219 | 0.6488 | 0.5219 | 0.7224 |
| No log | 15.9259 | 430 | 0.5121 | 0.6888 | 0.5121 | 0.7156 |
| No log | 16.0 | 432 | 0.5126 | 0.6888 | 0.5126 | 0.7160 |
| No log | 16.0741 | 434 | 0.5315 | 0.6593 | 0.5315 | 0.7290 |
| No log | 16.1481 | 436 | 0.6147 | 0.5686 | 0.6147 | 0.7840 |
| No log | 16.2222 | 438 | 0.6604 | 0.5447 | 0.6604 | 0.8126 |
| No log | 16.2963 | 440 | 0.6404 | 0.5463 | 0.6404 | 0.8002 |
| No log | 16.3704 | 442 | 0.5848 | 0.5721 | 0.5848 | 0.7647 |
| No log | 16.4444 | 444 | 0.5820 | 0.4764 | 0.5820 | 0.7629 |
| No log | 16.5185 | 446 | 0.5829 | 0.5273 | 0.5829 | 0.7635 |
| No log | 16.5926 | 448 | 0.5910 | 0.5348 | 0.5910 | 0.7688 |
| No log | 16.6667 | 450 | 0.6212 | 0.5540 | 0.6212 | 0.7882 |
| No log | 16.7407 | 452 | 0.6159 | 0.5948 | 0.6159 | 0.7848 |
| No log | 16.8148 | 454 | 0.6009 | 0.6259 | 0.6009 | 0.7752 |
| No log | 16.8889 | 456 | 0.5895 | 0.6147 | 0.5895 | 0.7678 |
| No log | 16.9630 | 458 | 0.5696 | 0.6405 | 0.5696 | 0.7547 |
| No log | 17.0370 | 460 | 0.5753 | 0.5917 | 0.5753 | 0.7585 |
| No log | 17.1111 | 462 | 0.5826 | 0.6078 | 0.5826 | 0.7633 |
| No log | 17.1852 | 464 | 0.5775 | 0.6185 | 0.5775 | 0.7600 |
| No log | 17.2593 | 466 | 0.5566 | 0.5785 | 0.5566 | 0.7461 |
| No log | 17.3333 | 468 | 0.5596 | 0.6575 | 0.5596 | 0.7481 |
| No log | 17.4074 | 470 | 0.5257 | 0.6575 | 0.5257 | 0.7251 |
| No log | 17.4815 | 472 | 0.5139 | 0.6575 | 0.5139 | 0.7169 |
| No log | 17.5556 | 474 | 0.5078 | 0.6581 | 0.5078 | 0.7126 |
| No log | 17.6296 | 476 | 0.4817 | 0.6857 | 0.4817 | 0.6941 |
| No log | 17.7037 | 478 | 0.4822 | 0.7016 | 0.4822 | 0.6944 |
| No log | 17.7778 | 480 | 0.4919 | 0.6832 | 0.4919 | 0.7014 |
| No log | 17.8519 | 482 | 0.5146 | 0.6117 | 0.5146 | 0.7174 |
| No log | 17.9259 | 484 | 0.5686 | 0.6112 | 0.5686 | 0.7540 |
| No log | 18.0 | 486 | 0.5576 | 0.6301 | 0.5576 | 0.7467 |
| No log | 18.0741 | 488 | 0.5225 | 0.6370 | 0.5225 | 0.7228 |
| No log | 18.1481 | 490 | 0.5114 | 0.6380 | 0.5114 | 0.7151 |
| No log | 18.2222 | 492 | 0.4994 | 0.6733 | 0.4994 | 0.7067 |
| No log | 18.2963 | 494 | 0.5018 | 0.6575 | 0.5018 | 0.7084 |
| No log | 18.3704 | 496 | 0.5075 | 0.6712 | 0.5075 | 0.7124 |
| No log | 18.4444 | 498 | 0.4982 | 0.6610 | 0.4982 | 0.7058 |
| 0.3206 | 18.5185 | 500 | 0.4932 | 0.6649 | 0.4932 | 0.7023 |
| 0.3206 | 18.5926 | 502 | 0.5077 | 0.6455 | 0.5077 | 0.7125 |
| 0.3206 | 18.6667 | 504 | 0.4989 | 0.6455 | 0.4989 | 0.7063 |
| 0.3206 | 18.7407 | 506 | 0.4852 | 0.6762 | 0.4852 | 0.6966 |
| 0.3206 | 18.8148 | 508 | 0.4992 | 0.6528 | 0.4992 | 0.7066 |
| 0.3206 | 18.8889 | 510 | 0.5226 | 0.6655 | 0.5226 | 0.7229 |
| 0.3206 | 18.9630 | 512 | 0.5192 | 0.6456 | 0.5192 | 0.7206 |
| 0.3206 | 19.0370 | 514 | 0.5159 | 0.6506 | 0.5159 | 0.7182 |
| 0.3206 | 19.1111 | 516 | 0.4996 | 0.6547 | 0.4996 | 0.7068 |
| 0.3206 | 19.1852 | 518 | 0.4922 | 0.6753 | 0.4922 | 0.7016 |
| 0.3206 | 19.2593 | 520 | 0.5074 | 0.6593 | 0.5074 | 0.7123 |
| 0.3206 | 19.3333 | 522 | 0.5119 | 0.6806 | 0.5119 | 0.7154 |
| 0.3206 | 19.4074 | 524 | 0.5103 | 0.6745 | 0.5103 | 0.7144 |
| 0.3206 | 19.4815 | 526 | 0.5049 | 0.6616 | 0.5049 | 0.7106 |
| 0.3206 | 19.5556 | 528 | 0.5098 | 0.7067 | 0.5098 | 0.7140 |
| 0.3206 | 19.6296 | 530 | 0.5044 | 0.7075 | 0.5044 | 0.7102 |
| 0.3206 | 19.7037 | 532 | 0.5191 | 0.6269 | 0.5191 | 0.7205 |
| 0.3206 | 19.7778 | 534 | 0.5387 | 0.5959 | 0.5387 | 0.7340 |
| 0.3206 | 19.8519 | 536 | 0.5383 | 0.5959 | 0.5383 | 0.7337 |
| 0.3206 | 19.9259 | 538 | 0.5534 | 0.5933 | 0.5534 | 0.7439 |
| 0.3206 | 20.0 | 540 | 0.5311 | 0.5959 | 0.5311 | 0.7287 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
vmpsergio/4d949ec6-8fd9-4b2d-be30-9ab5153a01b6
|
vmpsergio
| 2025-01-12T19:20:04Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-12T15:49:14Z
|
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4d949ec6-8fd9-4b2d-be30-9ab5153a01b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ae620ae66c9aa5f5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae620ae66c9aa5f5_train_data.json
type:
field_input: categories
field_instruction: title
field_output: abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vmpsergio/4d949ec6-8fd9-4b2d-be30-9ab5153a01b6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/ae620ae66c9aa5f5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d0db4ed-a894-4160-9e46-b38612015782
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d0db4ed-a894-4160-9e46-b38612015782
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4d949ec6-8fd9-4b2d-be30-9ab5153a01b6
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 8 | nan |
| 0.0 | 0.0002 | 16 | nan |
| 0.0 | 0.0004 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
chchen/Llama-3.1-8B-Instruct-SAA-900
|
chchen
| 2025-01-12T19:19:13Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-12T19:00:15Z
|
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-SAA-900
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-SAA-900
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_dpo_900 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1515
- Rewards/chosen: -0.0108
- Rewards/rejected: -0.0582
- Rewards/accuracies: 0.8222
- Rewards/margins: 0.0474
- Logps/rejected: -0.5819
- Logps/chosen: -0.1084
- Logits/rejected: -0.4031
- Logits/chosen: -0.3480
- Sft Loss: 0.0132
- Odds Ratio Loss: 1.3828
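(The reported loss appears to equal the SFT loss plus 0.1 × the odds-ratio loss: 0.0132 + 0.1 × 1.3828 ≈ 0.1515, suggesting an ORPO-style objective with a 0.1 weighting.)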
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 1.5773 | 0.9877 | 50 | 1.3696 | -0.1315 | -0.1754 | 0.7667 | 0.0440 | -1.7544 | -1.3147 | -0.4663 | -0.4034 | 0.1831 | 11.8657 |
| 0.2518 | 1.9753 | 100 | 0.2349 | -0.0190 | -0.0732 | 0.8111 | 0.0542 | -0.7321 | -0.1898 | -0.4483 | -0.3781 | 0.0216 | 2.1323 |
| 0.1304 | 2.9630 | 150 | 0.1530 | -0.0109 | -0.0612 | 0.8111 | 0.0502 | -0.6117 | -0.1094 | -0.4032 | -0.3454 | 0.0131 | 1.3988 |
| 0.1129 | 3.9506 | 200 | 0.1515 | -0.0108 | -0.0582 | 0.8222 | 0.0474 | -0.5819 | -0.1084 | -0.4031 | -0.3480 | 0.0132 | 1.3828 |
| 0.1194 | 4.9383 | 250 | 0.1522 | -0.0109 | -0.0642 | 0.8222 | 0.0533 | -0.6417 | -0.1088 | -0.3982 | -0.3417 | 0.0133 | 1.3891 |
| 0.0898 | 5.9259 | 300 | 0.1535 | -0.0110 | -0.0684 | 0.8111 | 0.0574 | -0.6839 | -0.1101 | -0.3960 | -0.3402 | 0.0136 | 1.3989 |
| 0.0928 | 6.9136 | 350 | 0.1572 | -0.0113 | -0.0679 | 0.7889 | 0.0567 | -0.6794 | -0.1125 | -0.3949 | -0.3394 | 0.0140 | 1.4318 |
| 0.0855 | 7.9012 | 400 | 0.1578 | -0.0112 | -0.0722 | 0.8000 | 0.0609 | -0.7215 | -0.1125 | -0.3935 | -0.3375 | 0.0138 | 1.4394 |
| 0.0985 | 8.8889 | 450 | 0.1574 | -0.0112 | -0.0720 | 0.8000 | 0.0608 | -0.7205 | -0.1122 | -0.3934 | -0.3372 | 0.0138 | 1.4358 |
| 0.0859 | 9.8765 | 500 | 0.1582 | -0.0113 | -0.0724 | 0.7889 | 0.0611 | -0.7239 | -0.1129 | -0.3937 | -0.3373 | 0.0140 | 1.4419 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF
|
matrixportal
| 2025-01-12T19:19:08Z
| 15
| 0
|
vllm
|
[
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:quantized:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"region:us",
"conversational"
] | null | 2025-01-12T19:17:57Z
|
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period): Sections 5 (Liability), 6 (Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
tags:
- llama-cpp
- gguf-my-repo
base_model: mistralai/Mistral-Small-Instruct-2409
---
# matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Small-Instruct-2409`](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF --hf-file mistral-small-instruct-2409-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF --hf-file mistral-small-instruct-2409-q4_k_m.gguf -c 2048
```
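Once the server is running, a hedged sketch (not in the original card) of querying its OpenAI-compatible chat endpoint, assuming the default port 8080:
```python
# Sketch: query the running llama-server via its OpenAI-compatible REST endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "The meaning to life and the universe is"}]},
)
print(resp.json()["choices"][0]["message"]["content"])
```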
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF --hf-file mistral-small-instruct-2409-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/Mistral-Small-Instruct-2409-Q4_K_M-GGUF --hf-file mistral-small-instruct-2409-q4_k_m.gguf -c 2048
```
|
krish4950/detr-finetuned-wireharness
|
krish4950
| 2025-01-12T19:18:21Z
| 20
| 0
|
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-01-12T18:26:50Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k9_task5_organization
|
MayBashendy
| 2025-01-12T19:12:33Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T19:05:17Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k9_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k9_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7778
- Qwk: 0.5588
- Mse: 0.7778
- Rmse: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0833 | 2 | 4.1156 | 0.0024 | 4.1156 | 2.0287 |
| No log | 0.1667 | 4 | 1.9847 | 0.0633 | 1.9847 | 1.4088 |
| No log | 0.25 | 6 | 1.2650 | 0.0232 | 1.2650 | 1.1247 |
| No log | 0.3333 | 8 | 1.1427 | 0.1296 | 1.1427 | 1.0690 |
| No log | 0.4167 | 10 | 1.4212 | 0.0273 | 1.4212 | 1.1921 |
| No log | 0.5 | 12 | 1.4855 | 0.1438 | 1.4855 | 1.2188 |
| No log | 0.5833 | 14 | 1.3519 | 0.0170 | 1.3519 | 1.1627 |
| No log | 0.6667 | 16 | 1.3687 | 0.0712 | 1.3687 | 1.1699 |
| No log | 0.75 | 18 | 1.0846 | 0.2539 | 1.0846 | 1.0414 |
| No log | 0.8333 | 20 | 1.0034 | 0.2035 | 1.0034 | 1.0017 |
| No log | 0.9167 | 22 | 1.1764 | 0.0427 | 1.1764 | 1.0846 |
| No log | 1.0 | 24 | 1.6202 | 0.0399 | 1.6202 | 1.2729 |
| No log | 1.0833 | 26 | 1.7089 | 0.0651 | 1.7089 | 1.3073 |
| No log | 1.1667 | 28 | 1.2862 | -0.0296 | 1.2862 | 1.1341 |
| No log | 1.25 | 30 | 1.0896 | 0.2734 | 1.0896 | 1.0438 |
| No log | 1.3333 | 32 | 1.1734 | 0.2150 | 1.1734 | 1.0833 |
| No log | 1.4167 | 34 | 1.1268 | 0.1910 | 1.1268 | 1.0615 |
| No log | 1.5 | 36 | 1.1471 | 0.1910 | 1.1471 | 1.0710 |
| No log | 1.5833 | 38 | 1.2530 | 0.0380 | 1.2530 | 1.1194 |
| No log | 1.6667 | 40 | 1.1814 | 0.1910 | 1.1814 | 1.0869 |
| No log | 1.75 | 42 | 1.1412 | 0.2150 | 1.1412 | 1.0683 |
| No log | 1.8333 | 44 | 1.1151 | 0.2150 | 1.1151 | 1.0560 |
| No log | 1.9167 | 46 | 1.1561 | 0.2295 | 1.1561 | 1.0752 |
| No log | 2.0 | 48 | 1.1455 | 0.2150 | 1.1455 | 1.0703 |
| No log | 2.0833 | 50 | 1.1505 | 0.2150 | 1.1505 | 1.0726 |
| No log | 2.1667 | 52 | 1.0827 | 0.1979 | 1.0827 | 1.0405 |
| No log | 2.25 | 54 | 1.0039 | 0.2416 | 1.0039 | 1.0019 |
| No log | 2.3333 | 56 | 0.9863 | 0.2068 | 0.9863 | 0.9931 |
| No log | 2.4167 | 58 | 1.0020 | 0.2441 | 1.0020 | 1.0010 |
| No log | 2.5 | 60 | 1.1079 | 0.2175 | 1.1079 | 1.0526 |
| No log | 2.5833 | 62 | 1.1474 | 0.2143 | 1.1474 | 1.0712 |
| No log | 2.6667 | 64 | 0.9963 | 0.2781 | 0.9963 | 0.9981 |
| No log | 2.75 | 66 | 0.9530 | 0.2390 | 0.9530 | 0.9762 |
| No log | 2.8333 | 68 | 1.0258 | 0.0445 | 1.0258 | 1.0128 |
| No log | 2.9167 | 70 | 0.9939 | 0.1076 | 0.9939 | 0.9970 |
| No log | 3.0 | 72 | 0.9553 | 0.2912 | 0.9553 | 0.9774 |
| No log | 3.0833 | 74 | 1.0256 | 0.2731 | 1.0256 | 1.0127 |
| No log | 3.1667 | 76 | 1.1163 | 0.2260 | 1.1163 | 1.0566 |
| No log | 3.25 | 78 | 1.0419 | 0.3131 | 1.0419 | 1.0207 |
| No log | 3.3333 | 80 | 0.9537 | 0.3370 | 0.9537 | 0.9766 |
| No log | 3.4167 | 82 | 0.9233 | 0.4438 | 0.9233 | 0.9609 |
| No log | 3.5 | 84 | 0.9231 | 0.4275 | 0.9231 | 0.9608 |
| No log | 3.5833 | 86 | 0.9396 | 0.4365 | 0.9396 | 0.9693 |
| No log | 3.6667 | 88 | 0.9266 | 0.4915 | 0.9266 | 0.9626 |
| No log | 3.75 | 90 | 0.8538 | 0.4769 | 0.8538 | 0.9240 |
| No log | 3.8333 | 92 | 0.7824 | 0.6133 | 0.7824 | 0.8845 |
| No log | 3.9167 | 94 | 0.7449 | 0.5035 | 0.7449 | 0.8631 |
| No log | 4.0 | 96 | 0.7973 | 0.4421 | 0.7973 | 0.8929 |
| No log | 4.0833 | 98 | 1.0362 | 0.3283 | 1.0362 | 1.0180 |
| No log | 4.1667 | 100 | 1.1811 | 0.3001 | 1.1811 | 1.0868 |
| No log | 4.25 | 102 | 1.0545 | 0.3218 | 1.0545 | 1.0269 |
| No log | 4.3333 | 104 | 0.7491 | 0.4949 | 0.7491 | 0.8655 |
| No log | 4.4167 | 106 | 0.6625 | 0.5446 | 0.6625 | 0.8139 |
| No log | 4.5 | 108 | 0.6912 | 0.5329 | 0.6912 | 0.8314 |
| No log | 4.5833 | 110 | 0.7396 | 0.4444 | 0.7396 | 0.8600 |
| No log | 4.6667 | 112 | 0.7369 | 0.5057 | 0.7369 | 0.8585 |
| No log | 4.75 | 114 | 0.7602 | 0.5127 | 0.7602 | 0.8719 |
| No log | 4.8333 | 116 | 0.7781 | 0.4615 | 0.7781 | 0.8821 |
| No log | 4.9167 | 118 | 0.8226 | 0.5065 | 0.8226 | 0.9070 |
| No log | 5.0 | 120 | 0.9131 | 0.4051 | 0.9131 | 0.9556 |
| No log | 5.0833 | 122 | 0.8026 | 0.5079 | 0.8026 | 0.8959 |
| No log | 5.1667 | 124 | 0.7402 | 0.4962 | 0.7402 | 0.8603 |
| No log | 5.25 | 126 | 0.7355 | 0.5512 | 0.7355 | 0.8576 |
| No log | 5.3333 | 128 | 0.8009 | 0.5181 | 0.8009 | 0.8949 |
| No log | 5.4167 | 130 | 0.9722 | 0.4359 | 0.9722 | 0.9860 |
| No log | 5.5 | 132 | 0.8379 | 0.5538 | 0.8379 | 0.9154 |
| No log | 5.5833 | 134 | 0.7056 | 0.5692 | 0.7056 | 0.8400 |
| No log | 5.6667 | 136 | 0.8537 | 0.5019 | 0.8537 | 0.9240 |
| No log | 5.75 | 138 | 0.7697 | 0.4893 | 0.7697 | 0.8773 |
| No log | 5.8333 | 140 | 0.6772 | 0.5949 | 0.6772 | 0.8229 |
| No log | 5.9167 | 142 | 0.7273 | 0.5540 | 0.7273 | 0.8528 |
| No log | 6.0 | 144 | 0.6865 | 0.6043 | 0.6865 | 0.8286 |
| No log | 6.0833 | 146 | 0.6663 | 0.5485 | 0.6663 | 0.8163 |
| No log | 6.1667 | 148 | 0.6526 | 0.5262 | 0.6526 | 0.8078 |
| No log | 6.25 | 150 | 0.6653 | 0.6325 | 0.6653 | 0.8157 |
| No log | 6.3333 | 152 | 0.6915 | 0.6315 | 0.6915 | 0.8316 |
| No log | 6.4167 | 154 | 0.6887 | 0.5980 | 0.6887 | 0.8299 |
| No log | 6.5 | 156 | 0.7030 | 0.5980 | 0.7030 | 0.8385 |
| No log | 6.5833 | 158 | 0.7386 | 0.5869 | 0.7386 | 0.8594 |
| No log | 6.6667 | 160 | 0.7053 | 0.5680 | 0.7053 | 0.8398 |
| No log | 6.75 | 162 | 0.7432 | 0.5759 | 0.7432 | 0.8621 |
| No log | 6.8333 | 164 | 0.7517 | 0.5890 | 0.7517 | 0.8670 |
| No log | 6.9167 | 166 | 0.7268 | 0.5659 | 0.7268 | 0.8525 |
| No log | 7.0 | 168 | 0.7370 | 0.5204 | 0.7370 | 0.8585 |
| No log | 7.0833 | 170 | 0.6638 | 0.6307 | 0.6638 | 0.8147 |
| No log | 7.1667 | 172 | 0.6463 | 0.6762 | 0.6464 | 0.8040 |
| No log | 7.25 | 174 | 0.6661 | 0.5955 | 0.6661 | 0.8162 |
| No log | 7.3333 | 176 | 0.6305 | 0.6610 | 0.6305 | 0.7940 |
| No log | 7.4167 | 178 | 0.7525 | 0.5735 | 0.7525 | 0.8675 |
| No log | 7.5 | 180 | 0.7804 | 0.5443 | 0.7804 | 0.8834 |
| No log | 7.5833 | 182 | 0.6912 | 0.5666 | 0.6912 | 0.8314 |
| No log | 7.6667 | 184 | 0.6456 | 0.6456 | 0.6456 | 0.8035 |
| No log | 7.75 | 186 | 0.6756 | 0.6165 | 0.6756 | 0.8220 |
| No log | 7.8333 | 188 | 0.7471 | 0.5397 | 0.7471 | 0.8643 |
| No log | 7.9167 | 190 | 0.7352 | 0.5410 | 0.7352 | 0.8575 |
| No log | 8.0 | 192 | 0.7067 | 0.6724 | 0.7067 | 0.8407 |
| No log | 8.0833 | 194 | 0.7465 | 0.5774 | 0.7465 | 0.8640 |
| No log | 8.1667 | 196 | 0.8731 | 0.4470 | 0.8731 | 0.9344 |
| No log | 8.25 | 198 | 0.8658 | 0.4588 | 0.8658 | 0.9305 |
| No log | 8.3333 | 200 | 0.8049 | 0.5195 | 0.8049 | 0.8971 |
| No log | 8.4167 | 202 | 0.7887 | 0.5160 | 0.7887 | 0.8881 |
| No log | 8.5 | 204 | 0.8056 | 0.5301 | 0.8056 | 0.8976 |
| No log | 8.5833 | 206 | 0.7984 | 0.5017 | 0.7984 | 0.8935 |
| No log | 8.6667 | 208 | 0.8057 | 0.4375 | 0.8057 | 0.8976 |
| No log | 8.75 | 210 | 0.7880 | 0.4757 | 0.7880 | 0.8877 |
| No log | 8.8333 | 212 | 0.7851 | 0.4757 | 0.7851 | 0.8861 |
| No log | 8.9167 | 214 | 0.7983 | 0.4974 | 0.7983 | 0.8935 |
| No log | 9.0 | 216 | 0.7876 | 0.5261 | 0.7876 | 0.8875 |
| No log | 9.0833 | 218 | 0.7914 | 0.5248 | 0.7914 | 0.8896 |
| No log | 9.1667 | 220 | 0.7937 | 0.5473 | 0.7937 | 0.8909 |
| No log | 9.25 | 222 | 0.7868 | 0.5798 | 0.7868 | 0.8870 |
| No log | 9.3333 | 224 | 0.7797 | 0.5607 | 0.7797 | 0.8830 |
| No log | 9.4167 | 226 | 0.7597 | 0.5540 | 0.7597 | 0.8716 |
| No log | 9.5 | 228 | 0.7408 | 0.5614 | 0.7408 | 0.8607 |
| No log | 9.5833 | 230 | 0.7787 | 0.5425 | 0.7787 | 0.8825 |
| No log | 9.6667 | 232 | 0.7730 | 0.5635 | 0.7730 | 0.8792 |
| No log | 9.75 | 234 | 0.8063 | 0.5370 | 0.8063 | 0.8979 |
| No log | 9.8333 | 236 | 0.8565 | 0.4834 | 0.8565 | 0.9255 |
| No log | 9.9167 | 238 | 0.8620 | 0.4450 | 0.8620 | 0.9284 |
| No log | 10.0 | 240 | 0.8645 | 0.4537 | 0.8645 | 0.9298 |
| No log | 10.0833 | 242 | 0.8889 | 0.4455 | 0.8889 | 0.9428 |
| No log | 10.1667 | 244 | 0.9977 | 0.3781 | 0.9977 | 0.9989 |
| No log | 10.25 | 246 | 0.9224 | 0.4642 | 0.9224 | 0.9604 |
| No log | 10.3333 | 248 | 0.8796 | 0.4636 | 0.8796 | 0.9379 |
| No log | 10.4167 | 250 | 0.9158 | 0.4517 | 0.9158 | 0.9570 |
| No log | 10.5 | 252 | 0.8244 | 0.4871 | 0.8244 | 0.9079 |
| No log | 10.5833 | 254 | 0.8311 | 0.4849 | 0.8311 | 0.9116 |
| No log | 10.6667 | 256 | 0.8233 | 0.5393 | 0.8233 | 0.9074 |
| No log | 10.75 | 258 | 0.8131 | 0.5518 | 0.8131 | 0.9017 |
| No log | 10.8333 | 260 | 0.8746 | 0.4639 | 0.8746 | 0.9352 |
| No log | 10.9167 | 262 | 0.8527 | 0.4954 | 0.8527 | 0.9234 |
| No log | 11.0 | 264 | 0.8344 | 0.5379 | 0.8344 | 0.9135 |
| No log | 11.0833 | 266 | 0.8635 | 0.4963 | 0.8635 | 0.9293 |
| No log | 11.1667 | 268 | 0.8319 | 0.5671 | 0.8319 | 0.9121 |
| No log | 11.25 | 270 | 0.8751 | 0.4440 | 0.8751 | 0.9354 |
| No log | 11.3333 | 272 | 0.9062 | 0.4601 | 0.9062 | 0.9520 |
| No log | 11.4167 | 274 | 0.8486 | 0.5006 | 0.8486 | 0.9212 |
| No log | 11.5 | 276 | 0.7821 | 0.5637 | 0.7821 | 0.8844 |
| No log | 11.5833 | 278 | 0.8129 | 0.5255 | 0.8129 | 0.9016 |
| No log | 11.6667 | 280 | 0.8372 | 0.5358 | 0.8372 | 0.9150 |
| No log | 11.75 | 282 | 0.8156 | 0.5042 | 0.8156 | 0.9031 |
| No log | 11.8333 | 284 | 0.7989 | 0.5167 | 0.7989 | 0.8938 |
| No log | 11.9167 | 286 | 0.7635 | 0.5774 | 0.7635 | 0.8738 |
| No log | 12.0 | 288 | 0.7476 | 0.5751 | 0.7476 | 0.8647 |
| No log | 12.0833 | 290 | 0.7327 | 0.6177 | 0.7327 | 0.8560 |
| No log | 12.1667 | 292 | 0.8021 | 0.5668 | 0.8021 | 0.8956 |
| No log | 12.25 | 294 | 0.7558 | 0.5934 | 0.7558 | 0.8694 |
| No log | 12.3333 | 296 | 0.6879 | 0.5594 | 0.6879 | 0.8294 |
| No log | 12.4167 | 298 | 0.6936 | 0.5647 | 0.6936 | 0.8328 |
| No log | 12.5 | 300 | 0.7173 | 0.5894 | 0.7173 | 0.8469 |
| No log | 12.5833 | 302 | 0.8856 | 0.4970 | 0.8856 | 0.9411 |
| No log | 12.6667 | 304 | 0.9932 | 0.4458 | 0.9932 | 0.9966 |
| No log | 12.75 | 306 | 0.9394 | 0.4359 | 0.9394 | 0.9692 |
| No log | 12.8333 | 308 | 0.8255 | 0.4825 | 0.8255 | 0.9086 |
| No log | 12.9167 | 310 | 0.7724 | 0.5766 | 0.7724 | 0.8789 |
| No log | 13.0 | 312 | 0.8436 | 0.4719 | 0.8436 | 0.9185 |
| No log | 13.0833 | 314 | 0.8301 | 0.4613 | 0.8301 | 0.9111 |
| No log | 13.1667 | 316 | 0.7263 | 0.6008 | 0.7263 | 0.8522 |
| No log | 13.25 | 318 | 0.6973 | 0.5455 | 0.6973 | 0.8350 |
| No log | 13.3333 | 320 | 0.7162 | 0.5894 | 0.7162 | 0.8463 |
| No log | 13.4167 | 322 | 0.7536 | 0.4586 | 0.7536 | 0.8681 |
| No log | 13.5 | 324 | 0.7285 | 0.5093 | 0.7285 | 0.8535 |
| No log | 13.5833 | 326 | 0.7366 | 0.4850 | 0.7366 | 0.8582 |
| No log | 13.6667 | 328 | 0.7166 | 0.5331 | 0.7166 | 0.8465 |
| No log | 13.75 | 330 | 0.7049 | 0.5858 | 0.7049 | 0.8396 |
| No log | 13.8333 | 332 | 0.6818 | 0.5869 | 0.6818 | 0.8257 |
| No log | 13.9167 | 334 | 0.7261 | 0.5766 | 0.7261 | 0.8521 |
| No log | 14.0 | 336 | 0.8149 | 0.5705 | 0.8149 | 0.9027 |
| No log | 14.0833 | 338 | 0.7586 | 0.5788 | 0.7586 | 0.8710 |
| No log | 14.1667 | 340 | 0.7024 | 0.4772 | 0.7024 | 0.8381 |
| No log | 14.25 | 342 | 0.7079 | 0.5135 | 0.7079 | 0.8413 |
| No log | 14.3333 | 344 | 0.7171 | 0.5274 | 0.7171 | 0.8468 |
| No log | 14.4167 | 346 | 0.7125 | 0.4772 | 0.7125 | 0.8441 |
| No log | 14.5 | 348 | 0.7743 | 0.6071 | 0.7743 | 0.8800 |
| No log | 14.5833 | 350 | 0.7630 | 0.5766 | 0.7630 | 0.8735 |
| No log | 14.6667 | 352 | 0.7198 | 0.6048 | 0.7198 | 0.8484 |
| No log | 14.75 | 354 | 0.7876 | 0.5222 | 0.7876 | 0.8875 |
| No log | 14.8333 | 356 | 0.8086 | 0.4686 | 0.8086 | 0.8992 |
| No log | 14.9167 | 358 | 0.7294 | 0.4565 | 0.7294 | 0.8540 |
| No log | 15.0 | 360 | 0.7745 | 0.5602 | 0.7745 | 0.8801 |
| No log | 15.0833 | 362 | 0.7899 | 0.5487 | 0.7899 | 0.8888 |
| No log | 15.1667 | 364 | 0.7196 | 0.5540 | 0.7196 | 0.8483 |
| No log | 15.25 | 366 | 0.6896 | 0.5038 | 0.6896 | 0.8305 |
| No log | 15.3333 | 368 | 0.6799 | 0.5149 | 0.6799 | 0.8246 |
| No log | 15.4167 | 370 | 0.6943 | 0.5821 | 0.6943 | 0.8332 |
| No log | 15.5 | 372 | 0.7752 | 0.5726 | 0.7752 | 0.8805 |
| No log | 15.5833 | 374 | 0.7772 | 0.5106 | 0.7772 | 0.8816 |
| No log | 15.6667 | 376 | 0.7086 | 0.6081 | 0.7086 | 0.8418 |
| No log | 15.75 | 378 | 0.6802 | 0.6091 | 0.6802 | 0.8247 |
| No log | 15.8333 | 380 | 0.6879 | 0.6091 | 0.6879 | 0.8294 |
| No log | 15.9167 | 382 | 0.6506 | 0.6301 | 0.6506 | 0.8066 |
| No log | 16.0 | 384 | 0.6485 | 0.6154 | 0.6485 | 0.8053 |
| No log | 16.0833 | 386 | 0.6613 | 0.6133 | 0.6613 | 0.8132 |
| No log | 16.1667 | 388 | 0.6644 | 0.5590 | 0.6644 | 0.8151 |
| No log | 16.25 | 390 | 0.6562 | 0.5301 | 0.6562 | 0.8101 |
| No log | 16.3333 | 392 | 0.6545 | 0.5202 | 0.6545 | 0.8090 |
| No log | 16.4167 | 394 | 0.6464 | 0.5934 | 0.6464 | 0.8040 |
| No log | 16.5 | 396 | 0.6429 | 0.6716 | 0.6429 | 0.8018 |
| No log | 16.5833 | 398 | 0.6835 | 0.6266 | 0.6835 | 0.8267 |
| No log | 16.6667 | 400 | 0.6597 | 0.5909 | 0.6597 | 0.8122 |
| No log | 16.75 | 402 | 0.6265 | 0.6518 | 0.6265 | 0.7915 |
| No log | 16.8333 | 404 | 0.6342 | 0.6322 | 0.6342 | 0.7964 |
| No log | 16.9167 | 406 | 0.6359 | 0.6165 | 0.6359 | 0.7974 |
| No log | 17.0 | 408 | 0.6215 | 0.6276 | 0.6215 | 0.7883 |
| No log | 17.0833 | 410 | 0.6144 | 0.5894 | 0.6144 | 0.7839 |
| No log | 17.1667 | 412 | 0.6026 | 0.6441 | 0.6026 | 0.7762 |
| No log | 17.25 | 414 | 0.6059 | 0.6441 | 0.6059 | 0.7784 |
| No log | 17.3333 | 416 | 0.6103 | 0.6623 | 0.6103 | 0.7812 |
| No log | 17.4167 | 418 | 0.6229 | 0.6291 | 0.6229 | 0.7892 |
| No log | 17.5 | 420 | 0.6376 | 0.5869 | 0.6376 | 0.7985 |
| No log | 17.5833 | 422 | 0.6365 | 0.5774 | 0.6365 | 0.7978 |
| No log | 17.6667 | 424 | 0.6622 | 0.4822 | 0.6622 | 0.8137 |
| No log | 17.75 | 426 | 0.6652 | 0.4938 | 0.6652 | 0.8156 |
| No log | 17.8333 | 428 | 0.6641 | 0.5174 | 0.6641 | 0.8149 |
| No log | 17.9167 | 430 | 0.6599 | 0.5032 | 0.6599 | 0.8123 |
| No log | 18.0 | 432 | 0.6770 | 0.6209 | 0.6770 | 0.8228 |
| No log | 18.0833 | 434 | 0.6982 | 0.5708 | 0.6982 | 0.8356 |
| No log | 18.1667 | 436 | 0.6796 | 0.5933 | 0.6796 | 0.8244 |
| No log | 18.25 | 438 | 0.6412 | 0.6390 | 0.6412 | 0.8008 |
| No log | 18.3333 | 440 | 0.6426 | 0.6427 | 0.6426 | 0.8017 |
| No log | 18.4167 | 442 | 0.6695 | 0.6073 | 0.6695 | 0.8182 |
| No log | 18.5 | 444 | 0.6943 | 0.6147 | 0.6943 | 0.8333 |
| No log | 18.5833 | 446 | 0.6640 | 0.6133 | 0.6640 | 0.8149 |
| No log | 18.6667 | 448 | 0.6490 | 0.6107 | 0.6490 | 0.8056 |
| No log | 18.75 | 450 | 0.6559 | 0.5441 | 0.6559 | 0.8099 |
| No log | 18.8333 | 452 | 0.6515 | 0.5315 | 0.6515 | 0.8071 |
| No log | 18.9167 | 454 | 0.6428 | 0.6479 | 0.6428 | 0.8017 |
| No log | 19.0 | 456 | 0.6687 | 0.5708 | 0.6687 | 0.8178 |
| No log | 19.0833 | 458 | 0.6558 | 0.6588 | 0.6558 | 0.8098 |
| No log | 19.1667 | 460 | 0.6510 | 0.5057 | 0.6510 | 0.8069 |
| No log | 19.25 | 462 | 0.6695 | 0.5554 | 0.6695 | 0.8182 |
| No log | 19.3333 | 464 | 0.6635 | 0.5403 | 0.6635 | 0.8145 |
| No log | 19.4167 | 466 | 0.6686 | 0.6259 | 0.6686 | 0.8177 |
| No log | 19.5 | 468 | 0.6990 | 0.5875 | 0.6990 | 0.8361 |
| No log | 19.5833 | 470 | 0.6863 | 0.6325 | 0.6863 | 0.8284 |
| No log | 19.6667 | 472 | 0.6838 | 0.5847 | 0.6838 | 0.8269 |
| No log | 19.75 | 474 | 0.6944 | 0.5516 | 0.6944 | 0.8333 |
| No log | 19.8333 | 476 | 0.7085 | 0.5746 | 0.7085 | 0.8417 |
| No log | 19.9167 | 478 | 0.7675 | 0.5729 | 0.7675 | 0.8760 |
| No log | 20.0 | 480 | 0.8332 | 0.5018 | 0.8332 | 0.9128 |
| No log | 20.0833 | 482 | 0.8044 | 0.5436 | 0.8044 | 0.8969 |
| No log | 20.1667 | 484 | 0.7871 | 0.5571 | 0.7871 | 0.8872 |
| No log | 20.25 | 486 | 0.7719 | 0.6118 | 0.7719 | 0.8786 |
| No log | 20.3333 | 488 | 0.7597 | 0.5933 | 0.7597 | 0.8716 |
| No log | 20.4167 | 490 | 0.7517 | 0.5986 | 0.7517 | 0.8670 |
| No log | 20.5 | 492 | 0.7653 | 0.5774 | 0.7653 | 0.8748 |
| No log | 20.5833 | 494 | 0.7571 | 0.5131 | 0.7571 | 0.8701 |
| No log | 20.6667 | 496 | 0.7491 | 0.5260 | 0.7491 | 0.8655 |
| No log | 20.75 | 498 | 0.7648 | 0.5729 | 0.7648 | 0.8745 |
| 0.3451 | 20.8333 | 500 | 0.7563 | 0.5708 | 0.7563 | 0.8697 |
| 0.3451 | 20.9167 | 502 | 0.7387 | 0.5708 | 0.7387 | 0.8595 |
| 0.3451 | 21.0 | 504 | 0.7095 | 0.6147 | 0.7095 | 0.8423 |
| 0.3451 | 21.0833 | 506 | 0.7003 | 0.5587 | 0.7003 | 0.8368 |
| 0.3451 | 21.1667 | 508 | 0.7426 | 0.4977 | 0.7426 | 0.8617 |
| 0.3451 | 21.25 | 510 | 0.7559 | 0.4641 | 0.7559 | 0.8694 |
| 0.3451 | 21.3333 | 512 | 0.7304 | 0.5563 | 0.7304 | 0.8546 |
| 0.3451 | 21.4167 | 514 | 0.7778 | 0.5588 | 0.7778 | 0.8819 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1_Om2G8kOm2AgG8k40kIpsdpT1-b1.0
|
RyanYr
| 2025-01-12T19:11:24Z
| 23
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1",
"base_model:finetune:RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-12T17:45:41Z
|
---
base_model: RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1
library_name: transformers
model_name: reflect_mini8B_MistlrgOrcl460kSftT1_Om2G8kOm2AgG8k40kIpsdpT1-b1.0
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_MistlrgOrcl460kSftT1_Om2G8kOm2AgG8k40kIpsdpT1-b1.0
This model is a fine-tuned version of [RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1](https://huggingface.co/RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1_Om2G8kOm2AgG8k40kIpsdpT1-b1.0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/qhnts2j4)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
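For illustration only (this is not the authors' training script), a minimal DPO fine-tune with TRL on a toy preference dataset could look like the sketch below. The `prompt`/`chosen`/`rejected` columns follow the format `DPOTrainer` expects; the hyperparameters are placeholders, and argument names may differ slightly across TRL versions.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reflect_mini8B_MistlrgOrcl460kSftT1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# toy preference data in the prompt/chosen/rejected format DPOTrainer expects
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 equals 4."],
    "rejected": ["2 + 2 equals 5."],
})

# placeholder hyperparameters; the actual run's settings are not listed on this card
args = DPOConfig(output_dir="dpo-output", per_device_train_batch_size=1, max_steps=1)

trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```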
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dzanbek/48aaa683-5c1a-43fc-8de7-96a7a901c247
|
dzanbek
| 2025-01-12T19:11:02Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-12T19:10:37Z
|
---
library_name: peft
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48aaa683-5c1a-43fc-8de7-96a7a901c247
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 120e2b58d59a1b2e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/120e2b58d59a1b2e_train_data.json
type:
field_input: original_code
field_instruction: update_snippet
field_output: final_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/48aaa683-5c1a-43fc-8de7-96a7a901c247
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/120e2b58d59a1b2e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 562f173b-b07d-4eb4-a59f-d230672ec843
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 562f173b-b07d-4eb4-a59f-d230672ec843
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 48aaa683-5c1a-43fc-8de7-96a7a901c247
This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3716
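Since this repository contains a LoRA adapter rather than full model weights, inference requires attaching the adapter to the base model with PEFT. A minimal sketch, assuming the adapter is loaded as-is on top of the base checkpoint:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceM4/tiny-random-LlamaForCausalLM"
adapter_id = "dzanbek/48aaa683-5c1a-43fc-8de7-96a7a901c247"

# load the base model, then attach the LoRA adapter from this repository
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```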
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 10.3744 |
| 10.3758 | 0.0118 | 8 | 10.3739 |
| 10.3737 | 0.0237 | 16 | 10.3725 |
| 10.3709 | 0.0355 | 24 | 10.3716 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
innov8academy/alex
|
innov8academy
| 2025-01-12T19:10:49Z
| 16
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-12T18:49:45Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Alex
---
# Alex
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Alex` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('innov8academy/alex', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
Davlan/afro-xlmr-large-76L
|
Davlan
| 2025-01-12T19:10:31Z
| 721
| 3
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"en",
"am",
"ar",
"so",
"sw",
"pt",
"af",
"fr",
"zu",
"mg",
"ha",
"sn",
"arz",
"ny",
"ig",
"xh",
"yo",
"st",
"rw",
"tn",
"ti",
"ts",
"om",
"run",
"nso",
"ee",
"ln",
"tw",
"pcm",
"gaa",
"loz",
"lg",
"guw",
"bem",
"efi",
"lue",
"lua",
"toi",
"ve",
"tum",
"tll",
"iso",
"kqn",
"zne",
"umb",
"mos",
"tiv",
"lu",
"ff",
"kwy",
"bci",
"rnd",
"luo",
"wal",
"ss",
"lun",
"wo",
"nyk",
"kj",
"ki",
"fon",
"bm",
"cjk",
"din",
"dyu",
"kab",
"kam",
"kbp",
"kr",
"kmb",
"kg",
"nus",
"sg",
"taq",
"tzm",
"nqo",
"arxiv:2309.07445",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-18T15:13:51Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-76L
results: []
language:
- en
- am
- ar
- so
- sw
- pt
- af
- fr
- zu
- mg
- ha
- sn
- arz
- ny
- ig
- xh
- yo
- st
- rw
- tn
- ti
- ts
- om
- run
- nso
- ee
- ln
- tw
- pcm
- gaa
- loz
- lg
- guw
- bem
- efi
- lue
- lua
- toi
- ve
- tum
- tll
- iso
- kqn
- zne
- umb
- mos
- tiv
- lu
- ff
- kwy
- bci
- rnd
- luo
- wal
- ss
- lun
- wo
- nyk
- kj
- ki
- fon
- bm
- cjk
- din
- dyu
- kab
- kam
- kbp
- kr
- kmb
- kg
- nus
- sg
- taq
- tzm
- nqo
---
# afro-xlmr-large-76L
AfroXLMR-large-76L was created through masked language model (MLM) adaptation of the expanded XLM-R-large model on 76 languages widely spoken in Africa, including 4 high-resource languages.
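Because AfroXLMR-large-76L is an XLM-R-style masked language model, it can be queried directly with the standard fill-mask pipeline. A minimal sketch (the example sentence is illustrative only):
```python
from transformers import pipeline

# minimal sketch: query the masked language model with the fill-mask pipeline;
# XLM-R-style models use <mask> as the mask token
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large-76L")
print(unmasker("The capital of Nigeria is <mask>."))
```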
### Pre-training corpus
A mix of mC4, Wikipedia and OPUS data
### Languages
There are 76 languages available:
- English (eng)
- Amharic (amh)
- Arabic (ara)
- Somali (som)
- Kiswahili (swa)
- Portuguese (por)
- Afrikaans (afr)
- French (fra)
- isiZulu (zul)
- Malagasy (mlg)
- Hausa (hau)
- chiShona (sna)
- Egyptian Arabic (arz)
- Chichewa (nya)
- Igbo (ibo)
- isiXhosa (xho)
- Yorùbá (yor)
- Sesotho (sot)
- Kinyarwanda (kin)
- Tigrinya (tir)
- Tsonga (tso)
- Oromo (orm)
- Rundi (run)
- Northern Sotho (nso)
- Ewe (ewe)
- Lingala (lin)
- Twi (twi)
- Nigerian Pidgin (pcm)
- Ga (gaa)
- Lozi (loz)
- Luganda (lug)
- Gun (guw)
- Bemba (bem)
- Efik (efi)
- Luvale (lue)
- Luba-Lulua (lua)
- Tonga (toi)
- Tshivenḓa (ven)
- Tumbuka (tum)
- Tetela (tll)
- Isoko (iso)
- Kaonde (kqn)
- Zande (zne)
- Umbundu (umb)
- Mossi (mos)
- Tiv (tiv)
- Luba-Katanga (lub)
- Fula (fuv)
- San Salvador Kongo (kwy)
- Baoulé (bci)
- Ruund (rnd)
- Luo (luo)
- Wolaitta (wal)
- Swazi (ssw)
- Lunda (lun)
- Wolof (wol)
- Nyaneka (nyk)
- Kwanyama (kua)
- Kikuyu (kik)
- Fon (fon)
- Bambara (bam)
- Chokwe (cjk)
- Dinka (dik)
- Dyula (dyu)
- Kabyle (kab)
- Kamba (kam)
- Kabiyè (kbp)
- Kanuri (knc)
- Kimbundu (kmb)
- Kikongo (kon)
- Nuer (nus)
- Sango (sag)
- Tamasheq (taq)
- Tamazight (tzm)
- N'ko (nqo)
### Acknowledgment
We would like to thank Google Cloud for providing us access to a TPU v3-8 through free cloud credits. The model was trained using Flax and then converted to PyTorch.
### BibTeX entry and citation info
```bibtex
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ArjTheHacker/diabetic-retinopathy-detection
|
ArjTheHacker
| 2025-01-12T19:06:53Z
| 5
| 0
| null |
[
"pytorch",
"vision-classification",
"region:us"
] | null | 2025-01-12T19:06:46Z
|
# Diabetic Retinopathy Detection Model
This model is designed to detect and classify diabetic retinopathy from retinal images. It provides both color and black & white image analysis capabilities.
## Model Description
The model comes in two variants:
1. Color image model (`the_full_color_model.pth`)
2. Black & White image model (`the_full_BW_model.pth`)
### Input
- Image size: [Please specify the input image size requirements]
- Format: Both RGB and grayscale images supported
### Output
- Classification of diabetic retinopathy severity
- Confidence scores for each class
## Usage
```python
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("ArjTheHacker/diabetic-retinopathy-detection")
```
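Because the two variants listed above are shipped as full pickled PyTorch checkpoints, they can likely also be loaded directly with `torch`. The snippet below is a sketch under that assumption; it requires the original model class definitions to be importable in your environment.
```python
import torch
from huggingface_hub import hf_hub_download

# download one of the checkpoints named on this card
ckpt_path = hf_hub_download(
    repo_id="ArjTheHacker/diabetic-retinopathy-detection",
    filename="the_full_color_model.pth",  # or "the_full_BW_model.pth"
)

# a fully pickled model is restored with torch.load, provided the defining
# classes are importable; weights_only=False is needed for pickled objects
model = torch.load(ckpt_path, map_location="cpu", weights_only=False)
model.eval()
```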
## Training
The model was originally trained on [specify dataset] and fine-tuned for diabetic retinopathy detection.
## Performance
[Add performance metrics when available]
## Limitations
This model is intended to assist in diabetic retinopathy screening but should not be used as the sole diagnostic tool. Always consult healthcare professionals for medical decisions.
|
nhung03/1854cbe5-4cf0-4910-848f-ff80137befc9
|
nhung03
| 2025-01-12T19:05:00Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-12T18:54:54Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1854cbe5-4cf0-4910-848f-ff80137befc9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 107ffab1dfbb4160_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/107ffab1dfbb4160_train_data.json
type:
field_input: URL
field_instruction: domain
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/1854cbe5-4cf0-4910-848f-ff80137befc9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/107ffab1dfbb4160_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d22ca37-eb44-4813-87aa-fe209ff97a6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0d22ca37-eb44-4813-87aa-fe209ff97a6a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1854cbe5-4cf0-4910-848f-ff80137befc9
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0652 | 0.9313 | 200 | 3.4324 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k8_task5_organization
|
MayBashendy
| 2025-01-12T19:04:53Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-12T18:51:00Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k8_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k8_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5623
- Qwk: 0.6610
- Mse: 0.5623
- Rmse: 0.7499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0909 | 2 | 3.9092 | -0.0323 | 3.9092 | 1.9772 |
| No log | 0.1818 | 4 | 2.2398 | -0.0409 | 2.2398 | 1.4966 |
| No log | 0.2727 | 6 | 1.9043 | -0.0623 | 1.9043 | 1.3800 |
| No log | 0.3636 | 8 | 1.4071 | 0.0143 | 1.4071 | 1.1862 |
| No log | 0.4545 | 10 | 1.1548 | 0.0760 | 1.1548 | 1.0746 |
| No log | 0.5455 | 12 | 1.1534 | 0.0374 | 1.1534 | 1.0739 |
| No log | 0.6364 | 14 | 1.2925 | -0.0963 | 1.2925 | 1.1369 |
| No log | 0.7273 | 16 | 1.3598 | -0.1043 | 1.3598 | 1.1661 |
| No log | 0.8182 | 18 | 1.1437 | 0.2341 | 1.1437 | 1.0695 |
| No log | 0.9091 | 20 | 1.0272 | 0.1799 | 1.0272 | 1.0135 |
| No log | 1.0 | 22 | 1.0297 | 0.1203 | 1.0297 | 1.0147 |
| No log | 1.0909 | 24 | 0.9481 | 0.2161 | 0.9481 | 0.9737 |
| No log | 1.1818 | 26 | 1.0111 | 0.2711 | 1.0111 | 1.0055 |
| No log | 1.2727 | 28 | 1.4042 | 0.1136 | 1.4042 | 1.1850 |
| No log | 1.3636 | 30 | 1.4280 | 0.1136 | 1.4280 | 1.1950 |
| No log | 1.4545 | 32 | 1.2236 | 0.0 | 1.2236 | 1.1062 |
| No log | 1.5455 | 34 | 1.0511 | 0.1764 | 1.0511 | 1.0252 |
| No log | 1.6364 | 36 | 0.9370 | 0.3666 | 0.9370 | 0.9680 |
| No log | 1.7273 | 38 | 0.9329 | 0.375 | 0.9329 | 0.9658 |
| No log | 1.8182 | 40 | 0.9752 | 0.3229 | 0.9752 | 0.9875 |
| No log | 1.9091 | 42 | 0.9844 | 0.3104 | 0.9844 | 0.9922 |
| No log | 2.0 | 44 | 0.8800 | 0.3243 | 0.8800 | 0.9381 |
| No log | 2.0909 | 46 | 0.8903 | 0.2350 | 0.8903 | 0.9435 |
| No log | 2.1818 | 48 | 0.8951 | 0.2796 | 0.8951 | 0.9461 |
| No log | 2.2727 | 50 | 0.8685 | 0.3921 | 0.8685 | 0.9319 |
| No log | 2.3636 | 52 | 0.8208 | 0.4078 | 0.8208 | 0.9060 |
| No log | 2.4545 | 54 | 0.7933 | 0.4557 | 0.7933 | 0.8907 |
| No log | 2.5455 | 56 | 0.8182 | 0.4472 | 0.8182 | 0.9045 |
| No log | 2.6364 | 58 | 0.7379 | 0.4796 | 0.7379 | 0.8590 |
| No log | 2.7273 | 60 | 0.7649 | 0.5715 | 0.7649 | 0.8746 |
| No log | 2.8182 | 62 | 0.8347 | 0.5251 | 0.8347 | 0.9136 |
| No log | 2.9091 | 64 | 0.7545 | 0.5849 | 0.7545 | 0.8686 |
| No log | 3.0 | 66 | 0.7995 | 0.5435 | 0.7995 | 0.8941 |
| No log | 3.0909 | 68 | 0.7304 | 0.5478 | 0.7304 | 0.8546 |
| No log | 3.1818 | 70 | 0.6738 | 0.6429 | 0.6738 | 0.8209 |
| No log | 3.2727 | 72 | 0.7029 | 0.6251 | 0.7029 | 0.8384 |
| No log | 3.3636 | 74 | 0.6662 | 0.6139 | 0.6662 | 0.8162 |
| No log | 3.4545 | 76 | 0.6546 | 0.6377 | 0.6546 | 0.8091 |
| No log | 3.5455 | 78 | 0.6404 | 0.6377 | 0.6404 | 0.8003 |
| No log | 3.6364 | 80 | 0.6202 | 0.6570 | 0.6202 | 0.7875 |
| No log | 3.7273 | 82 | 0.6225 | 0.6835 | 0.6225 | 0.7890 |
| No log | 3.8182 | 84 | 0.6741 | 0.7011 | 0.6741 | 0.8210 |
| No log | 3.9091 | 86 | 0.7546 | 0.6344 | 0.7546 | 0.8687 |
| No log | 4.0 | 88 | 0.8790 | 0.5435 | 0.8790 | 0.9376 |
| No log | 4.0909 | 90 | 0.8167 | 0.6291 | 0.8167 | 0.9037 |
| No log | 4.1818 | 92 | 0.9996 | 0.4826 | 0.9996 | 0.9998 |
| No log | 4.2727 | 94 | 0.9352 | 0.5262 | 0.9352 | 0.9670 |
| No log | 4.3636 | 96 | 0.8110 | 0.6174 | 0.8110 | 0.9005 |
| No log | 4.4545 | 98 | 0.6637 | 0.6886 | 0.6637 | 0.8147 |
| No log | 4.5455 | 100 | 0.7296 | 0.6557 | 0.7296 | 0.8542 |
| No log | 4.6364 | 102 | 0.8078 | 0.6450 | 0.8078 | 0.8988 |
| No log | 4.7273 | 104 | 0.7972 | 0.6244 | 0.7972 | 0.8929 |
| No log | 4.8182 | 106 | 0.6259 | 0.6594 | 0.6259 | 0.7911 |
| No log | 4.9091 | 108 | 0.6092 | 0.6298 | 0.6092 | 0.7805 |
| No log | 5.0 | 110 | 0.6659 | 0.6626 | 0.6659 | 0.8161 |
| No log | 5.0909 | 112 | 0.7127 | 0.7088 | 0.7127 | 0.8442 |
| No log | 5.1818 | 114 | 0.6168 | 0.6568 | 0.6168 | 0.7854 |
| No log | 5.2727 | 116 | 0.6510 | 0.6199 | 0.6510 | 0.8069 |
| No log | 5.3636 | 118 | 0.6162 | 0.6112 | 0.6162 | 0.7850 |
| No log | 5.4545 | 120 | 0.6233 | 0.6322 | 0.6233 | 0.7895 |
| No log | 5.5455 | 122 | 0.6498 | 0.6322 | 0.6498 | 0.8061 |
| No log | 5.6364 | 124 | 0.6154 | 0.6060 | 0.6154 | 0.7845 |
| No log | 5.7273 | 126 | 0.6079 | 0.6553 | 0.6079 | 0.7797 |
| No log | 5.8182 | 128 | 0.6146 | 0.6345 | 0.6146 | 0.7840 |
| No log | 5.9091 | 130 | 0.6870 | 0.6420 | 0.6870 | 0.8289 |
| No log | 6.0 | 132 | 0.8137 | 0.5731 | 0.8137 | 0.9021 |
| No log | 6.0909 | 134 | 0.7674 | 0.5911 | 0.7674 | 0.8760 |
| No log | 6.1818 | 136 | 0.6482 | 0.6295 | 0.6482 | 0.8051 |
| No log | 6.2727 | 138 | 0.6058 | 0.6157 | 0.6058 | 0.7783 |
| No log | 6.3636 | 140 | 0.6172 | 0.6328 | 0.6172 | 0.7856 |
| No log | 6.4545 | 142 | 0.5726 | 0.6646 | 0.5726 | 0.7567 |
| No log | 6.5455 | 144 | 0.6113 | 0.6704 | 0.6113 | 0.7819 |
| No log | 6.6364 | 146 | 0.7584 | 0.5908 | 0.7584 | 0.8709 |
| No log | 6.7273 | 148 | 0.6990 | 0.6076 | 0.6990 | 0.8361 |
| No log | 6.8182 | 150 | 0.6135 | 0.6380 | 0.6135 | 0.7833 |
| No log | 6.9091 | 152 | 0.5873 | 0.6239 | 0.5873 | 0.7664 |
| No log | 7.0 | 154 | 0.6610 | 0.6053 | 0.6610 | 0.8130 |
| No log | 7.0909 | 156 | 0.6385 | 0.5676 | 0.6385 | 0.7990 |
| No log | 7.1818 | 158 | 0.6411 | 0.4730 | 0.6411 | 0.8007 |
| No log | 7.2727 | 160 | 0.6331 | 0.5302 | 0.6331 | 0.7957 |
| No log | 7.3636 | 162 | 0.6095 | 0.5160 | 0.6095 | 0.7807 |
| No log | 7.4545 | 164 | 0.6183 | 0.6053 | 0.6183 | 0.7863 |
| No log | 7.5455 | 166 | 0.6101 | 0.6045 | 0.6101 | 0.7811 |
| No log | 7.6364 | 168 | 0.5907 | 0.6528 | 0.5907 | 0.7686 |
| No log | 7.7273 | 170 | 0.5883 | 0.6067 | 0.5883 | 0.7670 |
| No log | 7.8182 | 172 | 0.5909 | 0.5882 | 0.5909 | 0.7687 |
| No log | 7.9091 | 174 | 0.5942 | 0.5882 | 0.5942 | 0.7708 |
| No log | 8.0 | 176 | 0.6508 | 0.5366 | 0.6508 | 0.8067 |
| No log | 8.0909 | 178 | 0.8051 | 0.5272 | 0.8051 | 0.8973 |
| No log | 8.1818 | 180 | 0.7174 | 0.5451 | 0.7174 | 0.8470 |
| No log | 8.2727 | 182 | 0.6595 | 0.6709 | 0.6595 | 0.8121 |
| No log | 8.3636 | 184 | 0.7896 | 0.5750 | 0.7896 | 0.8886 |
| No log | 8.4545 | 186 | 0.7673 | 0.5975 | 0.7673 | 0.8760 |
| No log | 8.5455 | 188 | 0.6851 | 0.6374 | 0.6851 | 0.8277 |
| No log | 8.6364 | 190 | 0.7824 | 0.5455 | 0.7824 | 0.8846 |
| No log | 8.7273 | 192 | 0.9163 | 0.5095 | 0.9163 | 0.9572 |
| No log | 8.8182 | 194 | 0.8269 | 0.5273 | 0.8269 | 0.9094 |
| No log | 8.9091 | 196 | 0.6290 | 0.6740 | 0.6290 | 0.7931 |
| No log | 9.0 | 198 | 0.7383 | 0.5864 | 0.7383 | 0.8592 |
| No log | 9.0909 | 200 | 0.9172 | 0.5251 | 0.9172 | 0.9577 |
| No log | 9.1818 | 202 | 0.8445 | 0.5251 | 0.8445 | 0.9190 |
| No log | 9.2727 | 204 | 0.6484 | 0.6240 | 0.6484 | 0.8052 |
| No log | 9.3636 | 206 | 0.5585 | 0.6460 | 0.5585 | 0.7473 |
| No log | 9.4545 | 208 | 0.5669 | 0.6460 | 0.5669 | 0.7529 |
| No log | 9.5455 | 210 | 0.5905 | 0.6555 | 0.5905 | 0.7685 |
| No log | 9.6364 | 212 | 0.7448 | 0.6145 | 0.7448 | 0.8630 |
| No log | 9.7273 | 214 | 0.8127 | 0.6208 | 0.8127 | 0.9015 |
| No log | 9.8182 | 216 | 0.7194 | 0.6106 | 0.7194 | 0.8482 |
| No log | 9.9091 | 218 | 0.6678 | 0.5964 | 0.6678 | 0.8172 |
| No log | 10.0 | 220 | 0.7135 | 0.5727 | 0.7135 | 0.8447 |
| No log | 10.0909 | 222 | 0.6958 | 0.5816 | 0.6958 | 0.8341 |
| No log | 10.1818 | 224 | 0.6813 | 0.5603 | 0.6813 | 0.8254 |
| No log | 10.2727 | 226 | 0.7207 | 0.6362 | 0.7207 | 0.8489 |
| No log | 10.3636 | 228 | 0.6957 | 0.6209 | 0.6957 | 0.8341 |
| No log | 10.4545 | 230 | 0.6225 | 0.5986 | 0.6225 | 0.7890 |
| No log | 10.5455 | 232 | 0.6129 | 0.5693 | 0.6129 | 0.7829 |
| No log | 10.6364 | 234 | 0.6250 | 0.5585 | 0.6250 | 0.7906 |
| No log | 10.7273 | 236 | 0.6462 | 0.6460 | 0.6462 | 0.8039 |
| No log | 10.8182 | 238 | 0.6643 | 0.6414 | 0.6643 | 0.8150 |
| No log | 10.9091 | 240 | 0.6863 | 0.6414 | 0.6863 | 0.8285 |
| No log | 11.0 | 242 | 0.6806 | 0.6347 | 0.6806 | 0.8250 |
| No log | 11.0909 | 244 | 0.6878 | 0.6009 | 0.6878 | 0.8293 |
| No log | 11.1818 | 246 | 0.6821 | 0.6144 | 0.6821 | 0.8259 |
| No log | 11.2727 | 248 | 0.7301 | 0.5642 | 0.7301 | 0.8545 |
| No log | 11.3636 | 250 | 0.7298 | 0.5661 | 0.7298 | 0.8543 |
| No log | 11.4545 | 252 | 0.7089 | 0.6065 | 0.7089 | 0.8420 |
| No log | 11.5455 | 254 | 0.6160 | 0.6097 | 0.6160 | 0.7849 |
| No log | 11.6364 | 256 | 0.5889 | 0.6701 | 0.5889 | 0.7674 |
| No log | 11.7273 | 258 | 0.5841 | 0.6758 | 0.5841 | 0.7643 |
| No log | 11.8182 | 260 | 0.5800 | 0.6617 | 0.5800 | 0.7616 |
| No log | 11.9091 | 262 | 0.5605 | 0.7211 | 0.5605 | 0.7486 |
| No log | 12.0 | 264 | 0.5416 | 0.6903 | 0.5416 | 0.7359 |
| No log | 12.0909 | 266 | 0.5364 | 0.6903 | 0.5364 | 0.7324 |
| No log | 12.1818 | 268 | 0.5296 | 0.7003 | 0.5296 | 0.7277 |
| No log | 12.2727 | 270 | 0.5243 | 0.7003 | 0.5243 | 0.7241 |
| No log | 12.3636 | 272 | 0.5489 | 0.7101 | 0.5489 | 0.7409 |
| No log | 12.4545 | 274 | 0.5630 | 0.6951 | 0.5630 | 0.7504 |
| No log | 12.5455 | 276 | 0.5826 | 0.6377 | 0.5826 | 0.7633 |
| No log | 12.6364 | 278 | 0.6001 | 0.6195 | 0.6001 | 0.7747 |
| No log | 12.7273 | 280 | 0.6108 | 0.6377 | 0.6108 | 0.7815 |
| No log | 12.8182 | 282 | 0.6206 | 0.6630 | 0.6206 | 0.7878 |
| No log | 12.9091 | 284 | 0.6168 | 0.6553 | 0.6168 | 0.7854 |
| No log | 13.0 | 286 | 0.6172 | 0.5599 | 0.6172 | 0.7856 |
| No log | 13.0909 | 288 | 0.6439 | 0.5542 | 0.6439 | 0.8025 |
| No log | 13.1818 | 290 | 0.6906 | 0.5342 | 0.6906 | 0.8310 |
| No log | 13.2727 | 292 | 0.6958 | 0.5342 | 0.6958 | 0.8342 |
| No log | 13.3636 | 294 | 0.6308 | 0.5342 | 0.6308 | 0.7942 |
| No log | 13.4545 | 296 | 0.5751 | 0.5770 | 0.5751 | 0.7584 |
| No log | 13.5455 | 298 | 0.5466 | 0.6553 | 0.5466 | 0.7393 |
| No log | 13.6364 | 300 | 0.5575 | 0.6901 | 0.5575 | 0.7467 |
| No log | 13.7273 | 302 | 0.6591 | 0.6631 | 0.6591 | 0.8118 |
| No log | 13.8182 | 304 | 0.7236 | 0.5982 | 0.7236 | 0.8506 |
| No log | 13.9091 | 306 | 0.6016 | 0.6791 | 0.6016 | 0.7756 |
| No log | 14.0 | 308 | 0.5181 | 0.7437 | 0.5181 | 0.7198 |
| No log | 14.0909 | 310 | 0.4892 | 0.7314 | 0.4892 | 0.6994 |
| No log | 14.1818 | 312 | 0.4981 | 0.7266 | 0.4981 | 0.7058 |
| No log | 14.2727 | 314 | 0.5207 | 0.6572 | 0.5207 | 0.7216 |
| No log | 14.3636 | 316 | 0.5297 | 0.6447 | 0.5297 | 0.7278 |
| No log | 14.4545 | 318 | 0.5429 | 0.6380 | 0.5429 | 0.7368 |
| No log | 14.5455 | 320 | 0.5478 | 0.6575 | 0.5478 | 0.7401 |
| No log | 14.6364 | 322 | 0.5614 | 0.6358 | 0.5614 | 0.7493 |
| No log | 14.7273 | 324 | 0.6095 | 0.5839 | 0.6095 | 0.7807 |
| No log | 14.8182 | 326 | 0.6097 | 0.5839 | 0.6097 | 0.7808 |
| No log | 14.9091 | 328 | 0.5810 | 0.6256 | 0.5810 | 0.7622 |
| No log | 15.0 | 330 | 0.5683 | 0.6488 | 0.5683 | 0.7539 |
| No log | 15.0909 | 332 | 0.5619 | 0.6606 | 0.5619 | 0.7496 |
| No log | 15.1818 | 334 | 0.5678 | 0.6311 | 0.5678 | 0.7535 |
| No log | 15.2727 | 336 | 0.5689 | 0.6435 | 0.5689 | 0.7543 |
| No log | 15.3636 | 338 | 0.5687 | 0.6067 | 0.5687 | 0.7541 |
| No log | 15.4545 | 340 | 0.6046 | 0.5805 | 0.6046 | 0.7776 |
| No log | 15.5455 | 342 | 0.6203 | 0.6004 | 0.6203 | 0.7876 |
| No log | 15.6364 | 344 | 0.6030 | 0.5966 | 0.6030 | 0.7765 |
| No log | 15.7273 | 346 | 0.5892 | 0.6491 | 0.5892 | 0.7676 |
| No log | 15.8182 | 348 | 0.5922 | 0.6813 | 0.5922 | 0.7695 |
| No log | 15.9091 | 350 | 0.5830 | 0.6284 | 0.5830 | 0.7635 |
| No log | 16.0 | 352 | 0.5810 | 0.6798 | 0.5810 | 0.7623 |
| No log | 16.0909 | 354 | 0.5928 | 0.6249 | 0.5928 | 0.7699 |
| No log | 16.1818 | 356 | 0.6027 | 0.6157 | 0.6027 | 0.7763 |
| No log | 16.2727 | 358 | 0.6141 | 0.5953 | 0.6141 | 0.7836 |
| No log | 16.3636 | 360 | 0.6065 | 0.6157 | 0.6065 | 0.7788 |
| No log | 16.4545 | 362 | 0.5964 | 0.6249 | 0.5964 | 0.7723 |
| No log | 16.5455 | 364 | 0.6321 | 0.6933 | 0.6321 | 0.7950 |
| No log | 16.6364 | 366 | 0.7402 | 0.6023 | 0.7402 | 0.8604 |
| No log | 16.7273 | 368 | 0.7393 | 0.6423 | 0.7393 | 0.8598 |
| No log | 16.8182 | 370 | 0.6731 | 0.6137 | 0.6731 | 0.8204 |
| No log | 16.9091 | 372 | 0.6213 | 0.6415 | 0.6213 | 0.7882 |
| No log | 17.0 | 374 | 0.6112 | 0.5861 | 0.6112 | 0.7818 |
| No log | 17.0909 | 376 | 0.6133 | 0.5861 | 0.6133 | 0.7832 |
| No log | 17.1818 | 378 | 0.6049 | 0.5763 | 0.6049 | 0.7778 |
| No log | 17.2727 | 380 | 0.5946 | 0.5881 | 0.5946 | 0.7711 |
| No log | 17.3636 | 382 | 0.5843 | 0.5716 | 0.5843 | 0.7644 |
| No log | 17.4545 | 384 | 0.5761 | 0.5716 | 0.5761 | 0.7590 |
| No log | 17.5455 | 386 | 0.5755 | 0.5905 | 0.5755 | 0.7586 |
| No log | 17.6364 | 388 | 0.5732 | 0.6586 | 0.5732 | 0.7571 |
| No log | 17.7273 | 390 | 0.5741 | 0.6493 | 0.5741 | 0.7577 |
| No log | 17.8182 | 392 | 0.5839 | 0.5915 | 0.5839 | 0.7642 |
| No log | 17.9091 | 394 | 0.5907 | 0.5726 | 0.5907 | 0.7686 |
| No log | 18.0 | 396 | 0.5591 | 0.6134 | 0.5591 | 0.7477 |
| No log | 18.0909 | 398 | 0.5593 | 0.5820 | 0.5593 | 0.7478 |
| No log | 18.1818 | 400 | 0.5678 | 0.5626 | 0.5678 | 0.7535 |
| No log | 18.2727 | 402 | 0.5748 | 0.6249 | 0.5748 | 0.7581 |
| No log | 18.3636 | 404 | 0.5841 | 0.6186 | 0.5841 | 0.7642 |
| No log | 18.4545 | 406 | 0.6027 | 0.5316 | 0.6027 | 0.7763 |
| No log | 18.5455 | 408 | 0.6357 | 0.5089 | 0.6357 | 0.7973 |
| No log | 18.6364 | 410 | 0.7081 | 0.5339 | 0.7081 | 0.8415 |
| No log | 18.7273 | 412 | 0.7752 | 0.5233 | 0.7752 | 0.8805 |
| No log | 18.8182 | 414 | 0.7621 | 0.4750 | 0.7621 | 0.8730 |
| No log | 18.9091 | 416 | 0.6734 | 0.5891 | 0.6734 | 0.8206 |
| No log | 19.0 | 418 | 0.7248 | 0.6521 | 0.7248 | 0.8513 |
| No log | 19.0909 | 420 | 0.7156 | 0.6458 | 0.7156 | 0.8459 |
| No log | 19.1818 | 422 | 0.6237 | 0.7110 | 0.6237 | 0.7897 |
| No log | 19.2727 | 424 | 0.5489 | 0.7048 | 0.5489 | 0.7409 |
| No log | 19.3636 | 426 | 0.5363 | 0.6695 | 0.5363 | 0.7323 |
| No log | 19.4545 | 428 | 0.5621 | 0.7136 | 0.5621 | 0.7497 |
| No log | 19.5455 | 430 | 0.6385 | 0.6209 | 0.6385 | 0.7990 |
| No log | 19.6364 | 432 | 0.7423 | 0.5686 | 0.7423 | 0.8616 |
| No log | 19.7273 | 434 | 0.7729 | 0.5358 | 0.7729 | 0.8791 |
| No log | 19.8182 | 436 | 0.7096 | 0.5475 | 0.7096 | 0.8424 |
| No log | 19.9091 | 438 | 0.6199 | 0.6272 | 0.6199 | 0.7873 |
| No log | 20.0 | 440 | 0.5678 | 0.6198 | 0.5678 | 0.7535 |
| No log | 20.0909 | 442 | 0.5544 | 0.6841 | 0.5544 | 0.7445 |
| No log | 20.1818 | 444 | 0.5386 | 0.6553 | 0.5386 | 0.7339 |
| No log | 20.2727 | 446 | 0.5304 | 0.6796 | 0.5304 | 0.7283 |
| No log | 20.3636 | 448 | 0.5335 | 0.6875 | 0.5335 | 0.7304 |
| No log | 20.4545 | 450 | 0.5597 | 0.6865 | 0.5597 | 0.7481 |
| No log | 20.5455 | 452 | 0.5578 | 0.6623 | 0.5578 | 0.7469 |
| No log | 20.6364 | 454 | 0.5381 | 0.6841 | 0.5381 | 0.7335 |
| No log | 20.7273 | 456 | 0.5455 | 0.6139 | 0.5455 | 0.7386 |
| No log | 20.8182 | 458 | 0.5868 | 0.6317 | 0.5868 | 0.7660 |
| No log | 20.9091 | 460 | 0.5954 | 0.6317 | 0.5954 | 0.7716 |
| No log | 21.0 | 462 | 0.5515 | 0.6164 | 0.5515 | 0.7426 |
| No log | 21.0909 | 464 | 0.5294 | 0.6703 | 0.5294 | 0.7276 |
| No log | 21.1818 | 466 | 0.5330 | 0.6602 | 0.5330 | 0.7301 |
| No log | 21.2727 | 468 | 0.5446 | 0.7259 | 0.5446 | 0.7380 |
| No log | 21.3636 | 470 | 0.5415 | 0.6680 | 0.5415 | 0.7359 |
| No log | 21.4545 | 472 | 0.5332 | 0.6689 | 0.5332 | 0.7302 |
| No log | 21.5455 | 474 | 0.5356 | 0.6838 | 0.5356 | 0.7319 |
| No log | 21.6364 | 476 | 0.5454 | 0.6327 | 0.5454 | 0.7385 |
| No log | 21.7273 | 478 | 0.5471 | 0.5783 | 0.5471 | 0.7396 |
| No log | 21.8182 | 480 | 0.5516 | 0.5783 | 0.5516 | 0.7427 |
| No log | 21.9091 | 482 | 0.5469 | 0.5783 | 0.5469 | 0.7395 |
| No log | 22.0 | 484 | 0.5309 | 0.6251 | 0.5309 | 0.7287 |
| No log | 22.0909 | 486 | 0.5268 | 0.6251 | 0.5268 | 0.7258 |
| No log | 22.1818 | 488 | 0.5281 | 0.6118 | 0.5281 | 0.7267 |
| No log | 22.2727 | 490 | 0.5392 | 0.6164 | 0.5392 | 0.7343 |
| No log | 22.3636 | 492 | 0.5782 | 0.6623 | 0.5782 | 0.7604 |
| No log | 22.4545 | 494 | 0.6317 | 0.6071 | 0.6317 | 0.7948 |
| No log | 22.5455 | 496 | 0.7019 | 0.6032 | 0.7019 | 0.8378 |
| No log | 22.6364 | 498 | 0.6928 | 0.5860 | 0.6928 | 0.8323 |
| 0.2493 | 22.7273 | 500 | 0.6768 | 0.5217 | 0.6768 | 0.8227 |
| 0.2493 | 22.8182 | 502 | 0.6890 | 0.5416 | 0.6890 | 0.8301 |
| 0.2493 | 22.9091 | 504 | 0.6969 | 0.5835 | 0.6969 | 0.8348 |
| 0.2493 | 23.0 | 506 | 0.6781 | 0.5953 | 0.6781 | 0.8235 |
| 0.2493 | 23.0909 | 508 | 0.6262 | 0.6305 | 0.6262 | 0.7913 |
| 0.2493 | 23.1818 | 510 | 0.5766 | 0.5784 | 0.5766 | 0.7593 |
| 0.2493 | 23.2727 | 512 | 0.5741 | 0.5469 | 0.5741 | 0.7577 |
| 0.2493 | 23.3636 | 514 | 0.5709 | 0.5469 | 0.5709 | 0.7555 |
| 0.2493 | 23.4545 | 516 | 0.5615 | 0.6107 | 0.5615 | 0.7493 |
| 0.2493 | 23.5455 | 518 | 0.5671 | 0.6441 | 0.5671 | 0.7530 |
| 0.2493 | 23.6364 | 520 | 0.6036 | 0.6538 | 0.6036 | 0.7769 |
| 0.2493 | 23.7273 | 522 | 0.6142 | 0.6218 | 0.6142 | 0.7837 |
| 0.2493 | 23.8182 | 524 | 0.6055 | 0.6282 | 0.6055 | 0.7781 |
| 0.2493 | 23.9091 | 526 | 0.5834 | 0.6441 | 0.5834 | 0.7638 |
| 0.2493 | 24.0 | 528 | 0.5681 | 0.6488 | 0.5681 | 0.7537 |
| 0.2493 | 24.0909 | 530 | 0.5534 | 0.6690 | 0.5534 | 0.7439 |
| 0.2493 | 24.1818 | 532 | 0.5384 | 0.6649 | 0.5384 | 0.7337 |
| 0.2493 | 24.2727 | 534 | 0.5317 | 0.6959 | 0.5317 | 0.7291 |
| 0.2493 | 24.3636 | 536 | 0.5311 | 0.6561 | 0.5311 | 0.7288 |
| 0.2493 | 24.4545 | 538 | 0.5358 | 0.6812 | 0.5358 | 0.7320 |
| 0.2493 | 24.5455 | 540 | 0.5316 | 0.6947 | 0.5316 | 0.7291 |
| 0.2493 | 24.6364 | 542 | 0.5352 | 0.6947 | 0.5352 | 0.7316 |
| 0.2493 | 24.7273 | 544 | 0.5322 | 0.6770 | 0.5322 | 0.7295 |
| 0.2493 | 24.8182 | 546 | 0.5292 | 0.6667 | 0.5292 | 0.7274 |
| 0.2493 | 24.9091 | 548 | 0.5240 | 0.6659 | 0.5240 | 0.7239 |
| 0.2493 | 25.0 | 550 | 0.5235 | 0.6720 | 0.5235 | 0.7235 |
| 0.2493 | 25.0909 | 552 | 0.5440 | 0.7042 | 0.5440 | 0.7375 |
| 0.2493 | 25.1818 | 554 | 0.5589 | 0.7489 | 0.5589 | 0.7476 |
| 0.2493 | 25.2727 | 556 | 0.5798 | 0.7269 | 0.5798 | 0.7614 |
| 0.2493 | 25.3636 | 558 | 0.5686 | 0.7210 | 0.5686 | 0.7540 |
| 0.2493 | 25.4545 | 560 | 0.5372 | 0.6629 | 0.5372 | 0.7329 |
| 0.2493 | 25.5455 | 562 | 0.5353 | 0.6024 | 0.5353 | 0.7316 |
| 0.2493 | 25.6364 | 564 | 0.5512 | 0.5464 | 0.5512 | 0.7424 |
| 0.2493 | 25.7273 | 566 | 0.5718 | 0.5225 | 0.5718 | 0.7562 |
| 0.2493 | 25.8182 | 568 | 0.5657 | 0.5210 | 0.5657 | 0.7521 |
| 0.2493 | 25.9091 | 570 | 0.5554 | 0.6269 | 0.5554 | 0.7452 |
| 0.2493 | 26.0 | 572 | 0.5615 | 0.6508 | 0.5615 | 0.7493 |
| 0.2493 | 26.0909 | 574 | 0.5450 | 0.7059 | 0.5450 | 0.7382 |
| 0.2493 | 26.1818 | 576 | 0.5317 | 0.6973 | 0.5317 | 0.7292 |
| 0.2493 | 26.2727 | 578 | 0.5313 | 0.7384 | 0.5313 | 0.7289 |
| 0.2493 | 26.3636 | 580 | 0.5281 | 0.7059 | 0.5281 | 0.7267 |
| 0.2493 | 26.4545 | 582 | 0.5204 | 0.7171 | 0.5204 | 0.7214 |
| 0.2493 | 26.5455 | 584 | 0.5125 | 0.6649 | 0.5125 | 0.7159 |
| 0.2493 | 26.6364 | 586 | 0.5240 | 0.6409 | 0.5240 | 0.7239 |
| 0.2493 | 26.7273 | 588 | 0.5291 | 0.5871 | 0.5291 | 0.7274 |
| 0.2493 | 26.8182 | 590 | 0.5432 | 0.6815 | 0.5432 | 0.7370 |
| 0.2493 | 26.9091 | 592 | 0.5842 | 0.6450 | 0.5842 | 0.7643 |
| 0.2493 | 27.0 | 594 | 0.5942 | 0.5964 | 0.5942 | 0.7709 |
| 0.2493 | 27.0909 | 596 | 0.5896 | 0.6697 | 0.5896 | 0.7679 |
| 0.2493 | 27.1818 | 598 | 0.5986 | 0.6697 | 0.5986 | 0.7737 |
| 0.2493 | 27.2727 | 600 | 0.5985 | 0.6232 | 0.5985 | 0.7737 |
| 0.2493 | 27.3636 | 602 | 0.5949 | 0.5440 | 0.5949 | 0.7713 |
| 0.2493 | 27.4545 | 604 | 0.5978 | 0.6232 | 0.5978 | 0.7732 |
| 0.2493 | 27.5455 | 606 | 0.5899 | 0.6508 | 0.5899 | 0.7681 |
| 0.2493 | 27.6364 | 608 | 0.5739 | 0.6054 | 0.5739 | 0.7576 |
| 0.2493 | 27.7273 | 610 | 0.5721 | 0.6087 | 0.5721 | 0.7564 |
| 0.2493 | 27.8182 | 612 | 0.6100 | 0.5554 | 0.6100 | 0.7810 |
| 0.2493 | 27.9091 | 614 | 0.6368 | 0.5674 | 0.6368 | 0.7980 |
| 0.2493 | 28.0 | 616 | 0.6006 | 0.5554 | 0.6006 | 0.7750 |
| 0.2493 | 28.0909 | 618 | 0.5968 | 0.5795 | 0.5968 | 0.7725 |
| 0.2493 | 28.1818 | 620 | 0.6074 | 0.5183 | 0.6074 | 0.7794 |
| 0.2493 | 28.2727 | 622 | 0.6103 | 0.4935 | 0.6103 | 0.7812 |
| 0.2493 | 28.3636 | 624 | 0.6083 | 0.5432 | 0.6083 | 0.7799 |
| 0.2493 | 28.4545 | 626 | 0.5975 | 0.5783 | 0.5975 | 0.7730 |
| 0.2493 | 28.5455 | 628 | 0.5906 | 0.5409 | 0.5906 | 0.7685 |
| 0.2493 | 28.6364 | 630 | 0.5949 | 0.5770 | 0.5949 | 0.7713 |
| 0.2493 | 28.7273 | 632 | 0.5786 | 0.5736 | 0.5786 | 0.7606 |
| 0.2493 | 28.8182 | 634 | 0.5754 | 0.6087 | 0.5754 | 0.7585 |
| 0.2493 | 28.9091 | 636 | 0.5859 | 0.5972 | 0.5859 | 0.7654 |
| 0.2493 | 29.0 | 638 | 0.5977 | 0.6620 | 0.5977 | 0.7731 |
| 0.2493 | 29.0909 | 640 | 0.5887 | 0.6620 | 0.5887 | 0.7673 |
| 0.2493 | 29.1818 | 642 | 0.5809 | 0.6578 | 0.5809 | 0.7622 |
| 0.2493 | 29.2727 | 644 | 0.5719 | 0.6771 | 0.5719 | 0.7562 |
| 0.2493 | 29.3636 | 646 | 0.5542 | 0.7074 | 0.5542 | 0.7444 |
| 0.2493 | 29.4545 | 648 | 0.5447 | 0.6507 | 0.5447 | 0.7381 |
| 0.2493 | 29.5455 | 650 | 0.5501 | 0.6750 | 0.5501 | 0.7417 |
| 0.2493 | 29.6364 | 652 | 0.5722 | 0.6384 | 0.5722 | 0.7565 |
| 0.2493 | 29.7273 | 654 | 0.5740 | 0.6545 | 0.5740 | 0.7576 |
| 0.2493 | 29.8182 | 656 | 0.5446 | 0.6750 | 0.5446 | 0.7380 |
| 0.2493 | 29.9091 | 658 | 0.5245 | 0.6788 | 0.5245 | 0.7242 |
| 0.2493 | 30.0 | 660 | 0.5585 | 0.7050 | 0.5585 | 0.7473 |
| 0.2493 | 30.0909 | 662 | 0.5942 | 0.6791 | 0.5942 | 0.7708 |
| 0.2493 | 30.1818 | 664 | 0.5875 | 0.6892 | 0.5875 | 0.7665 |
| 0.2493 | 30.2727 | 666 | 0.5531 | 0.7253 | 0.5531 | 0.7437 |
| 0.2493 | 30.3636 | 668 | 0.5266 | 0.6689 | 0.5266 | 0.7257 |
| 0.2493 | 30.4545 | 670 | 0.5388 | 0.6830 | 0.5388 | 0.7340 |
| 0.2493 | 30.5455 | 672 | 0.5418 | 0.6830 | 0.5418 | 0.7361 |
| 0.2493 | 30.6364 | 674 | 0.5350 | 0.6770 | 0.5350 | 0.7315 |
| 0.2493 | 30.7273 | 676 | 0.5511 | 0.6704 | 0.5511 | 0.7424 |
| 0.2493 | 30.8182 | 678 | 0.5770 | 0.6822 | 0.5770 | 0.7596 |
| 0.2493 | 30.9091 | 680 | 0.6327 | 0.6209 | 0.6327 | 0.7954 |
| 0.2493 | 31.0 | 682 | 0.7090 | 0.5800 | 0.7090 | 0.8420 |
| 0.2493 | 31.0909 | 684 | 0.7522 | 0.5636 | 0.7522 | 0.8673 |
| 0.2493 | 31.1818 | 686 | 0.7369 | 0.5636 | 0.7369 | 0.8584 |
| 0.2493 | 31.2727 | 688 | 0.6961 | 0.5745 | 0.6961 | 0.8343 |
| 0.2493 | 31.3636 | 690 | 0.6609 | 0.6127 | 0.6609 | 0.8129 |
| 0.2493 | 31.4545 | 692 | 0.6422 | 0.6127 | 0.6422 | 0.8014 |
| 0.2493 | 31.5455 | 694 | 0.6225 | 0.6807 | 0.6225 | 0.7890 |
| 0.2493 | 31.6364 | 696 | 0.5864 | 0.6929 | 0.5864 | 0.7658 |
| 0.2493 | 31.7273 | 698 | 0.5770 | 0.6872 | 0.5770 | 0.7596 |
| 0.2493 | 31.8182 | 700 | 0.5725 | 0.6954 | 0.5725 | 0.7567 |
| 0.2493 | 31.9091 | 702 | 0.5542 | 0.6680 | 0.5542 | 0.7444 |
| 0.2493 | 32.0 | 704 | 0.5531 | 0.6874 | 0.5531 | 0.7437 |
| 0.2493 | 32.0909 | 706 | 0.5547 | 0.7203 | 0.5547 | 0.7448 |
| 0.2493 | 32.1818 | 708 | 0.5568 | 0.7203 | 0.5568 | 0.7462 |
| 0.2493 | 32.2727 | 710 | 0.5596 | 0.7095 | 0.5596 | 0.7481 |
| 0.2493 | 32.3636 | 712 | 0.5493 | 0.6896 | 0.5493 | 0.7411 |
| 0.2493 | 32.4545 | 714 | 0.5350 | 0.6796 | 0.5350 | 0.7314 |
| 0.2493 | 32.5455 | 716 | 0.5239 | 0.6796 | 0.5239 | 0.7238 |
| 0.2493 | 32.6364 | 718 | 0.5218 | 0.6796 | 0.5218 | 0.7224 |
| 0.2493 | 32.7273 | 720 | 0.5285 | 0.6796 | 0.5285 | 0.7270 |
| 0.2493 | 32.8182 | 722 | 0.5450 | 0.6909 | 0.5450 | 0.7383 |
| 0.2493 | 32.9091 | 724 | 0.5569 | 0.6936 | 0.5569 | 0.7462 |
| 0.2493 | 33.0 | 726 | 0.5516 | 0.7034 | 0.5516 | 0.7427 |
| 0.2493 | 33.0909 | 728 | 0.5392 | 0.6699 | 0.5392 | 0.7343 |
| 0.2493 | 33.1818 | 730 | 0.5327 | 0.6708 | 0.5327 | 0.7299 |
| 0.2493 | 33.2727 | 732 | 0.5313 | 0.6667 | 0.5313 | 0.7289 |
| 0.2493 | 33.3636 | 734 | 0.5255 | 0.6796 | 0.5255 | 0.7249 |
| 0.2493 | 33.4545 | 736 | 0.5174 | 0.6764 | 0.5174 | 0.7193 |
| 0.2493 | 33.5455 | 738 | 0.5199 | 0.6764 | 0.5199 | 0.7210 |
| 0.2493 | 33.6364 | 740 | 0.5254 | 0.6703 | 0.5254 | 0.7248 |
| 0.2493 | 33.7273 | 742 | 0.5211 | 0.6703 | 0.5211 | 0.7218 |
| 0.2493 | 33.8182 | 744 | 0.5191 | 0.6861 | 0.5191 | 0.7205 |
| 0.2493 | 33.9091 | 746 | 0.5284 | 0.6861 | 0.5284 | 0.7269 |
| 0.2493 | 34.0 | 748 | 0.5289 | 0.6896 | 0.5289 | 0.7273 |
| 0.2493 | 34.0909 | 750 | 0.5197 | 0.6861 | 0.5197 | 0.7209 |
| 0.2493 | 34.1818 | 752 | 0.5167 | 0.6838 | 0.5167 | 0.7188 |
| 0.2493 | 34.2727 | 754 | 0.5218 | 0.6667 | 0.5218 | 0.7223 |
| 0.2493 | 34.3636 | 756 | 0.5359 | 0.6770 | 0.5359 | 0.7320 |
| 0.2493 | 34.4545 | 758 | 0.5458 | 0.6788 | 0.5458 | 0.7388 |
| 0.2493 | 34.5455 | 760 | 0.5501 | 0.6962 | 0.5501 | 0.7417 |
| 0.2493 | 34.6364 | 762 | 0.5498 | 0.6962 | 0.5498 | 0.7415 |
| 0.2493 | 34.7273 | 764 | 0.5314 | 0.7165 | 0.5314 | 0.7290 |
| 0.2493 | 34.8182 | 766 | 0.5064 | 0.7070 | 0.5064 | 0.7116 |
| 0.2493 | 34.9091 | 768 | 0.4980 | 0.6959 | 0.4980 | 0.7057 |
| 0.2493 | 35.0 | 770 | 0.5009 | 0.7115 | 0.5009 | 0.7078 |
| 0.2493 | 35.0909 | 772 | 0.5037 | 0.7070 | 0.5037 | 0.7097 |
| 0.2493 | 35.1818 | 774 | 0.5022 | 0.7077 | 0.5022 | 0.7086 |
| 0.2493 | 35.2727 | 776 | 0.5022 | 0.7115 | 0.5022 | 0.7087 |
| 0.2493 | 35.3636 | 778 | 0.5088 | 0.6796 | 0.5088 | 0.7133 |
| 0.2493 | 35.4545 | 780 | 0.5180 | 0.6796 | 0.5180 | 0.7197 |
| 0.2493 | 35.5455 | 782 | 0.5257 | 0.6796 | 0.5257 | 0.7251 |
| 0.2493 | 35.6364 | 784 | 0.5228 | 0.6796 | 0.5228 | 0.7230 |
| 0.2493 | 35.7273 | 786 | 0.5219 | 0.6667 | 0.5219 | 0.7224 |
| 0.2493 | 35.8182 | 788 | 0.5281 | 0.6644 | 0.5281 | 0.7267 |
| 0.2493 | 35.9091 | 790 | 0.5369 | 0.6551 | 0.5369 | 0.7327 |
| 0.2493 | 36.0 | 792 | 0.5385 | 0.6586 | 0.5385 | 0.7338 |
| 0.2493 | 36.0909 | 794 | 0.5344 | 0.6426 | 0.5344 | 0.7310 |
| 0.2493 | 36.1818 | 796 | 0.5386 | 0.6708 | 0.5386 | 0.7339 |
| 0.2493 | 36.2727 | 798 | 0.5365 | 0.6708 | 0.5365 | 0.7324 |
| 0.2493 | 36.3636 | 800 | 0.5341 | 0.6822 | 0.5341 | 0.7309 |
| 0.2493 | 36.4545 | 802 | 0.5298 | 0.6822 | 0.5298 | 0.7279 |
| 0.2493 | 36.5455 | 804 | 0.5280 | 0.6667 | 0.5280 | 0.7266 |
| 0.2493 | 36.6364 | 806 | 0.5280 | 0.6667 | 0.5280 | 0.7266 |
| 0.2493 | 36.7273 | 808 | 0.5287 | 0.6903 | 0.5287 | 0.7271 |
| 0.2493 | 36.8182 | 810 | 0.5270 | 0.6835 | 0.5270 | 0.7259 |
| 0.2493 | 36.9091 | 812 | 0.5336 | 0.6773 | 0.5336 | 0.7305 |
| 0.2493 | 37.0 | 814 | 0.5419 | 0.6780 | 0.5419 | 0.7362 |
| 0.2493 | 37.0909 | 816 | 0.5563 | 0.7111 | 0.5563 | 0.7459 |
| 0.2493 | 37.1818 | 818 | 0.6007 | 0.6640 | 0.6007 | 0.7750 |
| 0.2493 | 37.2727 | 820 | 0.6044 | 0.6470 | 0.6044 | 0.7774 |
| 0.2493 | 37.3636 | 822 | 0.5905 | 0.6525 | 0.5905 | 0.7684 |
| 0.2493 | 37.4545 | 824 | 0.5754 | 0.6194 | 0.5754 | 0.7586 |
| 0.2493 | 37.5455 | 826 | 0.5702 | 0.6080 | 0.5702 | 0.7551 |
| 0.2493 | 37.6364 | 828 | 0.5582 | 0.6368 | 0.5582 | 0.7471 |
| 0.2493 | 37.7273 | 830 | 0.5593 | 0.6244 | 0.5593 | 0.7479 |
| 0.2493 | 37.8182 | 832 | 0.5458 | 0.6843 | 0.5458 | 0.7388 |
| 0.2493 | 37.9091 | 834 | 0.5354 | 0.6835 | 0.5354 | 0.7317 |
| 0.2493 | 38.0 | 836 | 0.5282 | 0.6796 | 0.5282 | 0.7268 |
| 0.2493 | 38.0909 | 838 | 0.5276 | 0.6796 | 0.5276 | 0.7264 |
| 0.2493 | 38.1818 | 840 | 0.5315 | 0.6830 | 0.5315 | 0.7291 |
| 0.2493 | 38.2727 | 842 | 0.5415 | 0.6919 | 0.5415 | 0.7359 |
| 0.2493 | 38.3636 | 844 | 0.5619 | 0.6682 | 0.5619 | 0.7496 |
| 0.2493 | 38.4545 | 846 | 0.5742 | 0.6697 | 0.5742 | 0.7578 |
| 0.2493 | 38.5455 | 848 | 0.5636 | 0.6841 | 0.5636 | 0.7507 |
| 0.2493 | 38.6364 | 850 | 0.5412 | 0.7074 | 0.5412 | 0.7356 |
| 0.2493 | 38.7273 | 852 | 0.5286 | 0.6796 | 0.5286 | 0.7271 |
| 0.2493 | 38.8182 | 854 | 0.5242 | 0.6796 | 0.5242 | 0.7240 |
| 0.2493 | 38.9091 | 856 | 0.5379 | 0.6830 | 0.5379 | 0.7334 |
| 0.2493 | 39.0 | 858 | 0.5468 | 0.6238 | 0.5468 | 0.7395 |
| 0.2493 | 39.0909 | 860 | 0.5405 | 0.6238 | 0.5405 | 0.7352 |
| 0.2493 | 39.1818 | 862 | 0.5382 | 0.6634 | 0.5382 | 0.7336 |
| 0.2493 | 39.2727 | 864 | 0.5451 | 0.6753 | 0.5451 | 0.7383 |
| 0.2493 | 39.3636 | 866 | 0.5589 | 0.6894 | 0.5589 | 0.7476 |
| 0.2493 | 39.4545 | 868 | 0.5766 | 0.6886 | 0.5766 | 0.7594 |
| 0.2493 | 39.5455 | 870 | 0.5687 | 0.7001 | 0.5687 | 0.7541 |
| 0.2493 | 39.6364 | 872 | 0.5466 | 0.6796 | 0.5466 | 0.7393 |
| 0.2493 | 39.7273 | 874 | 0.5318 | 0.6796 | 0.5318 | 0.7292 |
| 0.2493 | 39.8182 | 876 | 0.5335 | 0.6796 | 0.5335 | 0.7304 |
| 0.2493 | 39.9091 | 878 | 0.5360 | 0.6667 | 0.5360 | 0.7321 |
| 0.2493 | 40.0 | 880 | 0.5360 | 0.6667 | 0.5360 | 0.7321 |
| 0.2493 | 40.0909 | 882 | 0.5353 | 0.6796 | 0.5353 | 0.7317 |
| 0.2493 | 40.1818 | 884 | 0.5385 | 0.6796 | 0.5385 | 0.7338 |
| 0.2493 | 40.2727 | 886 | 0.5437 | 0.6830 | 0.5437 | 0.7373 |
| 0.2493 | 40.3636 | 888 | 0.5531 | 0.6460 | 0.5531 | 0.7437 |
| 0.2493 | 40.4545 | 890 | 0.5630 | 0.6460 | 0.5630 | 0.7503 |
| 0.2493 | 40.5455 | 892 | 0.5630 | 0.6262 | 0.5630 | 0.7503 |
| 0.2493 | 40.6364 | 894 | 0.5597 | 0.6262 | 0.5597 | 0.7481 |
| 0.2493 | 40.7273 | 896 | 0.5588 | 0.6262 | 0.5588 | 0.7475 |
| 0.2493 | 40.8182 | 898 | 0.5717 | 0.6229 | 0.5717 | 0.7561 |
| 0.2493 | 40.9091 | 900 | 0.5828 | 0.6147 | 0.5828 | 0.7634 |
| 0.2493 | 41.0 | 902 | 0.5940 | 0.6228 | 0.5940 | 0.7707 |
| 0.2493 | 41.0909 | 904 | 0.5632 | 0.6347 | 0.5632 | 0.7505 |
| 0.2493 | 41.1818 | 906 | 0.5410 | 0.6572 | 0.5410 | 0.7356 |
| 0.2493 | 41.2727 | 908 | 0.5269 | 0.6756 | 0.5269 | 0.7259 |
| 0.2493 | 41.3636 | 910 | 0.5248 | 0.6903 | 0.5248 | 0.7244 |
| 0.2493 | 41.4545 | 912 | 0.5295 | 0.6772 | 0.5295 | 0.7276 |
| 0.2493 | 41.5455 | 914 | 0.5280 | 0.6903 | 0.5280 | 0.7266 |
| 0.2493 | 41.6364 | 916 | 0.5348 | 0.6756 | 0.5348 | 0.7313 |
| 0.2493 | 41.7273 | 918 | 0.5746 | 0.6237 | 0.5746 | 0.7580 |
| 0.2493 | 41.8182 | 920 | 0.6400 | 0.6564 | 0.6400 | 0.8000 |
| 0.2493 | 41.9091 | 922 | 0.7096 | 0.5938 | 0.7096 | 0.8424 |
| 0.2493 | 42.0 | 924 | 0.7669 | 0.6069 | 0.7669 | 0.8757 |
| 0.2493 | 42.0909 | 926 | 0.7547 | 0.5895 | 0.7547 | 0.8688 |
| 0.2493 | 42.1818 | 928 | 0.6965 | 0.5470 | 0.6965 | 0.8346 |
| 0.2493 | 42.2727 | 930 | 0.6469 | 0.6157 | 0.6469 | 0.8043 |
| 0.2493 | 42.3636 | 932 | 0.6207 | 0.5964 | 0.6207 | 0.7878 |
| 0.2493 | 42.4545 | 934 | 0.6111 | 0.6082 | 0.6111 | 0.7817 |
| 0.2493 | 42.5455 | 936 | 0.6040 | 0.6728 | 0.6040 | 0.7772 |
| 0.2493 | 42.6364 | 938 | 0.6043 | 0.6841 | 0.6043 | 0.7773 |
| 0.2493 | 42.7273 | 940 | 0.6142 | 0.6900 | 0.6142 | 0.7837 |
| 0.2493 | 42.8182 | 942 | 0.6416 | 0.6669 | 0.6416 | 0.8010 |
| 0.2493 | 42.9091 | 944 | 0.6519 | 0.6669 | 0.6519 | 0.8074 |
| 0.2493 | 43.0 | 946 | 0.6415 | 0.6708 | 0.6415 | 0.8009 |
| 0.2493 | 43.0909 | 948 | 0.6128 | 0.6687 | 0.6128 | 0.7828 |
| 0.2493 | 43.1818 | 950 | 0.5728 | 0.6528 | 0.5728 | 0.7568 |
| 0.2493 | 43.2727 | 952 | 0.5553 | 0.6649 | 0.5553 | 0.7452 |
| 0.2493 | 43.3636 | 954 | 0.5486 | 0.6830 | 0.5486 | 0.7406 |
| 0.2493 | 43.4545 | 956 | 0.5508 | 0.6697 | 0.5508 | 0.7422 |
| 0.2493 | 43.5455 | 958 | 0.5529 | 0.6697 | 0.5529 | 0.7436 |
| 0.2493 | 43.6364 | 960 | 0.5552 | 0.6796 | 0.5552 | 0.7451 |
| 0.2493 | 43.7273 | 962 | 0.5696 | 0.6924 | 0.5696 | 0.7547 |
| 0.2493 | 43.8182 | 964 | 0.6014 | 0.6237 | 0.6014 | 0.7755 |
| 0.2493 | 43.9091 | 966 | 0.6277 | 0.6438 | 0.6277 | 0.7923 |
| 0.2493 | 44.0 | 968 | 0.6284 | 0.6510 | 0.6284 | 0.7927 |
| 0.2493 | 44.0909 | 970 | 0.6217 | 0.6510 | 0.6217 | 0.7885 |
| 0.2493 | 44.1818 | 972 | 0.6243 | 0.6510 | 0.6243 | 0.7901 |
| 0.2493 | 44.2727 | 974 | 0.6108 | 0.6623 | 0.6108 | 0.7815 |
| 0.2493 | 44.3636 | 976 | 0.5902 | 0.6602 | 0.5902 | 0.7683 |
| 0.2493 | 44.4545 | 978 | 0.5777 | 0.6610 | 0.5777 | 0.7601 |
| 0.2493 | 44.5455 | 980 | 0.5689 | 0.6610 | 0.5689 | 0.7542 |
| 0.2493 | 44.6364 | 982 | 0.5610 | 0.6788 | 0.5610 | 0.7490 |
| 0.2493 | 44.7273 | 984 | 0.5612 | 0.6756 | 0.5612 | 0.7491 |
| 0.2493 | 44.8182 | 986 | 0.5646 | 0.6756 | 0.5646 | 0.7514 |
| 0.2493 | 44.9091 | 988 | 0.5702 | 0.6610 | 0.5702 | 0.7551 |
| 0.2493 | 45.0 | 990 | 0.5674 | 0.6610 | 0.5674 | 0.7532 |
| 0.2493 | 45.0909 | 992 | 0.5612 | 0.6788 | 0.5612 | 0.7491 |
| 0.2493 | 45.1818 | 994 | 0.5600 | 0.6788 | 0.5600 | 0.7484 |
| 0.2493 | 45.2727 | 996 | 0.5602 | 0.6788 | 0.5602 | 0.7485 |
| 0.2493 | 45.3636 | 998 | 0.5583 | 0.6649 | 0.5583 | 0.7472 |
| 0.0507 | 45.4545 | 1000 | 0.5559 | 0.6830 | 0.5559 | 0.7456 |
| 0.0507 | 45.5455 | 1002 | 0.5518 | 0.6830 | 0.5518 | 0.7428 |
| 0.0507 | 45.6364 | 1004 | 0.5520 | 0.6649 | 0.5520 | 0.7429 |
| 0.0507 | 45.7273 | 1006 | 0.5542 | 0.6610 | 0.5542 | 0.7444 |
| 0.0507 | 45.8182 | 1008 | 0.5575 | 0.6610 | 0.5575 | 0.7467 |
| 0.0507 | 45.9091 | 1010 | 0.5623 | 0.6610 | 0.5623 | 0.7499 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
SemilleroCV/resnet50-finetuned-bwmp2-224
|
SemilleroCV
| 2025-01-12T19:03:11Z
| 14
| 1
| null |
[
"onnx",
"resnet",
"image-classification",
"es",
"en",
"dataset:SemilleroCV/BWMP2",
"doi:10.57967/hf/4039",
"license:mit",
"region:us"
] |
image-classification
| 2024-09-02T12:48:12Z
|
---
license: mit
datasets:
- SemilleroCV/BWMP2
language:
- es
- en
pipeline_tag: image-classification
---
# BWMP2: An RGB Dataset for Material Classification with a Fine-Tuned Foundation Model
<p align="center">
<img src="https://github.com/Sneider-exe/Clasificacion_Materiales/raw/main/logo.jpg" alt="Descripción alternativa de la imagen">
</p>
This project presents a foundation model (ResNet50) fine-tuned for material classification. Using an in-house dataset of RGB images covering five classes (Brick, Metal, Wood, Paper, Plastic), the model can identify an image and classify it correctly into one of these categories.
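
The original card does not include a usage snippet; the sketch below is only a hypothetical illustration of how inference could look. It assumes the repository exposes a Transformers-compatible image-classification checkpoint under the model id `SemilleroCV/resnet50-finetuned-bwmp2-224` and that `sample.jpg` is any local RGB photo; if only the ONNX export is published, ONNX Runtime would be needed instead.

```python
from transformers import pipeline
from PIL import Image

# Hypothetical usage sketch: assumes the repo provides a Transformers-compatible
# checkpoint; the id below is simply the model id of this card.
classifier = pipeline(
    "image-classification",
    model="SemilleroCV/resnet50-finetuned-bwmp2-224",
)

image = Image.open("sample.jpg")  # any RGB photo of a material surface (assumed local file)
predictions = classifier(image)   # list of {"label": ..., "score": ...} dicts
print(predictions[0])             # top class among Brick / Metal / Wood / Paper / Plastic
```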
# About ResNet50
<p align="center">
<img src="https://miro.medium.com/v2/resize:fit:1400/0*tH9evuOFqk8F41FG.png" alt="Descripción alternativa de la imagen">
</p>
ResNet50 is a CNN architecture from the ResNet (residual network) family, a line of models designed to make the training of very deep neural networks feasible.
It was developed by researchers at Microsoft Research and is known for its depth and efficiency in image-classification tasks.
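
To make the residual idea concrete, here is a minimal conceptual sketch of a residual block (a simplified two-convolution version for illustration only, not the actual three-convolution bottleneck block used in ResNet-50):

```python
import torch
import torch.nn as nn

class SimpleResidualBlock(nn.Module):
    """Conceptual sketch of a residual block: output = F(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                              # skip connection keeps the original signal
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)          # residual addition eases deep training

# Quick shape check
block = SimpleResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)    # torch.Size([1, 64, 32, 32])
```

The key point is the `out + identity` addition: the skip connection lets gradients flow around the convolutional stack, which is what allows networks as deep as ResNet-50 to be trained effectively.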
|