| column | dtype | min | max |
|---|---|---|---|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-03 00:49:08 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (549 classes) | | |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-03 00:44:12 |
| card | string | length 11 | length 1.01M |
modelId: djtar/adiba | author: djtar | last_modified: 2025-02-01T12:12:08Z | downloads: 23 | likes: 0 | library_name: transformers | tags: ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | pipeline_tag: null | createdAt: 2025-02-01T12:08:12Z | card:
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model

- **Developed by:** djtar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
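The card's tags point at the standard Unsloth workflow (4-bit base model, LoRA adapters, GGUF export). The snippet below is a minimal sketch of that workflow, not the author's actual training script: the dataset, LoRA settings, and hyperparameters are illustrative placeholders, and the exact `SFTTrainer` argument names depend on the TRL version.

```python
# Hedged sketch of an Unsloth + TRL fine-tune starting from the card's base model.
# Dataset, LoRA settings, and hyperparameters are placeholders, not the author's values.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # base model listed in the card
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder corpus with a plain "text" column; swap in the real training data.
dataset = load_dataset("imdb", split="train[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# The GGUF files implied by the card's "gguf" tag can then be produced with Unsloth's
# export helper, e.g. model.save_pretrained_gguf("model", tokenizer).
```

On recent TRL releases the `dataset_text_field` and `max_seq_length` arguments move into `trl.SFTConfig`, so treat the call above as version-dependent.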
modelId: bhavnicksm/brown-fairy-base-v0 | author: bhavnicksm | last_modified: 2025-02-01T12:11:52Z | downloads: 105 | likes: 1 | library_name: model2vec | tags: ["model2vec", "safetensors", "embeddings", "static-embeddings", "sentence-transformers", "mteb", "en", "license:mit", "model-index", "region:us"] | pipeline_tag: null | createdAt: 2025-01-30T21:43:50Z |
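This second entry is a Model2Vec static embedding model (distilled, per the card's `base_model` field, from baai/bge-base-en-v1.5), and its card consists almost entirely of an MTEB model-index. Below is a hedged usage sketch, assuming the standard `model2vec` `StaticModel` API; the input sentences are placeholders.

```python
# Minimal usage sketch for a Model2Vec static embedding model (not taken from the card).
# StaticModel.encode returns one dense vector per input string.
from model2vec import StaticModel

model = StaticModel.from_pretrained("bhavnicksm/brown-fairy-base-v0")
embeddings = model.encode([
    "Static embeddings trade accuracy for very fast, CPU-friendly inference.",
    "The MTEB scores in the model-index below quantify that trade-off.",
])
print(embeddings.shape)  # (2, embedding_dim)
```

Scores like those in the model-index (for example, the Banking77Classification accuracy of 74.2987) are the kind of numbers the MTEB benchmark harness produces for such a model. The card itself, with its YAML front matter and full per-task model-index, follows: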
---
base_model: baai/bge-base-en-v1.5
language:
- en
library_name: model2vec
license: mit
model_name: brown-fairy-base-v0
tags:
- embeddings
- static-embeddings
- sentence-transformers
- mteb
model-index:
- name: bhavnicksm/brown-fairy-base-v0
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 69.52239999999999
- type: f1
value: 63.4127
- type: f1_weighted
value: 72.48599999999999
- type: ap
value: 31.8446
- type: ap_weighted
value: 31.8446
- type: main_score
value: 69.52239999999999
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification (default)
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 68.709
- type: f1
value: 68.2583
- type: f1_weighted
value: 68.2583
- type: ap
value: 63.728899999999996
- type: ap_weighted
value: 63.728899999999996
- type: main_score
value: 68.709
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 34.014
- type: f1
value: 33.4588
- type: f1_weighted
value: 33.4588
- type: main_score
value: 34.014
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna (default)
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: ndcg_at_1
value: 20.341
- type: ndcg_at_3
value: 30.547
- type: ndcg_at_5
value: 34.963
- type: ndcg_at_10
value: 39.805
- type: ndcg_at_20
value: 42.397
- type: ndcg_at_100
value: 45.216
- type: ndcg_at_1000
value: 46.339999999999996
- type: map_at_1
value: 20.341
- type: map_at_3
value: 27.962999999999997
- type: map_at_5
value: 30.409999999999997
- type: map_at_10
value: 32.4
- type: map_at_20
value: 33.113
- type: map_at_100
value: 33.512
- type: map_at_1000
value: 33.556000000000004
- type: recall_at_1
value: 20.341
- type: recall_at_3
value: 38.051
- type: recall_at_5
value: 48.791000000000004
- type: recall_at_10
value: 63.798
- type: recall_at_20
value: 74.03999999999999
- type: recall_at_100
value: 89.118
- type: recall_at_1000
value: 97.866
- type: precision_at_1
value: 20.341
- type: precision_at_3
value: 12.684000000000001
- type: precision_at_5
value: 9.758
- type: precision_at_10
value: 6.38
- type: precision_at_20
value: 3.702
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.098
- type: mrr_at_1
value: 20.6259
- type: mrr_at_3
value: 28.058300000000003
- type: mrr_at_5
value: 30.4979
- type: mrr_at_10
value: 32.5131
- type: mrr_at_20
value: 33.222699999999996
- type: mrr_at_100
value: 33.6243
- type: mrr_at_1000
value: 33.6687
- type: nauc_ndcg_at_1_max
value: -6.208
- type: nauc_ndcg_at_1_std
value: 0.6887
- type: nauc_ndcg_at_1_diff1
value: 5.5123
- type: nauc_ndcg_at_3_max
value: -1.8608
- type: nauc_ndcg_at_3_std
value: 3.7832999999999997
- type: nauc_ndcg_at_3_diff1
value: 7.5778
- type: nauc_ndcg_at_5_max
value: 0.0929
- type: nauc_ndcg_at_5_std
value: 5.8453
- type: nauc_ndcg_at_5_diff1
value: 9.316
- type: nauc_ndcg_at_10_max
value: 0.557
- type: nauc_ndcg_at_10_std
value: 5.8692
- type: nauc_ndcg_at_10_diff1
value: 8.3828
- type: nauc_ndcg_at_20_max
value: 1.567
- type: nauc_ndcg_at_20_std
value: 8.2355
- type: nauc_ndcg_at_20_diff1
value: 9.1907
- type: nauc_ndcg_at_100_max
value: 1.0833000000000002
- type: nauc_ndcg_at_100_std
value: 8.6248
- type: nauc_ndcg_at_100_diff1
value: 9.0073
- type: nauc_ndcg_at_1000_max
value: -0.166
- type: nauc_ndcg_at_1000_std
value: 7.394100000000001
- type: nauc_ndcg_at_1000_diff1
value: 8.1955
- type: nauc_map_at_1_max
value: -6.208
- type: nauc_map_at_1_std
value: 0.6887
- type: nauc_map_at_1_diff1
value: 5.5123
- type: nauc_map_at_3_max
value: -3.0332999999999997
- type: nauc_map_at_3_std
value: 2.9010000000000002
- type: nauc_map_at_3_diff1
value: 6.8088
- type: nauc_map_at_5_max
value: -1.9215
- type: nauc_map_at_5_std
value: 4.023000000000001
- type: nauc_map_at_5_diff1
value: 7.8248999999999995
- type: nauc_map_at_10_max
value: -1.8037
- type: nauc_map_at_10_std
value: 3.9838
- type: nauc_map_at_10_diff1
value: 7.3617
- type: nauc_map_at_20_max
value: -1.5614
- type: nauc_map_at_20_std
value: 4.6065000000000005
- type: nauc_map_at_20_diff1
value: 7.5846
- type: nauc_map_at_100_max
value: -1.6330999999999998
- type: nauc_map_at_100_std
value: 4.693
- type: nauc_map_at_100_diff1
value: 7.5309
- type: nauc_map_at_1000_max
value: -1.6847999999999999
- type: nauc_map_at_1000_std
value: 4.6508
- type: nauc_map_at_1000_diff1
value: 7.5036000000000005
- type: nauc_recall_at_1_max
value: -6.208
- type: nauc_recall_at_1_std
value: 0.6887
- type: nauc_recall_at_1_diff1
value: 5.5123
- type: nauc_recall_at_3_max
value: 1.2662
- type: nauc_recall_at_3_std
value: 6.1506
- type: nauc_recall_at_3_diff1
value: 9.6919
- type: nauc_recall_at_5_max
value: 5.7511
- type: nauc_recall_at_5_std
value: 11.0652
- type: nauc_recall_at_5_diff1
value: 13.5713
- type: nauc_recall_at_10_max
value: 8.5342
- type: nauc_recall_at_10_std
value: 12.2161
- type: nauc_recall_at_10_diff1
value: 11.6188
- type: nauc_recall_at_20_max
value: 15.7488
- type: nauc_recall_at_20_std
value: 25.6755
- type: nauc_recall_at_20_diff1
value: 16.3568
- type: nauc_recall_at_100_max
value: 24.424799999999998
- type: nauc_recall_at_100_std
value: 47.6945
- type: nauc_recall_at_100_diff1
value: 22.4622
- type: nauc_recall_at_1000_max
value: 3.0951
- type: nauc_recall_at_1000_std
value: 84.10419999999999
- type: nauc_recall_at_1000_diff1
value: -2.6364
- type: nauc_precision_at_1_max
value: -6.208
- type: nauc_precision_at_1_std
value: 0.6887
- type: nauc_precision_at_1_diff1
value: 5.5123
- type: nauc_precision_at_3_max
value: 1.2662
- type: nauc_precision_at_3_std
value: 6.1506
- type: nauc_precision_at_3_diff1
value: 9.6919
- type: nauc_precision_at_5_max
value: 5.7511
- type: nauc_precision_at_5_std
value: 11.0652
- type: nauc_precision_at_5_diff1
value: 13.5713
- type: nauc_precision_at_10_max
value: 8.5342
- type: nauc_precision_at_10_std
value: 12.2161
- type: nauc_precision_at_10_diff1
value: 11.6188
- type: nauc_precision_at_20_max
value: 15.7488
- type: nauc_precision_at_20_std
value: 25.6755
- type: nauc_precision_at_20_diff1
value: 16.3568
- type: nauc_precision_at_100_max
value: 24.424799999999998
- type: nauc_precision_at_100_std
value: 47.6945
- type: nauc_precision_at_100_diff1
value: 22.4622
- type: nauc_precision_at_1000_max
value: 3.0951
- type: nauc_precision_at_1000_std
value: 84.10419999999999
- type: nauc_precision_at_1000_diff1
value: -2.6364
- type: nauc_mrr_at_1_max
value: -5.611800000000001
- type: nauc_mrr_at_1_std
value: 0.2596
- type: nauc_mrr_at_1_diff1
value: 4.5101
- type: nauc_mrr_at_3_max
value: -3.1917
- type: nauc_mrr_at_3_std
value: 2.7559
- type: nauc_mrr_at_3_diff1
value: 5.756
- type: nauc_mrr_at_5_max
value: -2.1292999999999997
- type: nauc_mrr_at_5_std
value: 3.7653
- type: nauc_mrr_at_5_diff1
value: 6.7995
- type: nauc_mrr_at_10_max
value: -1.8915000000000002
- type: nauc_mrr_at_10_std
value: 3.778
- type: nauc_mrr_at_10_diff1
value: 6.4253
- type: nauc_mrr_at_20_max
value: -1.6753
- type: nauc_mrr_at_20_std
value: 4.389
- type: nauc_mrr_at_20_diff1
value: 6.6081
- type: nauc_mrr_at_100_max
value: -1.7302000000000002
- type: nauc_mrr_at_100_std
value: 4.4796000000000005
- type: nauc_mrr_at_100_diff1
value: 6.563199999999999
- type: nauc_mrr_at_1000_max
value: -1.7819000000000003
- type: nauc_mrr_at_1000_std
value: 4.4372
- type: nauc_mrr_at_1000_diff1
value: 6.5346
- type: main_score
value: 39.805
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 30.9023
- type: v_measure_std
value: 14.6095
- type: main_score
value: 30.9023
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S (default)
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 19.1012
- type: v_measure_std
value: 15.511800000000001
- type: main_score
value: 19.1012
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions (default)
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 54.0474
- type: mrr
value: 67.00150000000001
- type: nAUC_map_max
value: 14.266100000000002
- type: nAUC_map_std
value: 11.7906
- type: nAUC_map_diff1
value: 7.5044
- type: nAUC_mrr_max
value: 20.1721
- type: nAUC_mrr_std
value: 13.1225
- type: nAUC_mrr_diff1
value: 14.3512
- type: main_score
value: 54.0474
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: pearson
value: 73.3465
- type: spearman
value: 69.6932
- type: cosine_pearson
value: 73.3465
- type: cosine_spearman
value: 69.6932
- type: manhattan_pearson
value: 54.115899999999996
- type: manhattan_spearman
value: 54.1759
- type: euclidean_pearson
value: 54.2153
- type: euclidean_spearman
value: 54.0488
- type: main_score
value: 69.6932
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification (default)
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 74.2987
- type: f1
value: 73.85119999999999
- type: f1_weighted
value: 73.85119999999999
- type: main_score
value: 74.2987
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P (default)
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 29.8415
- type: v_measure_std
value: 0.7605
- type: main_score
value: 29.8415
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S (default)
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 16.4917
- type: v_measure_std
value: 1.2364
- type: main_score
value: 16.4917
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackRetrieval (default)
revision: '1'
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: ndcg_at_10
value: 21.9561
- type: main_score
value: 21.9561
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER (default)
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: ndcg_at_1
value: 18.826999999999998
- type: ndcg_at_3
value: 16.482
- type: ndcg_at_5
value: 17.9
- type: ndcg_at_10
value: 20.948
- type: ndcg_at_20
value: 23.665
- type: ndcg_at_100
value: 28.192
- type: ndcg_at_1000
value: 31.846999999999998
- type: map_at_1
value: 8.221
- type: map_at_3
value: 11.72
- type: map_at_5
value: 12.844
- type: map_at_10
value: 14.17
- type: map_at_20
value: 15.043000000000001
- type: map_at_100
value: 15.842
- type: map_at_1000
value: 16.04
- type: recall_at_1
value: 8.221
- type: recall_at_3
value: 15.214
- type: recall_at_5
value: 19.185
- type: recall_at_10
value: 26.14
- type: recall_at_20
value: 33.931
- type: recall_at_100
value: 51.429
- type: recall_at_1000
value: 72.269
- type: precision_at_1
value: 18.826999999999998
- type: precision_at_3
value: 12.4
- type: precision_at_5
value: 9.707
- type: precision_at_10
value: 6.84
- type: precision_at_20
value: 4.557
- type: precision_at_100
value: 1.461
- type: precision_at_1000
value: 0.212
- type: mrr_at_1
value: 18.8274
- type: mrr_at_3
value: 25.2226
- type: mrr_at_5
value: 27.163999999999998
- type: mrr_at_10
value: 28.6116
- type: mrr_at_20
value: 29.3082
- type: mrr_at_100
value: 29.7302
- type: mrr_at_1000
value: 29.786600000000004
- type: nauc_ndcg_at_1_max
value: 23.3019
- type: nauc_ndcg_at_1_std
value: 14.4153
- type: nauc_ndcg_at_1_diff1
value: 21.8879
- type: nauc_ndcg_at_3_max
value: 22.2746
- type: nauc_ndcg_at_3_std
value: 15.487300000000001
- type: nauc_ndcg_at_3_diff1
value: 17.8275
- type: nauc_ndcg_at_5_max
value: 23.0993
- type: nauc_ndcg_at_5_std
value: 16.4617
- type: nauc_ndcg_at_5_diff1
value: 16.7855
- type: nauc_ndcg_at_10_max
value: 24.7783
- type: nauc_ndcg_at_10_std
value: 20.1484
- type: nauc_ndcg_at_10_diff1
value: 17.0753
- type: nauc_ndcg_at_20_max
value: 26.1465
- type: nauc_ndcg_at_20_std
value: 22.3842
- type: nauc_ndcg_at_20_diff1
value: 16.777900000000002
- type: nauc_ndcg_at_100_max
value: 27.703100000000003
- type: nauc_ndcg_at_100_std
value: 25.3223
- type: nauc_ndcg_at_100_diff1
value: 16.1821
- type: nauc_ndcg_at_1000_max
value: 28.778599999999997
- type: nauc_ndcg_at_1000_std
value: 27.9877
- type: nauc_ndcg_at_1000_diff1
value: 16.223499999999998
- type: nauc_map_at_1_max
value: 22.4083
- type: nauc_map_at_1_std
value: 9.546000000000001
- type: nauc_map_at_1_diff1
value: 29.008499999999998
- type: nauc_map_at_3_max
value: 22.0196
- type: nauc_map_at_3_std
value: 11.7774
- type: nauc_map_at_3_diff1
value: 21.7038
- type: nauc_map_at_5_max
value: 22.7222
- type: nauc_map_at_5_std
value: 12.8126
- type: nauc_map_at_5_diff1
value: 20.288
- type: nauc_map_at_10_max
value: 23.566200000000002
- type: nauc_map_at_10_std
value: 14.8877
- type: nauc_map_at_10_diff1
value: 19.9221
- type: nauc_map_at_20_max
value: 24.1809
- type: nauc_map_at_20_std
value: 15.9395
- type: nauc_map_at_20_diff1
value: 19.6606
- type: nauc_map_at_100_max
value: 24.7213
- type: nauc_map_at_100_std
value: 16.8474
- type: nauc_map_at_100_diff1
value: 19.5227
- type: nauc_map_at_1000_max
value: 24.8168
- type: nauc_map_at_1000_std
value: 17.0802
- type: nauc_map_at_1000_diff1
value: 19.496199999999998
- type: nauc_recall_at_1_max
value: 22.4083
- type: nauc_recall_at_1_std
value: 9.546000000000001
- type: nauc_recall_at_1_diff1
value: 29.008499999999998
- type: nauc_recall_at_3_max
value: 19.4585
- type: nauc_recall_at_3_std
value: 14.3753
- type: nauc_recall_at_3_diff1
value: 15.7
- type: nauc_recall_at_5_max
value: 20.5273
- type: nauc_recall_at_5_std
value: 16.2058
- type: nauc_recall_at_5_diff1
value: 12.1747
- type: nauc_recall_at_10_max
value: 22.6961
- type: nauc_recall_at_10_std
value: 22.400000000000002
- type: nauc_recall_at_10_diff1
value: 13.2301
- type: nauc_recall_at_20_max
value: 23.9165
- type: nauc_recall_at_20_std
value: 25.392300000000002
- type: nauc_recall_at_20_diff1
value: 11.8797
- type: nauc_recall_at_100_max
value: 26.6031
- type: nauc_recall_at_100_std
value: 31.7759
- type: nauc_recall_at_100_diff1
value: 8.9369
- type: nauc_recall_at_1000_max
value: 32.4917
- type: nauc_recall_at_1000_std
value: 47.7736
- type: nauc_recall_at_1000_diff1
value: 9.5485
- type: nauc_precision_at_1_max
value: 23.3019
- type: nauc_precision_at_1_std
value: 14.4153
- type: nauc_precision_at_1_diff1
value: 21.8879
- type: nauc_precision_at_3_max
value: 23.9748
- type: nauc_precision_at_3_std
value: 21.5474
- type: nauc_precision_at_3_diff1
value: 10.6452
- type: nauc_precision_at_5_max
value: 24.9076
- type: nauc_precision_at_5_std
value: 23.9797
- type: nauc_precision_at_5_diff1
value: 7.1156999999999995
- type: nauc_precision_at_10_max
value: 26.721
- type: nauc_precision_at_10_std
value: 30.1734
- type: nauc_precision_at_10_diff1
value: 7.0459
- type: nauc_precision_at_20_max
value: 27.9059
- type: nauc_precision_at_20_std
value: 33.1933
- type: nauc_precision_at_20_diff1
value: 5.7082
- type: nauc_precision_at_100_max
value: 25.7203
- type: nauc_precision_at_100_std
value: 35.108
- type: nauc_precision_at_100_diff1
value: 2.2525
- type: nauc_precision_at_1000_max
value: 23.6155
- type: nauc_precision_at_1000_std
value: 39.4567
- type: nauc_precision_at_1000_diff1
value: -1.2073
- type: nauc_mrr_at_1_max
value: 23.3019
- type: nauc_mrr_at_1_std
value: 14.4153
- type: nauc_mrr_at_1_diff1
value: 21.8879
- type: nauc_mrr_at_3_max
value: 23.340700000000002
- type: nauc_mrr_at_3_std
value: 18.1166
- type: nauc_mrr_at_3_diff1
value: 16.4821
- type: nauc_mrr_at_5_max
value: 23.5278
- type: nauc_mrr_at_5_std
value: 19.023200000000003
- type: nauc_mrr_at_5_diff1
value: 15.7295
- type: nauc_mrr_at_10_max
value: 24.199
- type: nauc_mrr_at_10_std
value: 20.218600000000002
- type: nauc_mrr_at_10_diff1
value: 16.173199999999998
- type: nauc_mrr_at_20_max
value: 24.4813
- type: nauc_mrr_at_20_std
value: 20.5169
- type: nauc_mrr_at_20_diff1
value: 16.2274
- type: nauc_mrr_at_100_max
value: 24.378800000000002
- type: nauc_mrr_at_100_std
value: 20.4327
- type: nauc_mrr_at_100_diff1
value: 16.220499999999998
- type: nauc_mrr_at_1000_max
value: 24.3802
- type: nauc_mrr_at_1000_std
value: 20.4123
- type: nauc_mrr_at_1000_diff1
value: 16.2191
- type: main_score
value: 20.948
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: ndcg_at_1
value: 30.375000000000004
- type: ndcg_at_3
value: 26.590999999999998
- type: ndcg_at_5
value: 24.586
- type: ndcg_at_10
value: 23.246
- type: ndcg_at_20
value: 23.025000000000002
- type: ndcg_at_100
value: 26.994
- type: ndcg_at_1000
value: 33.591
- type: map_at_1
value: 4.104
- type: map_at_3
value: 6.869
- type: map_at_5
value: 7.949000000000001
- type: map_at_10
value: 9.511
- type: map_at_20
value: 10.959000000000001
- type: map_at_100
value: 13.444999999999999
- type: map_at_1000
value: 14.482999999999999
- type: recall_at_1
value: 4.104
- type: recall_at_3
value: 8.394
- type: recall_at_5
value: 10.453
- type: recall_at_10
value: 14.413
- type: recall_at_20
value: 19.421
- type: recall_at_100
value: 34.134
- type: recall_at_1000
value: 56.74
- type: precision_at_1
value: 43.0
- type: precision_at_3
value: 32.25
- type: precision_at_5
value: 26.650000000000002
- type: precision_at_10
value: 20.575
- type: precision_at_20
value: 15.587000000000002
- type: precision_at_100
value: 6.784999999999999
- type: precision_at_1000
value: 1.465
- type: mrr_at_1
value: 43.0
- type: mrr_at_3
value: 50.416700000000006
- type: mrr_at_5
value: 51.554199999999994
- type: mrr_at_10
value: 52.5436
- type: mrr_at_20
value: 53.0818
- type: mrr_at_100
value: 53.3559
- type: mrr_at_1000
value: 53.3775
- type: nauc_ndcg_at_1_max
value: 32.3654
- type: nauc_ndcg_at_1_std
value: 10.134799999999998
- type: nauc_ndcg_at_1_diff1
value: 30.7456
- type: nauc_ndcg_at_3_max
value: 35.7454
- type: nauc_ndcg_at_3_std
value: 11.2598
- type: nauc_ndcg_at_3_diff1
value: 28.8957
- type: nauc_ndcg_at_5_max
value: 37.3094
- type: nauc_ndcg_at_5_std
value: 12.0986
- type: nauc_ndcg_at_5_diff1
value: 30.1683
- type: nauc_ndcg_at_10_max
value: 37.8415
- type: nauc_ndcg_at_10_std
value: 13.6007
- type: nauc_ndcg_at_10_diff1
value: 27.7172
- type: nauc_ndcg_at_20_max
value: 36.201899999999995
- type: nauc_ndcg_at_20_std
value: 14.508399999999998
- type: nauc_ndcg_at_20_diff1
value: 25.6504
- type: nauc_ndcg_at_100_max
value: 37.8181
- type: nauc_ndcg_at_100_std
value: 22.2808
- type: nauc_ndcg_at_100_diff1
value: 22.156100000000002
- type: nauc_ndcg_at_1000_max
value: 43.2943
- type: nauc_ndcg_at_1000_std
value: 29.2433
- type: nauc_ndcg_at_1000_diff1
value: 24.593
- type: nauc_map_at_1_max
value: 3.9762
- type: nauc_map_at_1_std
value: 2.929
- type: nauc_map_at_1_diff1
value: 21.787699999999997
- type: nauc_map_at_3_max
value: 7.2749
- type: nauc_map_at_3_std
value: 4.1128
- type: nauc_map_at_3_diff1
value: 19.4785
- type: nauc_map_at_5_max
value: 11.6105
- type: nauc_map_at_5_std
value: 3.9446000000000003
- type: nauc_map_at_5_diff1
value: 21.250700000000002
- type: nauc_map_at_10_max
value: 17.3344
- type: nauc_map_at_10_std
value: 6.990200000000001
- type: nauc_map_at_10_diff1
value: 20.962
- type: nauc_map_at_20_max
value: 23.447200000000002
- type: nauc_map_at_20_std
value: 11.8169
- type: nauc_map_at_20_diff1
value: 21.0181
- type: nauc_map_at_100_max
value: 32.9328
- type: nauc_map_at_100_std
value: 21.3233
- type: nauc_map_at_100_diff1
value: 19.3584
- type: nauc_map_at_1000_max
value: 34.9988
- type: nauc_map_at_1000_std
value: 23.3726
- type: nauc_map_at_1000_diff1
value: 19.9623
- type: nauc_recall_at_1_max
value: 3.9762
- type: nauc_recall_at_1_std
value: 2.929
- type: nauc_recall_at_1_diff1
value: 21.787699999999997
- type: nauc_recall_at_3_max
value: 2.7925999999999997
- type: nauc_recall_at_3_std
value: -2.4797
- type: nauc_recall_at_3_diff1
value: 13.525
- type: nauc_recall_at_5_max
value: 6.8843000000000005
- type: nauc_recall_at_5_std
value: -3.7343
- type: nauc_recall_at_5_diff1
value: 17.638499999999997
- type: nauc_recall_at_10_max
value: 11.6201
- type: nauc_recall_at_10_std
value: -1.0245
- type: nauc_recall_at_10_diff1
value: 15.4671
- type: nauc_recall_at_20_max
value: 15.815999999999999
- type: nauc_recall_at_20_std
value: 3.6186999999999996
- type: nauc_recall_at_20_diff1
value: 15.407000000000002
- type: nauc_recall_at_100_max
value: 24.712
- type: nauc_recall_at_100_std
value: 22.0841
- type: nauc_recall_at_100_diff1
value: 10.1828
- type: nauc_recall_at_1000_max
value: 33.821
- type: nauc_recall_at_1000_std
value: 36.807
- type: nauc_recall_at_1000_diff1
value: 12.8396
- type: nauc_precision_at_1_max
value: 39.2878
- type: nauc_precision_at_1_std
value: 15.6774
- type: nauc_precision_at_1_diff1
value: 31.384
- type: nauc_precision_at_3_max
value: 43.498
- type: nauc_precision_at_3_std
value: 17.592299999999998
- type: nauc_precision_at_3_diff1
value: 25.154799999999998
- type: nauc_precision_at_5_max
value: 47.632600000000004
- type: nauc_precision_at_5_std
value: 19.6694
- type: nauc_precision_at_5_diff1
value: 26.762399999999996
- type: nauc_precision_at_10_max
value: 50.91139999999999
- type: nauc_precision_at_10_std
value: 23.6363
- type: nauc_precision_at_10_diff1
value: 23.097
- type: nauc_precision_at_20_max
value: 52.53489999999999
- type: nauc_precision_at_20_std
value: 28.8839
- type: nauc_precision_at_20_diff1
value: 18.9418
- type: nauc_precision_at_100_max
value: 48.79
- type: nauc_precision_at_100_std
value: 31.642500000000002
- type: nauc_precision_at_100_diff1
value: 13.646700000000001
- type: nauc_precision_at_1000_max
value: 27.015099999999997
- type: nauc_precision_at_1000_std
value: 13.613900000000001
- type: nauc_precision_at_1000_diff1
value: 12.138300000000001
- type: nauc_mrr_at_1_max
value: 39.2878
- type: nauc_mrr_at_1_std
value: 15.6774
- type: nauc_mrr_at_1_diff1
value: 31.384
- type: nauc_mrr_at_3_max
value: 41.747299999999996
- type: nauc_mrr_at_3_std
value: 14.7682
- type: nauc_mrr_at_3_diff1
value: 29.8219
- type: nauc_mrr_at_5_max
value: 42.408699999999996
- type: nauc_mrr_at_5_std
value: 14.769099999999998
- type: nauc_mrr_at_5_diff1
value: 31.1068
- type: nauc_mrr_at_10_max
value: 42.571999999999996
- type: nauc_mrr_at_10_std
value: 14.8256
- type: nauc_mrr_at_10_diff1
value: 31.156299999999998
- type: nauc_mrr_at_20_max
value: 42.4832
- type: nauc_mrr_at_20_std
value: 14.7993
- type: nauc_mrr_at_20_diff1
value: 31.260700000000003
- type: nauc_mrr_at_100_max
value: 42.5018
- type: nauc_mrr_at_100_std
value: 14.9009
- type: nauc_mrr_at_100_diff1
value: 31.2395
- type: nauc_mrr_at_1000_max
value: 42.4996
- type: nauc_mrr_at_1000_std
value: 14.9098
- type: nauc_mrr_at_1000_diff1
value: 31.230400000000003
- type: main_score
value: 23.246
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 45.68
- type: f1
value: 43.1207
- type: f1_weighted
value: 48.0349
- type: main_score
value: 45.68
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: ndcg_at_1
value: 16.742
- type: ndcg_at_3
value: 23.316
- type: ndcg_at_5
value: 25.738
- type: ndcg_at_10
value: 28.68
- type: ndcg_at_20
value: 30.959999999999997
- type: ndcg_at_100
value: 34.037
- type: ndcg_at_1000
value: 36.004999999999995
- type: map_at_1
value: 15.797
- type: map_at_3
value: 21.209
- type: map_at_5
value: 22.547
- type: map_at_10
value: 23.762
- type: map_at_20
value: 24.401
- type: map_at_100
value: 24.83
- type: map_at_1000
value: 24.901
- type: recall_at_1
value: 15.797
- type: recall_at_3
value: 28.233000000000004
- type: recall_at_5
value: 33.997
- type: recall_at_10
value: 42.888
- type: recall_at_20
value: 51.635
- type: recall_at_100
value: 67.801
- type: recall_at_1000
value: 82.998
- type: precision_at_1
value: 16.742
- type: precision_at_3
value: 10.096
- type: precision_at_5
value: 7.335999999999999
- type: precision_at_10
value: 4.65
- type: precision_at_20
value: 2.817
- type: precision_at_100
value: 0.748
- type: precision_at_1000
value: 0.093
- type: mrr_at_1
value: 16.7417
- type: mrr_at_3
value: 22.4122
- type: mrr_at_5
value: 23.8374
- type: mrr_at_10
value: 25.101000000000003
- type: mrr_at_20
value: 25.739800000000002
- type: mrr_at_100
value: 26.164199999999997
- type: mrr_at_1000
value: 26.227800000000002
- type: nauc_ndcg_at_1_max
value: 13.991500000000002
- type: nauc_ndcg_at_1_std
value: -25.4382
- type: nauc_ndcg_at_1_diff1
value: 21.2751
- type: nauc_ndcg_at_3_max
value: 15.4019
- type: nauc_ndcg_at_3_std
value: -25.9724
- type: nauc_ndcg_at_3_diff1
value: 16.3365
- type: nauc_ndcg_at_5_max
value: 16.4606
- type: nauc_ndcg_at_5_std
value: -26.063599999999997
- type: nauc_ndcg_at_5_diff1
value: 15.334900000000001
- type: nauc_ndcg_at_10_max
value: 17.1297
- type: nauc_ndcg_at_10_std
value: -26.709
- type: nauc_ndcg_at_10_diff1
value: 14.072799999999999
- type: nauc_ndcg_at_20_max
value: 18.0756
- type: nauc_ndcg_at_20_std
value: -25.849899999999998
- type: nauc_ndcg_at_20_diff1
value: 13.3475
- type: nauc_ndcg_at_100_max
value: 18.5017
- type: nauc_ndcg_at_100_std
value: -25.1975
- type: nauc_ndcg_at_100_diff1
value: 13.128200000000001
- type: nauc_ndcg_at_1000_max
value: 18.570500000000003
- type: nauc_ndcg_at_1000_std
value: -24.5199
- type: nauc_ndcg_at_1000_diff1
value: 13.608600000000001
- type: nauc_map_at_1_max
value: 14.4553
- type: nauc_map_at_1_std
value: -25.291999999999998
- type: nauc_map_at_1_diff1
value: 21.4966
- type: nauc_map_at_3_max
value: 15.1199
- type: nauc_map_at_3_std
value: -25.8608
- type: nauc_map_at_3_diff1
value: 17.5
- type: nauc_map_at_5_max
value: 15.748599999999998
- type: nauc_map_at_5_std
value: -25.928
- type: nauc_map_at_5_diff1
value: 16.8883
- type: nauc_map_at_10_max
value: 16.036
- type: nauc_map_at_10_std
value: -26.2116
- type: nauc_map_at_10_diff1
value: 16.335
- type: nauc_map_at_20_max
value: 16.305500000000002
- type: nauc_map_at_20_std
value: -25.965500000000002
- type: nauc_map_at_20_diff1
value: 16.1305
- type: nauc_map_at_100_max
value: 16.380200000000002
- type: nauc_map_at_100_std
value: -25.870199999999997
- type: nauc_map_at_100_diff1
value: 16.1253
- type: nauc_map_at_1000_max
value: 16.3924
- type: nauc_map_at_1000_std
value: -25.838499999999996
- type: nauc_map_at_1000_diff1
value: 16.1408
- type: nauc_recall_at_1_max
value: 14.4553
- type: nauc_recall_at_1_std
value: -25.291999999999998
- type: nauc_recall_at_1_diff1
value: 21.4966
- type: nauc_recall_at_3_max
value: 16.1074
- type: nauc_recall_at_3_std
value: -25.916099999999997
- type: nauc_recall_at_3_diff1
value: 13.5176
- type: nauc_recall_at_5_max
value: 18.0189
- type: nauc_recall_at_5_std
value: -25.795299999999997
- type: nauc_recall_at_5_diff1
value: 11.3842
- type: nauc_recall_at_10_max
value: 19.4035
- type: nauc_recall_at_10_std
value: -27.2015
- type: nauc_recall_at_10_diff1
value: 7.9085
- type: nauc_recall_at_20_max
value: 22.5578
- type: nauc_recall_at_20_std
value: -24.1674
- type: nauc_recall_at_20_diff1
value: 5.0956
- type: nauc_recall_at_100_max
value: 25.2855
- type: nauc_recall_at_100_std
value: -19.9378
- type: nauc_recall_at_100_diff1
value: 1.3199
- type: nauc_recall_at_1000_max
value: 29.253400000000003
- type: nauc_recall_at_1000_std
value: -8.519599999999999
- type: nauc_recall_at_1000_diff1
value: 0.1057
- type: nauc_precision_at_1_max
value: 13.991500000000002
- type: nauc_precision_at_1_std
value: -25.4382
- type: nauc_precision_at_1_diff1
value: 21.2751
- type: nauc_precision_at_3_max
value: 15.758700000000001
- type: nauc_precision_at_3_std
value: -26.3494
- type: nauc_precision_at_3_diff1
value: 13.6081
- type: nauc_precision_at_5_max
value: 17.851300000000002
- type: nauc_precision_at_5_std
value: -26.3818
- type: nauc_precision_at_5_diff1
value: 11.4331
- type: nauc_precision_at_10_max
value: 19.5748
- type: nauc_precision_at_10_std
value: -27.594400000000004
- type: nauc_precision_at_10_diff1
value: 8.0539
- type: nauc_precision_at_20_max
value: 22.453799999999998
- type: nauc_precision_at_20_std
value: -23.707800000000002
- type: nauc_precision_at_20_diff1
value: 5.2
- type: nauc_precision_at_100_max
value: 24.1067
- type: nauc_precision_at_100_std
value: -16.6068
- type: nauc_precision_at_100_diff1
value: 1.1200999999999999
- type: nauc_precision_at_1000_max
value: 22.516
- type: nauc_precision_at_1000_std
value: -0.621
- type: nauc_precision_at_1000_diff1
value: -0.26749999999999996
- type: nauc_mrr_at_1_max
value: 13.991500000000002
- type: nauc_mrr_at_1_std
value: -25.4382
- type: nauc_mrr_at_1_diff1
value: 21.2751
- type: nauc_mrr_at_3_max
value: 14.95
- type: nauc_mrr_at_3_std
value: -25.885
- type: nauc_mrr_at_3_diff1
value: 17.3215
- type: nauc_mrr_at_5_max
value: 15.5568
- type: nauc_mrr_at_5_std
value: -25.963
- type: nauc_mrr_at_5_diff1
value: 16.699
- type: nauc_mrr_at_10_max
value: 15.901299999999999
- type: nauc_mrr_at_10_std
value: -26.2471
- type: nauc_mrr_at_10_diff1
value: 16.189899999999998
- type: nauc_mrr_at_20_max
value: 16.1798
- type: nauc_mrr_at_20_std
value: -25.989600000000003
- type: nauc_mrr_at_20_diff1
value: 15.984499999999999
- type: nauc_mrr_at_100_max
value: 16.2602
- type: nauc_mrr_at_100_std
value: -25.9187
- type: nauc_mrr_at_100_diff1
value: 16.0136
- type: nauc_mrr_at_1000_max
value: 16.2577
- type: nauc_mrr_at_1000_std
value: -25.9039
- type: nauc_mrr_at_1000_diff1
value: 16.0318
- type: main_score
value: 28.68
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: ndcg_at_1
value: 14.198
- type: ndcg_at_3
value: 14.018
- type: ndcg_at_5
value: 14.857000000000001
- type: ndcg_at_10
value: 16.509999999999998
- type: ndcg_at_20
value: 18.499
- type: ndcg_at_100
value: 22.658
- type: ndcg_at_1000
value: 26.894000000000002
- type: map_at_1
value: 7.061000000000001
- type: map_at_3
value: 10.151
- type: map_at_5
value: 11.0
- type: map_at_10
value: 11.883000000000001
- type: map_at_20
value: 12.5
- type: map_at_100
value: 13.154
- type: map_at_1000
value: 13.343
- type: recall_at_1
value: 7.061000000000001
- type: recall_at_3
value: 13.339
- type: recall_at_5
value: 16.689999999999998
- type: recall_at_10
value: 21.435000000000002
- type: recall_at_20
value: 27.779999999999998
- type: recall_at_100
value: 45.381
- type: recall_at_1000
value: 71.61699999999999
- type: precision_at_1
value: 14.198
- type: precision_at_3
value: 9.568
- type: precision_at_5
value: 7.099
- type: precision_at_10
value: 4.7379999999999995
- type: precision_at_20
value: 3.1329999999999996
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.181
- type: mrr_at_1
value: 14.1975
- type: mrr_at_3
value: 18.5185
- type: mrr_at_5
value: 19.8302
- type: mrr_at_10
value: 20.6685
- type: mrr_at_20
value: 21.273
- type: mrr_at_100
value: 21.8076
- type: mrr_at_1000
value: 21.9063
- type: nauc_ndcg_at_1_max
value: 12.2117
- type: nauc_ndcg_at_1_std
value: -10.7059
- type: nauc_ndcg_at_1_diff1
value: 27.4415
- type: nauc_ndcg_at_3_max
value: 12.4823
- type: nauc_ndcg_at_3_std
value: -10.252500000000001
- type: nauc_ndcg_at_3_diff1
value: 20.6834
- type: nauc_ndcg_at_5_max
value: 10.3316
- type: nauc_ndcg_at_5_std
value: -9.8648
- type: nauc_ndcg_at_5_diff1
value: 19.6879
- type: nauc_ndcg_at_10_max
value: 9.2057
- type: nauc_ndcg_at_10_std
value: -9.3284
- type: nauc_ndcg_at_10_diff1
value: 19.5253
- type: nauc_ndcg_at_20_max
value: 8.3092
- type: nauc_ndcg_at_20_std
value: -6.686400000000001
- type: nauc_ndcg_at_20_diff1
value: 19.0031
- type: nauc_ndcg_at_100_max
value: 9.321200000000001
- type: nauc_ndcg_at_100_std
value: -4.4703
- type: nauc_ndcg_at_100_diff1
value: 19.2995
- type: nauc_ndcg_at_1000_max
value: 11.754199999999999
- type: nauc_ndcg_at_1000_std
value: -2.6593999999999998
- type: nauc_ndcg_at_1000_diff1
value: 20.3056
- type: nauc_map_at_1_max
value: 17.227899999999998
- type: nauc_map_at_1_std
value: -6.8508
- type: nauc_map_at_1_diff1
value: 25.9133
- type: nauc_map_at_3_max
value: 13.716999999999999
- type: nauc_map_at_3_std
value: -8.86
- type: nauc_map_at_3_diff1
value: 21.0714
- type: nauc_map_at_5_max
value: 12.146700000000001
- type: nauc_map_at_5_std
value: -8.909400000000002
- type: nauc_map_at_5_diff1
value: 20.3887
- type: nauc_map_at_10_max
value: 11.417
- type: nauc_map_at_10_std
value: -8.9141
- type: nauc_map_at_10_diff1
value: 20.7165
- type: nauc_map_at_20_max
value: 11.0988
- type: nauc_map_at_20_std
value: -7.9453
- type: nauc_map_at_20_diff1
value: 20.7809
- type: nauc_map_at_100_max
value: 11.1694
- type: nauc_map_at_100_std
value: -7.4639
- type: nauc_map_at_100_diff1
value: 20.9252
- type: nauc_map_at_1000_max
value: 11.3405
- type: nauc_map_at_1000_std
value: -7.3102
- type: nauc_map_at_1000_diff1
value: 20.9959
- type: nauc_recall_at_1_max
value: 17.227899999999998
- type: nauc_recall_at_1_std
value: -6.8508
- type: nauc_recall_at_1_diff1
value: 25.9133
- type: nauc_recall_at_3_max
value: 11.2722
- type: nauc_recall_at_3_std
value: -9.4755
- type: nauc_recall_at_3_diff1
value: 15.1741
- type: nauc_recall_at_5_max
value: 6.7860000000000005
- type: nauc_recall_at_5_std
value: -8.9743
- type: nauc_recall_at_5_diff1
value: 14.091999999999999
- type: nauc_recall_at_10_max
value: 4.5781
- type: nauc_recall_at_10_std
value: -8.4828
- type: nauc_recall_at_10_diff1
value: 13.1033
- type: nauc_recall_at_20_max
value: 3.0408999999999997
- type: nauc_recall_at_20_std
value: -1.0319
- type: nauc_recall_at_20_diff1
value: 11.2412
- type: nauc_recall_at_100_max
value: 4.6371
- type: nauc_recall_at_100_std
value: 5.6984
- type: nauc_recall_at_100_diff1
value: 10.648399999999999
- type: nauc_recall_at_1000_max
value: 14.4284
- type: nauc_recall_at_1000_std
value: 20.471
- type: nauc_recall_at_1000_diff1
value: 13.6603
- type: nauc_precision_at_1_max
value: 12.2117
- type: nauc_precision_at_1_std
value: -10.7059
- type: nauc_precision_at_1_diff1
value: 27.4415
- type: nauc_precision_at_3_max
value: 8.3303
- type: nauc_precision_at_3_std
value: -12.3434
- type: nauc_precision_at_3_diff1
value: 20.3774
- type: nauc_precision_at_5_max
value: 5.46
- type: nauc_precision_at_5_std
value: -10.6964
- type: nauc_precision_at_5_diff1
value: 19.3914
- type: nauc_precision_at_10_max
value: 5.8885
- type: nauc_precision_at_10_std
value: -9.0149
- type: nauc_precision_at_10_diff1
value: 21.8392
- type: nauc_precision_at_20_max
value: 3.8181
- type: nauc_precision_at_20_std
value: -4.2505
- type: nauc_precision_at_20_diff1
value: 19.9848
- type: nauc_precision_at_100_max
value: 9.6538
- type: nauc_precision_at_100_std
value: 1.8809
- type: nauc_precision_at_100_diff1
value: 18.6529
- type: nauc_precision_at_1000_max
value: 15.5018
- type: nauc_precision_at_1000_std
value: 5.4286
- type: nauc_precision_at_1000_diff1
value: 13.2946
- type: nauc_mrr_at_1_max
value: 12.2117
- type: nauc_mrr_at_1_std
value: -10.7059
- type: nauc_mrr_at_1_diff1
value: 27.4415
- type: nauc_mrr_at_3_max
value: 10.5481
- type: nauc_mrr_at_3_std
value: -10.7069
- type: nauc_mrr_at_3_diff1
value: 22.1345
- type: nauc_mrr_at_5_max
value: 9.463000000000001
- type: nauc_mrr_at_5_std
value: -10.5558
- type: nauc_mrr_at_5_diff1
value: 21.8622
- type: nauc_mrr_at_10_max
value: 9.6679
- type: nauc_mrr_at_10_std
value: -10.399600000000001
- type: nauc_mrr_at_10_diff1
value: 21.7847
- type: nauc_mrr_at_20_max
value: 9.422600000000001
- type: nauc_mrr_at_20_std
value: -9.8865
- type: nauc_mrr_at_20_diff1
value: 21.4703
- type: nauc_mrr_at_100_max
value: 9.640500000000001
- type: nauc_mrr_at_100_std
value: -9.8299
- type: nauc_mrr_at_100_diff1
value: 21.5227
- type: nauc_mrr_at_1000_max
value: 9.6734
- type: nauc_mrr_at_1000_std
value: -9.8079
- type: nauc_mrr_at_1000_diff1
value: 21.5451
- type: main_score
value: 16.509999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: ndcg_at_1
value: 40.297
- type: ndcg_at_3
value: 31.719
- type: ndcg_at_5
value: 33.744
- type: ndcg_at_10
value: 35.72
- type: ndcg_at_20
value: 37.266
- type: ndcg_at_100
value: 39.778000000000006
- type: ndcg_at_1000
value: 42.056
- type: map_at_1
value: 20.149
- type: map_at_3
value: 25.899
- type: map_at_5
value: 27.157999999999998
- type: map_at_10
value: 28.105000000000004
- type: map_at_20
value: 28.586
- type: map_at_100
value: 29.000999999999998
- type: map_at_1000
value: 29.098000000000003
- type: recall_at_1
value: 20.149
- type: recall_at_3
value: 29.932
- type: recall_at_5
value: 33.93
- type: recall_at_10
value: 38.92
- type: recall_at_20
value: 43.903
- type: recall_at_100
value: 55.057
- type: recall_at_1000
value: 70.27
- type: precision_at_1
value: 40.297
- type: precision_at_3
value: 19.955000000000002
- type: precision_at_5
value: 13.572000000000001
- type: precision_at_10
value: 7.784000000000001
- type: precision_at_20
value: 4.390000000000001
- type: precision_at_100
value: 1.101
- type: precision_at_1000
value: 0.14100000000000001
- type: mrr_at_1
value: 40.2971
- type: mrr_at_3
value: 46.041
- type: mrr_at_5
value: 47.199600000000004
- type: mrr_at_10
value: 47.9631
- type: mrr_at_20
value: 48.3871
- type: mrr_at_100
value: 48.661500000000004
- type: mrr_at_1000
value: 48.707
- type: nauc_ndcg_at_1_max
value: 27.8706
- type: nauc_ndcg_at_1_std
value: -8.272300000000001
- type: nauc_ndcg_at_1_diff1
value: 57.8385
- type: nauc_ndcg_at_3_max
value: 27.852500000000003
- type: nauc_ndcg_at_3_std
value: -6.4216
- type: nauc_ndcg_at_3_diff1
value: 48.365
- type: nauc_ndcg_at_5_max
value: 27.509099999999997
- type: nauc_ndcg_at_5_std
value: -5.6179
- type: nauc_ndcg_at_5_diff1
value: 46.5015
- type: nauc_ndcg_at_10_max
value: 27.002
- type: nauc_ndcg_at_10_std
value: -4.5545
- type: nauc_ndcg_at_10_diff1
value: 45.7081
- type: nauc_ndcg_at_20_max
value: 26.984799999999996
- type: nauc_ndcg_at_20_std
value: -3.6883
- type: nauc_ndcg_at_20_diff1
value: 44.9584
- type: nauc_ndcg_at_100_max
value: 27.283600000000003
- type: nauc_ndcg_at_100_std
value: -2.3537
- type: nauc_ndcg_at_100_diff1
value: 44.1115
- type: nauc_ndcg_at_1000_max
value: 27.417399999999997
- type: nauc_ndcg_at_1000_std
value: -1.2178
- type: nauc_ndcg_at_1000_diff1
value: 44.0544
- type: nauc_map_at_1_max
value: 27.8706
- type: nauc_map_at_1_std
value: -8.272300000000001
- type: nauc_map_at_1_diff1
value: 57.8385
- type: nauc_map_at_3_max
value: 27.584799999999998
- type: nauc_map_at_3_std
value: -5.9387
- type: nauc_map_at_3_diff1
value: 47.2019
- type: nauc_map_at_5_max
value: 27.242
- type: nauc_map_at_5_std
value: -5.3224
- type: nauc_map_at_5_diff1
value: 45.831
- type: nauc_map_at_10_max
value: 26.9723
- type: nauc_map_at_10_std
value: -4.7007
- type: nauc_map_at_10_diff1
value: 45.3311
- type: nauc_map_at_20_max
value: 26.919700000000002
- type: nauc_map_at_20_std
value: -4.3851
- type: nauc_map_at_20_diff1
value: 45.0687
- type: nauc_map_at_100_max
value: 26.995400000000004
- type: nauc_map_at_100_std
value: -4.0821000000000005
- type: nauc_map_at_100_diff1
value: 44.9062
- type: nauc_map_at_1000_max
value: 26.998499999999996
- type: nauc_map_at_1000_std
value: -4.0238000000000005
- type: nauc_map_at_1000_diff1
value: 44.8961
- type: nauc_recall_at_1_max
value: 27.8706
- type: nauc_recall_at_1_std
value: -8.272300000000001
- type: nauc_recall_at_1_diff1
value: 57.8385
- type: nauc_recall_at_3_max
value: 27.3795
- type: nauc_recall_at_3_std
value: -5.1751
- type: nauc_recall_at_3_diff1
value: 42.3825
- type: nauc_recall_at_5_max
value: 25.634800000000002
- type: nauc_recall_at_5_std
value: -3.3379
- type: nauc_recall_at_5_diff1
value: 37.0532
- type: nauc_recall_at_10_max
value: 23.5746
- type: nauc_recall_at_10_std
value: -0.5226
- type: nauc_recall_at_10_diff1
value: 34.071200000000005
- type: nauc_recall_at_20_max
value: 22.1536
- type: nauc_recall_at_20_std
value: 2.3993
- type: nauc_recall_at_20_diff1
value: 29.439
- type: nauc_recall_at_100_max
value: 20.7576
- type: nauc_recall_at_100_std
value: 8.468499999999999
- type: nauc_recall_at_100_diff1
value: 21.221799999999998
- type: nauc_recall_at_1000_max
value: 18.7522
- type: nauc_recall_at_1000_std
value: 18.916800000000002
- type: nauc_recall_at_1000_diff1
value: 13.558200000000001
- type: nauc_precision_at_1_max
value: 27.8706
- type: nauc_precision_at_1_std
value: -8.272300000000001
- type: nauc_precision_at_1_diff1
value: 57.8385
- type: nauc_precision_at_3_max
value: 27.3795
- type: nauc_precision_at_3_std
value: -5.1751
- type: nauc_precision_at_3_diff1
value: 42.3825
- type: nauc_precision_at_5_max
value: 25.634800000000002
- type: nauc_precision_at_5_std
value: -3.3379
- type: nauc_precision_at_5_diff1
value: 37.0532
- type: nauc_precision_at_10_max
value: 23.5746
- type: nauc_precision_at_10_std
value: -0.5226
- type: nauc_precision_at_10_diff1
value: 34.071200000000005
- type: nauc_precision_at_20_max
value: 22.1536
- type: nauc_precision_at_20_std
value: 2.3993
- type: nauc_precision_at_20_diff1
value: 29.439
- type: nauc_precision_at_100_max
value: 20.7576
- type: nauc_precision_at_100_std
value: 8.468499999999999
- type: nauc_precision_at_100_diff1
value: 21.221799999999998
- type: nauc_precision_at_1000_max
value: 18.7522
- type: nauc_precision_at_1000_std
value: 18.916800000000002
- type: nauc_precision_at_1000_diff1
value: 13.558200000000001
- type: nauc_mrr_at_1_max
value: 27.8706
- type: nauc_mrr_at_1_std
value: -8.272300000000001
- type: nauc_mrr_at_1_diff1
value: 57.8385
- type: nauc_mrr_at_3_max
value: 28.256700000000002
- type: nauc_mrr_at_3_std
value: -8.050699999999999
- type: nauc_mrr_at_3_diff1
value: 54.5601
- type: nauc_mrr_at_5_max
value: 28.2928
- type: nauc_mrr_at_5_std
value: -7.8317
- type: nauc_mrr_at_5_diff1
value: 54.046499999999995
- type: nauc_mrr_at_10_max
value: 28.151500000000002
- type: nauc_mrr_at_10_std
value: -7.6431
- type: nauc_mrr_at_10_diff1
value: 53.9751
- type: nauc_mrr_at_20_max
value: 28.215
- type: nauc_mrr_at_20_std
value: -7.5285
- type: nauc_mrr_at_20_diff1
value: 53.9177
- type: nauc_mrr_at_100_max
value: 28.215600000000002
- type: nauc_mrr_at_100_std
value: -7.524699999999999
- type: nauc_mrr_at_100_diff1
value: 53.9393
- type: nauc_mrr_at_1000_max
value: 28.2194
- type: nauc_mrr_at_1000_std
value: -7.5150999999999994
- type: nauc_mrr_at_1000_diff1
value: 53.95290000000001
- type: main_score
value: 35.72
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification (default)
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 65.8656
- type: f1
value: 65.385
- type: f1_weighted
value: 65.385
- type: ap
value: 60.506899999999995
- type: ap_weighted
value: 60.506899999999995
- type: main_score
value: 65.8656
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: ndcg_at_1
value: 6.877
- type: ndcg_at_3
value: 10.963000000000001
- type: ndcg_at_5
value: 12.845
- type: ndcg_at_10
value: 14.918000000000001
- type: ndcg_at_20
value: 16.721
- type: ndcg_at_100
value: 20.041
- type: ndcg_at_1000
value: 23.296
- type: map_at_1
value: 6.717
- type: map_at_3
value: 9.846
- type: map_at_5
value: 10.886999999999999
- type: map_at_10
value: 11.74
- type: map_at_20
value: 12.237
- type: map_at_100
value: 12.683
- type: map_at_1000
value: 12.792
- type: recall_at_1
value: 6.717
- type: recall_at_3
value: 13.963999999999999
- type: recall_at_5
value: 18.498
- type: recall_at_10
value: 24.869
- type: recall_at_20
value: 31.901000000000003
- type: recall_at_100
value: 49.786
- type: recall_at_1000
value: 75.913
- type: precision_at_1
value: 6.877
- type: precision_at_3
value: 4.809
- type: precision_at_5
value: 3.8280000000000003
- type: precision_at_10
value: 2.5829999999999997
- type: precision_at_20
value: 1.6650000000000003
- type: precision_at_100
value: 0.523
- type: precision_at_1000
value: 0.08
- type: mrr_at_1
value: 6.876799999999999
- type: mrr_at_3
value: 10.093100000000002
- type: mrr_at_5
value: 11.1526
- type: mrr_at_10
value: 12.0074
- type: mrr_at_20
value: 12.5083
- type: mrr_at_100
value: 12.9529
- type: mrr_at_1000
value: 13.057099999999998
- type: nauc_ndcg_at_1_max
value: 4.7264
- type: nauc_ndcg_at_1_std
value: -16.2439
- type: nauc_ndcg_at_1_diff1
value: 27.4463
- type: nauc_ndcg_at_3_max
value: 6.1734
- type: nauc_ndcg_at_3_std
value: -16.8949
- type: nauc_ndcg_at_3_diff1
value: 22.7183
- type: nauc_ndcg_at_5_max
value: 6.493
- type: nauc_ndcg_at_5_std
value: -15.7852
- type: nauc_ndcg_at_5_diff1
value: 21.0805
- type: nauc_ndcg_at_10_max
value: 7.099600000000001
- type: nauc_ndcg_at_10_std
value: -15.1727
- type: nauc_ndcg_at_10_diff1
value: 20.3957
- type: nauc_ndcg_at_20_max
value: 7.9073
- type: nauc_ndcg_at_20_std
value: -14.596200000000001
- type: nauc_ndcg_at_20_diff1
value: 20.0084
- type: nauc_ndcg_at_100_max
value: 9.112
- type: nauc_ndcg_at_100_std
value: -12.0562
- type: nauc_ndcg_at_100_diff1
value: 19.3717
- type: nauc_ndcg_at_1000_max
value: 10.1474
- type: nauc_ndcg_at_1000_std
value: -10.3955
- type: nauc_ndcg_at_1000_diff1
value: 19.2427
- type: nauc_map_at_1_max
value: 4.4801
- type: nauc_map_at_1_std
value: -16.4499
- type: nauc_map_at_1_diff1
value: 27.5511
- type: nauc_map_at_3_max
value: 5.8799
- type: nauc_map_at_3_std
value: -16.7696
- type: nauc_map_at_3_diff1
value: 23.531299999999998
- type: nauc_map_at_5_max
value: 6.0905000000000005
- type: nauc_map_at_5_std
value: -16.0525
- type: nauc_map_at_5_diff1
value: 22.395799999999998
- type: nauc_map_at_10_max
value: 6.3876
- type: nauc_map_at_10_std
value: -15.774
- type: nauc_map_at_10_diff1
value: 22.0367
- type: nauc_map_at_20_max
value: 6.6676
- type: nauc_map_at_20_std
value: -15.5729
- type: nauc_map_at_20_diff1
value: 21.8952
- type: nauc_map_at_100_max
value: 6.912400000000001
- type: nauc_map_at_100_std
value: -15.162400000000002
- type: nauc_map_at_100_diff1
value: 21.7666
- type: nauc_map_at_1000_max
value: 6.952500000000001
- type: nauc_map_at_1000_std
value: -15.085799999999999
- type: nauc_map_at_1000_diff1
value: 21.7618
- type: nauc_recall_at_1_max
value: 4.4801
- type: nauc_recall_at_1_std
value: -16.4499
- type: nauc_recall_at_1_diff1
value: 27.5511
- type: nauc_recall_at_3_max
value: 6.7195
- type: nauc_recall_at_3_std
value: -17.2961
- type: nauc_recall_at_3_diff1
value: 20.9572
- type: nauc_recall_at_5_max
value: 7.199
- type: nauc_recall_at_5_std
value: -15.260599999999998
- type: nauc_recall_at_5_diff1
value: 18.4745
- type: nauc_recall_at_10_max
value: 8.3289
- type: nauc_recall_at_10_std
value: -14.0152
- type: nauc_recall_at_10_diff1
value: 17.3142
- type: nauc_recall_at_20_max
value: 10.1702
- type: nauc_recall_at_20_std
value: -12.7265
- type: nauc_recall_at_20_diff1
value: 16.5162
- type: nauc_recall_at_100_max
value: 13.9363
- type: nauc_recall_at_100_std
value: -4.0486
- type: nauc_recall_at_100_diff1
value: 14.5015
- type: nauc_recall_at_1000_max
value: 24.3013
- type: nauc_recall_at_1000_std
value: 12.3673
- type: nauc_recall_at_1000_diff1
value: 10.9827
- type: nauc_precision_at_1_max
value: 4.7264
- type: nauc_precision_at_1_std
value: -16.2439
- type: nauc_precision_at_1_diff1
value: 27.4463
- type: nauc_precision_at_3_max
value: 6.895700000000001
- type: nauc_precision_at_3_std
value: -17.0973
- type: nauc_precision_at_3_diff1
value: 20.7819
- type: nauc_precision_at_5_max
value: 7.3601
- type: nauc_precision_at_5_std
value: -15.189400000000001
- type: nauc_precision_at_5_diff1
value: 18.2284
- type: nauc_precision_at_10_max
value: 8.5933
- type: nauc_precision_at_10_std
value: -13.9345
- type: nauc_precision_at_10_diff1
value: 17.1801
- type: nauc_precision_at_20_max
value: 10.5732
- type: nauc_precision_at_20_std
value: -12.2593
- type: nauc_precision_at_20_diff1
value: 16.3194
- type: nauc_precision_at_100_max
value: 14.462800000000001
- type: nauc_precision_at_100_std
value: -2.7812
- type: nauc_precision_at_100_diff1
value: 13.8556
- type: nauc_precision_at_1000_max
value: 22.7827
- type: nauc_precision_at_1000_std
value: 13.1185
- type: nauc_precision_at_1000_diff1
value: 8.331199999999999
- type: nauc_mrr_at_1_max
value: 4.7264
- type: nauc_mrr_at_1_std
value: -16.2439
- type: nauc_mrr_at_1_diff1
value: 27.4463
- type: nauc_mrr_at_3_max
value: 5.9976
- type: nauc_mrr_at_3_std
value: -16.5493
- type: nauc_mrr_at_3_diff1
value: 23.5058
- type: nauc_mrr_at_5_max
value: 6.1958
- type: nauc_mrr_at_5_std
value: -15.893699999999999
- type: nauc_mrr_at_5_diff1
value: 22.4454
- type: nauc_mrr_at_10_max
value: 6.514200000000001
- type: nauc_mrr_at_10_std
value: -15.5116
- type: nauc_mrr_at_10_diff1
value: 22.0264
- type: nauc_mrr_at_20_max
value: 6.7813
- type: nauc_mrr_at_20_std
value: -15.2942
- type: nauc_mrr_at_20_diff1
value: 21.8857
- type: nauc_mrr_at_100_max
value: 7.0158
- type: nauc_mrr_at_100_std
value: -14.894599999999999
- type: nauc_mrr_at_100_diff1
value: 21.757299999999997
- type: nauc_mrr_at_1000_max
value: 7.0534
- type: nauc_mrr_at_1000_std
value: -14.8351
- type: nauc_mrr_at_1000_diff1
value: 21.7544
- type: main_score
value: 14.918000000000001
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 82.4669
- type: f1
value: 81.3346
- type: f1_weighted
value: 82.6885
- type: main_score
value: 82.4669
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 58.1145
- type: f1
value: 40.7841
- type: f1_weighted
value: 62.343
- type: main_score
value: 58.1145
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 60.24549999999999
- type: f1
value: 59.534
- type: f1_weighted
value: 60.47670000000001
- type: main_score
value: 60.24549999999999
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 66.32820000000001
- type: f1
value: 65.2929
- type: f1_weighted
value: 66.51979999999999
- type: main_score
value: 66.32820000000001
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P (default)
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 25.8495
- type: v_measure_std
value: 1.6320000000000001
- type: main_score
value: 25.8495
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S (default)
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 20.0754
- type: v_measure_std
value: 1.3306
- type: main_score
value: 20.0754
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking (default)
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: map
value: 28.5611
- type: mrr
value: 29.4014
- type: nAUC_map_max
value: -20.8019
- type: nAUC_map_std
value: -5.307300000000001
- type: nAUC_map_diff1
value: 20.6483
- type: nAUC_mrr_max
value: -14.9738
- type: nAUC_mrr_std
value: -2.9508
- type: nAUC_mrr_diff1
value: 18.6743
- type: main_score
value: 28.5611
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus (default)
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: ndcg_at_1
value: 32.972
- type: ndcg_at_3
value: 29.965000000000003
- type: ndcg_at_5
value: 28.773
- type: ndcg_at_10
value: 26.434
- type: ndcg_at_20
value: 24.922
- type: ndcg_at_100
value: 24.852
- type: ndcg_at_1000
value: 33.388
- type: map_at_1
value: 3.737
- type: map_at_3
value: 6.387
- type: map_at_5
value: 7.420999999999999
- type: map_at_10
value: 8.652
- type: map_at_20
value: 9.745
- type: map_at_100
value: 11.247
- type: map_at_1000
value: 12.494
- type: recall_at_1
value: 3.737
- type: recall_at_3
value: 7.889
- type: recall_at_5
value: 10.026
- type: recall_at_10
value: 12.615000000000002
- type: recall_at_20
value: 16.184
- type: recall_at_100
value: 26.988
- type: recall_at_1000
value: 57.594
- type: precision_at_1
value: 34.675
- type: precision_at_3
value: 28.173
- type: precision_at_5
value: 25.201
- type: precision_at_10
value: 20.0
- type: precision_at_20
value: 15.356
- type: precision_at_100
value: 6.898
- type: precision_at_1000
value: 1.936
- type: mrr_at_1
value: 34.674899999999994
- type: mrr_at_3
value: 42.0537
- type: mrr_at_5
value: 43.741
- type: mrr_at_10
value: 44.277699999999996
- type: mrr_at_20
value: 44.819700000000005
- type: mrr_at_100
value: 45.1552
- type: mrr_at_1000
value: 45.2048
- type: nauc_ndcg_at_1_max
value: 27.6992
- type: nauc_ndcg_at_1_std
value: 13.1387
- type: nauc_ndcg_at_1_diff1
value: 33.7772
- type: nauc_ndcg_at_3_max
value: 32.4741
- type: nauc_ndcg_at_3_std
value: 19.264
- type: nauc_ndcg_at_3_diff1
value: 26.1486
- type: nauc_ndcg_at_5_max
value: 32.6623
- type: nauc_ndcg_at_5_std
value: 21.435499999999998
- type: nauc_ndcg_at_5_diff1
value: 24.0412
- type: nauc_ndcg_at_10_max
value: 33.217400000000005
- type: nauc_ndcg_at_10_std
value: 22.591900000000003
- type: nauc_ndcg_at_10_diff1
value: 22.3637
- type: nauc_ndcg_at_20_max
value: 33.3978
- type: nauc_ndcg_at_20_std
value: 22.520200000000003
- type: nauc_ndcg_at_20_diff1
value: 22.0163
- type: nauc_ndcg_at_100_max
value: 33.0608
- type: nauc_ndcg_at_100_std
value: 20.4305
- type: nauc_ndcg_at_100_diff1
value: 21.1175
- type: nauc_ndcg_at_1000_max
value: 38.198100000000004
- type: nauc_ndcg_at_1000_std
value: 26.8712
- type: nauc_ndcg_at_1000_diff1
value: 22.78
- type: nauc_map_at_1_max
value: 18.898300000000003
- type: nauc_map_at_1_std
value: -11.0976
- type: nauc_map_at_1_diff1
value: 55.1605
- type: nauc_map_at_3_max
value: 20.451800000000002
- type: nauc_map_at_3_std
value: -12.0342
- type: nauc_map_at_3_diff1
value: 45.2096
- type: nauc_map_at_5_max
value: 21.199
- type: nauc_map_at_5_std
value: -9.8514
- type: nauc_map_at_5_diff1
value: 42.0142
- type: nauc_map_at_10_max
value: 23.1645
- type: nauc_map_at_10_std
value: -5.8333
- type: nauc_map_at_10_diff1
value: 38.048
- type: nauc_map_at_20_max
value: 24.9482
- type: nauc_map_at_20_std
value: -1.5368
- type: nauc_map_at_20_diff1
value: 36.241299999999995
- type: nauc_map_at_100_max
value: 27.1413
- type: nauc_map_at_100_std
value: 5.6268
- type: nauc_map_at_100_diff1
value: 33.3298
- type: nauc_map_at_1000_max
value: 28.7674
- type: nauc_map_at_1000_std
value: 10.9326
- type: nauc_map_at_1000_diff1
value: 31.700899999999997
- type: nauc_recall_at_1_max
value: 18.898300000000003
- type: nauc_recall_at_1_std
value: -11.0976
- type: nauc_recall_at_1_diff1
value: 55.1605
- type: nauc_recall_at_3_max
value: 19.4721
- type: nauc_recall_at_3_std
value: -13.496
- type: nauc_recall_at_3_diff1
value: 35.0178
- type: nauc_recall_at_5_max
value: 19.5024
- type: nauc_recall_at_5_std
value: -12.3428
- type: nauc_recall_at_5_diff1
value: 29.517
- type: nauc_recall_at_10_max
value: 21.215500000000002
- type: nauc_recall_at_10_std
value: -8.7165
- type: nauc_recall_at_10_diff1
value: 24.282
- type: nauc_recall_at_20_max
value: 21.735
- type: nauc_recall_at_20_std
value: -5.0988999999999995
- type: nauc_recall_at_20_diff1
value: 20.3041
- type: nauc_recall_at_100_max
value: 19.9243
- type: nauc_recall_at_100_std
value: 3.4522999999999997
- type: nauc_recall_at_100_diff1
value: 5.9747
- type: nauc_recall_at_1000_max
value: 21.7819
- type: nauc_recall_at_1000_std
value: 13.6785
- type: nauc_recall_at_1000_diff1
value: -0.25980000000000003
- type: nauc_precision_at_1_max
value: 28.624899999999997
- type: nauc_precision_at_1_std
value: 12.709599999999998
- type: nauc_precision_at_1_diff1
value: 33.308
- type: nauc_precision_at_3_max
value: 35.1699
- type: nauc_precision_at_3_std
value: 25.9338
- type: nauc_precision_at_3_diff1
value: 18.5464
- type: nauc_precision_at_5_max
value: 33.4433
- type: nauc_precision_at_5_std
value: 32.4517
- type: nauc_precision_at_5_diff1
value: 12.5543
- type: nauc_precision_at_10_max
value: 32.3973
- type: nauc_precision_at_10_std
value: 37.7554
- type: nauc_precision_at_10_diff1
value: 6.7227
- type: nauc_precision_at_20_max
value: 31.591599999999996
- type: nauc_precision_at_20_std
value: 44.658
- type: nauc_precision_at_20_diff1
value: 2.2702
- type: nauc_precision_at_100_max
value: 25.163600000000002
- type: nauc_precision_at_100_std
value: 51.7642
- type: nauc_precision_at_100_diff1
value: -4.8361
- type: nauc_precision_at_1000_max
value: 20.2984
- type: nauc_precision_at_1000_std
value: 49.0469
- type: nauc_precision_at_1000_diff1
value: -6.662700000000001
- type: nauc_mrr_at_1_max
value: 28.624899999999997
- type: nauc_mrr_at_1_std
value: 12.709599999999998
- type: nauc_mrr_at_1_diff1
value: 33.308
- type: nauc_mrr_at_3_max
value: 32.3306
- type: nauc_mrr_at_3_std
value: 18.1604
- type: nauc_mrr_at_3_diff1
value: 31.128600000000002
- type: nauc_mrr_at_5_max
value: 32.0504
- type: nauc_mrr_at_5_std
value: 18.3022
- type: nauc_mrr_at_5_diff1
value: 30.1868
- type: nauc_mrr_at_10_max
value: 32.093500000000006
- type: nauc_mrr_at_10_std
value: 18.348
- type: nauc_mrr_at_10_diff1
value: 30.2307
- type: nauc_mrr_at_20_max
value: 32.3491
- type: nauc_mrr_at_20_std
value: 18.309800000000003
- type: nauc_mrr_at_20_diff1
value: 30.0848
- type: nauc_mrr_at_100_max
value: 32.5297
- type: nauc_mrr_at_100_std
value: 18.4197
- type: nauc_mrr_at_100_diff1
value: 30.03
- type: nauc_mrr_at_1000_max
value: 32.502700000000004
- type: nauc_mrr_at_1000_std
value: 18.4073
- type: nauc_mrr_at_1000_diff1
value: 30.059599999999996
- type: main_score
value: 26.434
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ (default)
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: ndcg_at_1
value: 9.067
- type: ndcg_at_3
value: 13.33
- type: ndcg_at_5
value: 15.773000000000001
- type: ndcg_at_10
value: 18.239
- type: ndcg_at_20
value: 20.777
- type: ndcg_at_100
value: 25.046000000000003
- type: ndcg_at_1000
value: 27.814
- type: map_at_1
value: 8.007
- type: map_at_3
value: 11.732
- type: map_at_5
value: 13.095
- type: map_at_10
value: 14.127
- type: map_at_20
value: 14.860000000000001
- type: map_at_100
value: 15.467
- type: map_at_1000
value: 15.57
- type: recall_at_1
value: 8.007
- type: recall_at_3
value: 16.553
- type: recall_at_5
value: 22.282
- type: recall_at_10
value: 29.592000000000002
- type: recall_at_20
value: 39.134
- type: recall_at_100
value: 61.307
- type: recall_at_1000
value: 82.556
- type: precision_at_1
value: 9.067
- type: precision_at_3
value: 6.441
- type: precision_at_5
value: 5.220000000000001
- type: precision_at_10
value: 3.488
- type: precision_at_20
value: 2.329
- type: precision_at_100
value: 0.734
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 9.0672
- type: mrr_at_3
value: 13.1277
- type: mrr_at_5
value: 14.544199999999998
- type: mrr_at_10
value: 15.589400000000001
- type: mrr_at_20
value: 16.2651
- type: mrr_at_100
value: 16.8195
- type: mrr_at_1000
value: 16.902800000000003
- type: nauc_ndcg_at_1_max
value: 11.3832
- type: nauc_ndcg_at_1_std
value: -4.1221
- type: nauc_ndcg_at_1_diff1
value: 20.5341
- type: nauc_ndcg_at_3_max
value: 11.4743
- type: nauc_ndcg_at_3_std
value: -4.4418
- type: nauc_ndcg_at_3_diff1
value: 16.481
- type: nauc_ndcg_at_5_max
value: 12.6479
- type: nauc_ndcg_at_5_std
value: -4.5466
- type: nauc_ndcg_at_5_diff1
value: 15.1785
- type: nauc_ndcg_at_10_max
value: 14.3237
- type: nauc_ndcg_at_10_std
value: -4.4135
- type: nauc_ndcg_at_10_diff1
value: 14.6574
- type: nauc_ndcg_at_20_max
value: 15.717300000000002
- type: nauc_ndcg_at_20_std
value: -3.0106
- type: nauc_ndcg_at_20_diff1
value: 14.6044
- type: nauc_ndcg_at_100_max
value: 17.5878
- type: nauc_ndcg_at_100_std
value: -0.36519999999999997
- type: nauc_ndcg_at_100_diff1
value: 14.5606
- type: nauc_ndcg_at_1000_max
value: 17.5657
- type: nauc_ndcg_at_1000_std
value: 1.1903000000000001
- type: nauc_ndcg_at_1000_diff1
value: 14.5654
- type: nauc_map_at_1_max
value: 10.2386
- type: nauc_map_at_1_std
value: -4.9847
- type: nauc_map_at_1_diff1
value: 20.9545
- type: nauc_map_at_3_max
value: 10.9023
- type: nauc_map_at_3_std
value: -4.8369
- type: nauc_map_at_3_diff1
value: 17.5991
- type: nauc_map_at_5_max
value: 11.7413
- type: nauc_map_at_5_std
value: -4.9516
- type: nauc_map_at_5_diff1
value: 16.7798
- type: nauc_map_at_10_max
value: 12.6051
- type: nauc_map_at_10_std
value: -4.9007000000000005
- type: nauc_map_at_10_diff1
value: 16.4911
- type: nauc_map_at_20_max
value: 13.1256
- type: nauc_map_at_20_std
value: -4.4518
- type: nauc_map_at_20_diff1
value: 16.4184
- type: nauc_map_at_100_max
value: 13.4467
- type: nauc_map_at_100_std
value: -3.9765
- type: nauc_map_at_100_diff1
value: 16.4427
- type: nauc_map_at_1000_max
value: 13.452
- type: nauc_map_at_1000_std
value: -3.8988
- type: nauc_map_at_1000_diff1
value: 16.4438
- type: nauc_recall_at_1_max
value: 10.2386
- type: nauc_recall_at_1_std
value: -4.9847
- type: nauc_recall_at_1_diff1
value: 20.9545
- type: nauc_recall_at_3_max
value: 11.843399999999999
- type: nauc_recall_at_3_std
value: -4.3091
- type: nauc_recall_at_3_diff1
value: 14.285999999999998
- type: nauc_recall_at_5_max
value: 13.5182
- type: nauc_recall_at_5_std
value: -4.417800000000001
- type: nauc_recall_at_5_diff1
value: 12.1453
- type: nauc_recall_at_10_max
value: 17.0065
- type: nauc_recall_at_10_std
value: -4.252000000000001
- type: nauc_recall_at_10_diff1
value: 11.457199999999998
- type: nauc_recall_at_20_max
value: 20.3871
- type: nauc_recall_at_20_std
value: -0.7614
- type: nauc_recall_at_20_diff1
value: 11.5536
- type: nauc_recall_at_100_max
value: 28.3368
- type: nauc_recall_at_100_std
value: 9.5722
- type: nauc_recall_at_100_diff1
value: 10.7211
- type: nauc_recall_at_1000_max
value: 37.0782
- type: nauc_recall_at_1000_std
value: 31.6326
- type: nauc_recall_at_1000_diff1
value: 8.82
- type: nauc_precision_at_1_max
value: 11.3832
- type: nauc_precision_at_1_std
value: -4.1221
- type: nauc_precision_at_1_diff1
value: 20.5341
- type: nauc_precision_at_3_max
value: 12.951099999999999
- type: nauc_precision_at_3_std
value: -3.4715999999999996
- type: nauc_precision_at_3_diff1
value: 14.0988
- type: nauc_precision_at_5_max
value: 14.8679
- type: nauc_precision_at_5_std
value: -3.9043
- type: nauc_precision_at_5_diff1
value: 11.9479
- type: nauc_precision_at_10_max
value: 18.0976
- type: nauc_precision_at_10_std
value: -3.1489999999999996
- type: nauc_precision_at_10_diff1
value: 10.7419
- type: nauc_precision_at_20_max
value: 20.4974
- type: nauc_precision_at_20_std
value: 1.2608
- type: nauc_precision_at_20_diff1
value: 9.8315
- type: nauc_precision_at_100_max
value: 24.1911
- type: nauc_precision_at_100_std
value: 11.971400000000001
- type: nauc_precision_at_100_diff1
value: 7.0899
- type: nauc_precision_at_1000_max
value: 20.2919
- type: nauc_precision_at_1000_std
value: 23.0171
- type: nauc_precision_at_1000_diff1
value: 1.4091
- type: nauc_mrr_at_1_max
value: 11.3832
- type: nauc_mrr_at_1_std
value: -4.1221
- type: nauc_mrr_at_1_diff1
value: 20.5341
- type: nauc_mrr_at_3_max
value: 11.7865
- type: nauc_mrr_at_3_std
value: -3.6935999999999996
- type: nauc_mrr_at_3_diff1
value: 16.8127
- type: nauc_mrr_at_5_max
value: 12.518199999999998
- type: nauc_mrr_at_5_std
value: -3.7152
- type: nauc_mrr_at_5_diff1
value: 15.893699999999999
- type: nauc_mrr_at_10_max
value: 13.1787
- type: nauc_mrr_at_10_std
value: -3.6301
- type: nauc_mrr_at_10_diff1
value: 15.617500000000001
- type: nauc_mrr_at_20_max
value: 13.529399999999999
- type: nauc_mrr_at_20_std
value: -3.1929
- type: nauc_mrr_at_20_diff1
value: 15.6602
- type: nauc_mrr_at_100_max
value: 13.770199999999999
- type: nauc_mrr_at_100_std
value: -2.9103
- type: nauc_mrr_at_100_diff1
value: 15.6841
- type: nauc_mrr_at_1000_max
value: 13.7598
- type: nauc_mrr_at_1000_std
value: -2.8705000000000003
- type: nauc_mrr_at_1000_diff1
value: 15.6886
- type: main_score
value: 18.239
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval (default)
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: ndcg_at_1
value: 72.39
- type: ndcg_at_3
value: 76.303
- type: ndcg_at_5
value: 78.164
- type: ndcg_at_10
value: 79.946
- type: ndcg_at_20
value: 80.963
- type: ndcg_at_100
value: 82.086
- type: ndcg_at_1000
value: 82.494
- type: map_at_1
value: 62.965
- type: map_at_3
value: 72.429
- type: map_at_5
value: 74.246
- type: map_at_10
value: 75.414
- type: map_at_20
value: 75.87899999999999
- type: map_at_100
value: 76.164
- type: map_at_1000
value: 76.198
- type: recall_at_1
value: 62.965
- type: recall_at_3
value: 78.39
- type: recall_at_5
value: 83.506
- type: recall_at_10
value: 88.787
- type: recall_at_20
value: 92.223
- type: recall_at_100
value: 96.98
- type: recall_at_1000
value: 99.30099999999999
- type: precision_at_1
value: 72.39
- type: precision_at_3
value: 33.040000000000006
- type: precision_at_5
value: 21.884
- type: precision_at_10
value: 12.084999999999999
- type: precision_at_20
value: 6.49
- type: precision_at_100
value: 1.444
- type: precision_at_1000
value: 0.154
- type: mrr_at_1
value: 72.39
- type: mrr_at_3
value: 77.9883
- type: mrr_at_5
value: 78.8933
- type: mrr_at_10
value: 79.443
- type: mrr_at_20
value: 79.6218
- type: mrr_at_100
value: 79.7045
- type: mrr_at_1000
value: 79.7112
- type: nauc_ndcg_at_1_max
value: 43.343199999999996
- type: nauc_ndcg_at_1_std
value: -15.6476
- type: nauc_ndcg_at_1_diff1
value: 74.5603
- type: nauc_ndcg_at_3_max
value: 41.4951
- type: nauc_ndcg_at_3_std
value: -18.006
- type: nauc_ndcg_at_3_diff1
value: 71.4871
- type: nauc_ndcg_at_5_max
value: 41.665
- type: nauc_ndcg_at_5_std
value: -18.2802
- type: nauc_ndcg_at_5_diff1
value: 71.31060000000001
- type: nauc_ndcg_at_10_max
value: 41.9766
- type: nauc_ndcg_at_10_std
value: -17.1129
- type: nauc_ndcg_at_10_diff1
value: 71.4114
- type: nauc_ndcg_at_20_max
value: 42.3933
- type: nauc_ndcg_at_20_std
value: -16.8854
- type: nauc_ndcg_at_20_diff1
value: 71.5046
- type: nauc_ndcg_at_100_max
value: 42.7267
- type: nauc_ndcg_at_100_std
value: -15.7841
- type: nauc_ndcg_at_100_diff1
value: 71.7294
- type: nauc_ndcg_at_1000_max
value: 42.770799999999994
- type: nauc_ndcg_at_1000_std
value: -15.8694
- type: nauc_ndcg_at_1000_diff1
value: 71.8391
- type: nauc_map_at_1_max
value: 34.103899999999996
- type: nauc_map_at_1_std
value: -17.6429
- type: nauc_map_at_1_diff1
value: 74.37780000000001
- type: nauc_map_at_3_max
value: 39.3622
- type: nauc_map_at_3_std
value: -19.3706
- type: nauc_map_at_3_diff1
value: 72.3035
- type: nauc_map_at_5_max
value: 40.3833
- type: nauc_map_at_5_std
value: -19.126099999999997
- type: nauc_map_at_5_diff1
value: 71.99950000000001
- type: nauc_map_at_10_max
value: 40.8837
- type: nauc_map_at_10_std
value: -18.34
- type: nauc_map_at_10_diff1
value: 71.92150000000001
- type: nauc_map_at_20_max
value: 41.14
- type: nauc_map_at_20_std
value: -18.01
- type: nauc_map_at_20_diff1
value: 71.85629999999999
- type: nauc_map_at_100_max
value: 41.2511
- type: nauc_map_at_100_std
value: -17.6727
- type: nauc_map_at_100_diff1
value: 71.8731
- type: nauc_map_at_1000_max
value: 41.2569
- type: nauc_map_at_1000_std
value: -17.6477
- type: nauc_map_at_1000_diff1
value: 71.8801
- type: nauc_recall_at_1_max
value: 34.103899999999996
- type: nauc_recall_at_1_std
value: -17.6429
- type: nauc_recall_at_1_diff1
value: 74.37780000000001
- type: nauc_recall_at_3_max
value: 37.4459
- type: nauc_recall_at_3_std
value: -21.2405
- type: nauc_recall_at_3_diff1
value: 68.2773
- type: nauc_recall_at_5_max
value: 38.5924
- type: nauc_recall_at_5_std
value: -21.644
- type: nauc_recall_at_5_diff1
value: 66.3095
- type: nauc_recall_at_10_max
value: 39.3957
- type: nauc_recall_at_10_std
value: -17.0364
- type: nauc_recall_at_10_diff1
value: 64.8501
- type: nauc_recall_at_20_max
value: 40.325
- type: nauc_recall_at_20_std
value: -15.4228
- type: nauc_recall_at_20_diff1
value: 63.5063
- type: nauc_recall_at_100_max
value: 43.7134
- type: nauc_recall_at_100_std
value: 3.7923
- type: nauc_recall_at_100_diff1
value: 63.7613
- type: nauc_recall_at_1000_max
value: 53.65180000000001
- type: nauc_recall_at_1000_std
value: 35.6561
- type: nauc_recall_at_1000_diff1
value: 65.9936
- type: nauc_precision_at_1_max
value: 43.343199999999996
- type: nauc_precision_at_1_std
value: -15.6476
- type: nauc_precision_at_1_diff1
value: 74.5603
- type: nauc_precision_at_3_max
value: 21.8142
- type: nauc_precision_at_3_std
value: -1.1627999999999998
- type: nauc_precision_at_3_diff1
value: 9.954
- type: nauc_precision_at_5_max
value: 15.2041
- type: nauc_precision_at_5_std
value: 4.2947
- type: nauc_precision_at_5_diff1
value: -5.305
- type: nauc_precision_at_10_max
value: 8.163499999999999
- type: nauc_precision_at_10_std
value: 10.9367
- type: nauc_precision_at_10_diff1
value: -18.0036
- type: nauc_precision_at_20_max
value: 3.5585
- type: nauc_precision_at_20_std
value: 14.5351
- type: nauc_precision_at_20_diff1
value: -25.249700000000004
- type: nauc_precision_at_100_max
value: -3.0063
- type: nauc_precision_at_100_std
value: 19.791700000000002
- type: nauc_precision_at_100_diff1
value: -32.281
- type: nauc_precision_at_1000_max
value: -6.468100000000001
- type: nauc_precision_at_1000_std
value: 20.025100000000002
- type: nauc_precision_at_1000_diff1
value: -34.4531
- type: nauc_mrr_at_1_max
value: 43.2621
- type: nauc_mrr_at_1_std
value: -15.864
- type: nauc_mrr_at_1_diff1
value: 74.5603
- type: nauc_mrr_at_3_max
value: 43.8197
- type: nauc_mrr_at_3_std
value: -16.1674
- type: nauc_mrr_at_3_diff1
value: 72.9802
- type: nauc_mrr_at_5_max
value: 43.9843
- type: nauc_mrr_at_5_std
value: -16.042
- type: nauc_mrr_at_5_diff1
value: 72.907
- type: nauc_mrr_at_10_max
value: 44.0294
- type: nauc_mrr_at_10_std
value: -15.711500000000001
- type: nauc_mrr_at_10_diff1
value: 72.9915
- type: nauc_mrr_at_20_max
value: 44.044200000000004
- type: nauc_mrr_at_20_std
value: -15.7842
- type: nauc_mrr_at_20_diff1
value: 73.0535
- type: nauc_mrr_at_100_max
value: 44.0194
- type: nauc_mrr_at_100_std
value: -15.7612
- type: nauc_mrr_at_100_diff1
value: 73.0738
- type: nauc_mrr_at_1000_max
value: 44.0187
- type: nauc_mrr_at_1000_std
value: -15.764100000000001
- type: nauc_mrr_at_1000_diff1
value: 73.0758
- type: main_score
value: 79.946
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering (default)
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 20.2171
- type: v_measure_std
value: 4.4216
- type: main_score
value: 20.2171
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P (default)
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 38.8882
- type: v_measure_std
value: 9.315
- type: main_score
value: 38.8882
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS (default)
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: ndcg_at_1
value: 15.1
- type: ndcg_at_3
value: 12.036
- type: ndcg_at_5
value: 11.007
- type: ndcg_at_10
value: 13.352
- type: ndcg_at_20
value: 15.6
- type: ndcg_at_100
value: 19.871
- type: ndcg_at_1000
value: 25.255
- type: map_at_1
value: 3.058
- type: map_at_3
value: 5.268
- type: map_at_5
value: 6.406000000000001
- type: map_at_10
value: 7.478
- type: map_at_20
value: 8.21
- type: map_at_100
value: 8.946
- type: map_at_1000
value: 9.223
- type: recall_at_1
value: 3.058
- type: recall_at_3
value: 6.793
- type: recall_at_5
value: 10.003
- type: recall_at_10
value: 14.288
- type: recall_at_20
value: 19.542
- type: recall_at_100
value: 33.413
- type: recall_at_1000
value: 59.733000000000004
- type: precision_at_1
value: 15.1
- type: precision_at_3
value: 11.167
- type: precision_at_5
value: 9.879999999999999
- type: precision_at_10
value: 7.07
- type: precision_at_20
value: 4.825
- type: precision_at_100
value: 1.649
- type: precision_at_1000
value: 0.294
- type: mrr_at_1
value: 15.1
- type: mrr_at_3
value: 20.2833
- type: mrr_at_5
value: 22.4733
- type: mrr_at_10
value: 23.6601
- type: mrr_at_20
value: 24.3772
- type: mrr_at_100
value: 24.9007
- type: mrr_at_1000
value: 24.9743
- type: nauc_ndcg_at_1_max
value: 18.8537
- type: nauc_ndcg_at_1_std
value: -3.2037000000000004
- type: nauc_ndcg_at_1_diff1
value: 20.8288
- type: nauc_ndcg_at_3_max
value: 15.3817
- type: nauc_ndcg_at_3_std
value: -3.2159
- type: nauc_ndcg_at_3_diff1
value: 18.13
- type: nauc_ndcg_at_5_max
value: 17.940900000000003
- type: nauc_ndcg_at_5_std
value: 0.3294
- type: nauc_ndcg_at_5_diff1
value: 16.9378
- type: nauc_ndcg_at_10_max
value: 21.146
- type: nauc_ndcg_at_10_std
value: 2.6954
- type: nauc_ndcg_at_10_diff1
value: 15.363399999999999
- type: nauc_ndcg_at_20_max
value: 21.9075
- type: nauc_ndcg_at_20_std
value: 4.9554
- type: nauc_ndcg_at_20_diff1
value: 15.4857
- type: nauc_ndcg_at_100_max
value: 22.9248
- type: nauc_ndcg_at_100_std
value: 8.8094
- type: nauc_ndcg_at_100_diff1
value: 15.1255
- type: nauc_ndcg_at_1000_max
value: 24.7883
- type: nauc_ndcg_at_1000_std
value: 13.3551
- type: nauc_ndcg_at_1000_diff1
value: 15.1244
- type: nauc_map_at_1_max
value: 19.238
- type: nauc_map_at_1_std
value: -2.9537
- type: nauc_map_at_1_diff1
value: 21.3456
- type: nauc_map_at_3_max
value: 16.0914
- type: nauc_map_at_3_std
value: -4.2357
- type: nauc_map_at_3_diff1
value: 17.1314
- type: nauc_map_at_5_max
value: 17.9317
- type: nauc_map_at_5_std
value: -1.2885
- type: nauc_map_at_5_diff1
value: 15.5052
- type: nauc_map_at_10_max
value: 20.1204
- type: nauc_map_at_10_std
value: 0.29109999999999997
- type: nauc_map_at_10_diff1
value: 14.513200000000001
- type: nauc_map_at_20_max
value: 20.6688
- type: nauc_map_at_20_std
value: 1.6063
- type: nauc_map_at_20_diff1
value: 14.934800000000001
- type: nauc_map_at_100_max
value: 21.2455
- type: nauc_map_at_100_std
value: 3.1651
- type: nauc_map_at_100_diff1
value: 14.6507
- type: nauc_map_at_1000_max
value: 21.4903
- type: nauc_map_at_1000_std
value: 3.7647
- type: nauc_map_at_1000_diff1
value: 14.6354
- type: nauc_recall_at_1_max
value: 19.238
- type: nauc_recall_at_1_std
value: -2.9537
- type: nauc_recall_at_1_diff1
value: 21.3456
- type: nauc_recall_at_3_max
value: 14.5564
- type: nauc_recall_at_3_std
value: -3.2211
- type: nauc_recall_at_3_diff1
value: 17.0505
- type: nauc_recall_at_5_max
value: 18.159200000000002
- type: nauc_recall_at_5_std
value: 2.6766
- type: nauc_recall_at_5_diff1
value: 14.7598
- type: nauc_recall_at_10_max
value: 23.6071
- type: nauc_recall_at_10_std
value: 6.6582
- type: nauc_recall_at_10_diff1
value: 11.7647
- type: nauc_recall_at_20_max
value: 23.5471
- type: nauc_recall_at_20_std
value: 10.6906
- type: nauc_recall_at_20_diff1
value: 11.5654
- type: nauc_recall_at_100_max
value: 23.2746
- type: nauc_recall_at_100_std
value: 18.3139
- type: nauc_recall_at_100_diff1
value: 10.2364
- type: nauc_recall_at_1000_max
value: 27.2333
- type: nauc_recall_at_1000_std
value: 32.5351
- type: nauc_recall_at_1000_diff1
value: 8.7211
- type: nauc_precision_at_1_max
value: 18.8537
- type: nauc_precision_at_1_std
value: -3.2037000000000004
- type: nauc_precision_at_1_diff1
value: 20.8288
- type: nauc_precision_at_3_max
value: 14.260200000000001
- type: nauc_precision_at_3_std
value: -3.1767
- type: nauc_precision_at_3_diff1
value: 16.9826
- type: nauc_precision_at_5_max
value: 17.999399999999998
- type: nauc_precision_at_5_std
value: 2.7119999999999997
- type: nauc_precision_at_5_diff1
value: 14.685300000000002
- type: nauc_precision_at_10_max
value: 23.5629
- type: nauc_precision_at_10_std
value: 6.7014000000000005
- type: nauc_precision_at_10_diff1
value: 11.6848
- type: nauc_precision_at_20_max
value: 23.1819
- type: nauc_precision_at_20_std
value: 10.478
- type: nauc_precision_at_20_diff1
value: 11.6263
- type: nauc_precision_at_100_max
value: 22.7954
- type: nauc_precision_at_100_std
value: 18.215500000000002
- type: nauc_precision_at_100_diff1
value: 10.526299999999999
- type: nauc_precision_at_1000_max
value: 26.4283
- type: nauc_precision_at_1000_std
value: 31.9492
- type: nauc_precision_at_1000_diff1
value: 9.031799999999999
- type: nauc_mrr_at_1_max
value: 18.8537
- type: nauc_mrr_at_1_std
value: -3.2037000000000004
- type: nauc_mrr_at_1_diff1
value: 20.8288
- type: nauc_mrr_at_3_max
value: 16.253500000000003
- type: nauc_mrr_at_3_std
value: -2.3413
- type: nauc_mrr_at_3_diff1
value: 20.333399999999997
- type: nauc_mrr_at_5_max
value: 17.2285
- type: nauc_mrr_at_5_std
value: -0.5249
- type: nauc_mrr_at_5_diff1
value: 20.119
- type: nauc_mrr_at_10_max
value: 18.351100000000002
- type: nauc_mrr_at_10_std
value: 0.0489
- type: nauc_mrr_at_10_diff1
value: 19.711000000000002
- type: nauc_mrr_at_20_max
value: 18.409100000000002
- type: nauc_mrr_at_20_std
value: 0.41079999999999994
- type: nauc_mrr_at_20_diff1
value: 19.5248
- type: nauc_mrr_at_100_max
value: 18.404799999999998
- type: nauc_mrr_at_100_std
value: 0.4336
- type: nauc_mrr_at_100_diff1
value: 19.5129
- type: nauc_mrr_at_1000_max
value: 18.3706
- type: nauc_mrr_at_1000_std
value: 0.41529999999999995
- type: nauc_mrr_at_1000_diff1
value: 19.5103
- type: main_score
value: 13.352
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: pearson
value: 73.39529999999999
- type: spearman
value: 63.871599999999994
- type: cosine_pearson
value: 73.39529999999999
- type: cosine_spearman
value: 63.871500000000005
- type: manhattan_pearson
value: 62.5861
- type: manhattan_spearman
value: 56.714600000000004
- type: euclidean_pearson
value: 62.606899999999996
- type: euclidean_spearman
value: 56.714200000000005
- type: main_score
value: 63.871500000000005
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: pearson
value: 72.35770000000001
- type: spearman
value: 63.606899999999996
- type: cosine_pearson
value: 72.35770000000001
- type: cosine_spearman
value: 63.610299999999995
- type: manhattan_pearson
value: 59.8404
- type: manhattan_spearman
value: 56.85059999999999
- type: euclidean_pearson
value: 59.8116
- type: euclidean_spearman
value: 56.691
- type: main_score
value: 63.610299999999995
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: pearson
value: 76.4727
- type: spearman
value: 76.983
- type: cosine_pearson
value: 76.4727
- type: cosine_spearman
value: 76.983
- type: manhattan_pearson
value: 49.4803
- type: manhattan_spearman
value: 51.1301
- type: euclidean_pearson
value: 49.4542
- type: euclidean_spearman
value: 51.19669999999999
- type: main_score
value: 76.983
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: pearson
value: 75.777
- type: spearman
value: 71.2099
- type: cosine_pearson
value: 75.777
- type: cosine_spearman
value: 71.2099
- type: manhattan_pearson
value: 52.475899999999996
- type: manhattan_spearman
value: 53.8072
- type: euclidean_pearson
value: 52.416799999999995
- type: euclidean_spearman
value: 53.725500000000004
- type: main_score
value: 71.2099
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: pearson
value: 80.1072
- type: spearman
value: 80.735
- type: cosine_pearson
value: 80.1072
- type: cosine_spearman
value: 80.7349
- type: manhattan_pearson
value: 50.711600000000004
- type: manhattan_spearman
value: 53.491299999999995
- type: euclidean_pearson
value: 50.6255
- type: euclidean_spearman
value: 53.47539999999999
- type: main_score
value: 80.7349
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: pearson
value: 73.1658
- type: spearman
value: 74.2121
- type: cosine_pearson
value: 73.1658
- type: cosine_spearman
value: 74.2121
- type: manhattan_pearson
value: 43.4074
- type: manhattan_spearman
value: 47.193200000000004
- type: euclidean_pearson
value: 43.438300000000005
- type: euclidean_spearman
value: 47.2757
- type: main_score
value: 74.2121
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: pearson
value: 81.8156
- type: spearman
value: 81.9457
- type: cosine_pearson
value: 81.8156
- type: cosine_spearman
value: 81.9457
- type: manhattan_pearson
value: 59.4332
- type: manhattan_spearman
value: 60.5687
- type: euclidean_pearson
value: 59.2942
- type: euclidean_spearman
value: 60.39679999999999
- type: main_score
value: 81.9457
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: pearson
value: 48.9285
- type: spearman
value: 55.862500000000004
- type: cosine_pearson
value: 48.9285
- type: cosine_spearman
value: 55.862500000000004
- type: manhattan_pearson
value: 43.082300000000004
- type: manhattan_spearman
value: 51.1876
- type: euclidean_pearson
value: 43.2313
- type: euclidean_spearman
value: 51.094899999999996
- type: main_score
value: 55.862500000000004
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: pearson
value: 73.44380000000001
- type: spearman
value: 71.9343
- type: cosine_pearson
value: 73.44380000000001
- type: cosine_spearman
value: 71.9345
- type: manhattan_pearson
value: 52.233799999999995
- type: manhattan_spearman
value: 51.7687
- type: euclidean_pearson
value: 52.2753
- type: euclidean_spearman
value: 51.845
- type: main_score
value: 71.9345
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR (default)
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 71.4557
- type: mrr
value: 90.6219
- type: nAUC_map_max
value: 54.74830000000001
- type: nAUC_map_std
value: 65.2558
- type: nAUC_map_diff1
value: 10.2936
- type: nAUC_mrr_max
value: 75.10900000000001
- type: nAUC_mrr_std
value: 69.6523
- type: nAUC_mrr_diff1
value: 49.4991
- type: main_score
value: 71.4557
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact (default)
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: ndcg_at_1
value: 43.667
- type: ndcg_at_3
value: 52.102000000000004
- type: ndcg_at_5
value: 54.751000000000005
- type: ndcg_at_10
value: 57.422
- type: ndcg_at_20
value: 59.425
- type: ndcg_at_100
value: 61.166
- type: ndcg_at_1000
value: 62.244
- type: map_at_1
value: 41.888999999999996
- type: map_at_3
value: 49.435
- type: map_at_5
value: 51.029
- type: map_at_10
value: 52.190000000000005
- type: map_at_20
value: 52.797000000000004
- type: map_at_100
value: 53.03
- type: map_at_1000
value: 53.069
- type: recall_at_1
value: 41.888999999999996
- type: recall_at_3
value: 57.916999999999994
- type: recall_at_5
value: 64.372
- type: recall_at_10
value: 72.311
- type: recall_at_20
value: 79.97800000000001
- type: recall_at_100
value: 89.333
- type: recall_at_1000
value: 97.867
- type: precision_at_1
value: 43.667
- type: precision_at_3
value: 20.778
- type: precision_at_5
value: 14.066999999999998
- type: precision_at_10
value: 8.033
- type: precision_at_20
value: 4.45
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11
- type: mrr_at_1
value: 43.666700000000006
- type: mrr_at_3
value: 50.9444
- type: mrr_at_5
value: 52.3444
- type: mrr_at_10
value: 53.3852
- type: mrr_at_20
value: 53.8864
- type: mrr_at_100
value: 54.0887
- type: mrr_at_1000
value: 54.11749999999999
- type: nauc_ndcg_at_1_max
value: 36.6444
- type: nauc_ndcg_at_1_std
value: -7.4722
- type: nauc_ndcg_at_1_diff1
value: 63.631099999999996
- type: nauc_ndcg_at_3_max
value: 37.2859
- type: nauc_ndcg_at_3_std
value: -11.2775
- type: nauc_ndcg_at_3_diff1
value: 56.352999999999994
- type: nauc_ndcg_at_5_max
value: 36.7832
- type: nauc_ndcg_at_5_std
value: -12.310699999999999
- type: nauc_ndcg_at_5_diff1
value: 55.41740000000001
- type: nauc_ndcg_at_10_max
value: 37.9586
- type: nauc_ndcg_at_10_std
value: -9.7483
- type: nauc_ndcg_at_10_diff1
value: 56.8082
- type: nauc_ndcg_at_20_max
value: 38.4072
- type: nauc_ndcg_at_20_std
value: -7.473299999999999
- type: nauc_ndcg_at_20_diff1
value: 56.4974
- type: nauc_ndcg_at_100_max
value: 38.5583
- type: nauc_ndcg_at_100_std
value: -5.521100000000001
- type: nauc_ndcg_at_100_diff1
value: 56.8808
- type: nauc_ndcg_at_1000_max
value: 38.580999999999996
- type: nauc_ndcg_at_1000_std
value: -6.6578
- type: nauc_ndcg_at_1000_diff1
value: 57.3412
- type: nauc_map_at_1_max
value: 35.4069
- type: nauc_map_at_1_std
value: -11.9598
- type: nauc_map_at_1_diff1
value: 62.351299999999995
- type: nauc_map_at_3_max
value: 36.3612
- type: nauc_map_at_3_std
value: -12.6999
- type: nauc_map_at_3_diff1
value: 57.918099999999995
- type: nauc_map_at_5_max
value: 36.268299999999996
- type: nauc_map_at_5_std
value: -12.921199999999999
- type: nauc_map_at_5_diff1
value: 57.496
- type: nauc_map_at_10_max
value: 36.918099999999995
- type: nauc_map_at_10_std
value: -11.6299
- type: nauc_map_at_10_diff1
value: 58.1148
- type: nauc_map_at_20_max
value: 37.060900000000004
- type: nauc_map_at_20_std
value: -10.8228
- type: nauc_map_at_20_diff1
value: 58.0205
- type: nauc_map_at_100_max
value: 37.085499999999996
- type: nauc_map_at_100_std
value: -10.5358
- type: nauc_map_at_100_diff1
value: 58.095
- type: nauc_map_at_1000_max
value: 37.1083
- type: nauc_map_at_1000_std
value: -10.5578
- type: nauc_map_at_1000_diff1
value: 58.1224
- type: nauc_recall_at_1_max
value: 35.4069
- type: nauc_recall_at_1_std
value: -11.9598
- type: nauc_recall_at_1_diff1
value: 62.351299999999995
- type: nauc_recall_at_3_max
value: 37.6511
- type: nauc_recall_at_3_std
value: -13.3993
- type: nauc_recall_at_3_diff1
value: 50.4572
- type: nauc_recall_at_5_max
value: 35.8548
- type: nauc_recall_at_5_std
value: -16.1098
- type: nauc_recall_at_5_diff1
value: 47.2106
- type: nauc_recall_at_10_max
value: 38.9793
- type: nauc_recall_at_10_std
value: -8.1869
- type: nauc_recall_at_10_diff1
value: 50.5379
- type: nauc_recall_at_20_max
value: 42.3127
- type: nauc_recall_at_20_std
value: 4.1918999999999995
- type: nauc_recall_at_20_diff1
value: 47.5366
- type: nauc_recall_at_100_max
value: 48.4392
- type: nauc_recall_at_100_std
value: 37.5486
- type: nauc_recall_at_100_diff1
value: 46.853699999999996
- type: nauc_recall_at_1000_max
value: 70.1389
- type: nauc_recall_at_1000_std
value: 81.7519
- type: nauc_recall_at_1000_diff1
value: 46.0741
- type: nauc_precision_at_1_max
value: 36.6444
- type: nauc_precision_at_1_std
value: -7.4722
- type: nauc_precision_at_1_diff1
value: 63.631099999999996
- type: nauc_precision_at_3_max
value: 37.9141
- type: nauc_precision_at_3_std
value: -2.6281
- type: nauc_precision_at_3_diff1
value: 45.406600000000005
- type: nauc_precision_at_5_max
value: 35.0402
- type: nauc_precision_at_5_std
value: 0.7128
- type: nauc_precision_at_5_diff1
value: 36.686099999999996
- type: nauc_precision_at_10_max
value: 37.4825
- type: nauc_precision_at_10_std
value: 15.613199999999999
- type: nauc_precision_at_10_diff1
value: 33.1716
- type: nauc_precision_at_20_max
value: 36.1575
- type: nauc_precision_at_20_std
value: 30.4446
- type: nauc_precision_at_20_diff1
value: 23.3224
- type: nauc_precision_at_100_max
value: 29.5019
- type: nauc_precision_at_100_std
value: 52.942
- type: nauc_precision_at_100_diff1
value: 9.0284
- type: nauc_precision_at_1000_max
value: 20.350099999999998
- type: nauc_precision_at_1000_std
value: 52.2915
- type: nauc_precision_at_1000_diff1
value: -8.6009
- type: nauc_mrr_at_1_max
value: 36.6444
- type: nauc_mrr_at_1_std
value: -7.4722
- type: nauc_mrr_at_1_diff1
value: 63.631099999999996
- type: nauc_mrr_at_3_max
value: 38.016299999999994
- type: nauc_mrr_at_3_std
value: -8.0229
- type: nauc_mrr_at_3_diff1
value: 58.757400000000004
- type: nauc_mrr_at_5_max
value: 37.433899999999994
- type: nauc_mrr_at_5_std
value: -8.1996
- type: nauc_mrr_at_5_diff1
value: 58.235899999999994
- type: nauc_mrr_at_10_max
value: 37.7997
- type: nauc_mrr_at_10_std
value: -7.542699999999999
- type: nauc_mrr_at_10_diff1
value: 58.8486
- type: nauc_mrr_at_20_max
value: 37.8879
- type: nauc_mrr_at_20_std
value: -7.133000000000001
- type: nauc_mrr_at_20_diff1
value: 58.834900000000005
- type: nauc_mrr_at_100_max
value: 37.8627
- type: nauc_mrr_at_100_std
value: -6.9667
- type: nauc_mrr_at_100_diff1
value: 58.880900000000004
- type: nauc_mrr_at_1000_max
value: 37.8675
- type: nauc_mrr_at_1000_std
value: -6.9817
- type: nauc_mrr_at_1000_diff1
value: 58.904500000000006
- type: main_score
value: 57.422
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions (default)
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: similarity_accuracy
value: 99.6703
- type: similarity_accuracy_threshold
value: 81.69669999999999
- type: similarity_f1
value: 82.5479
- type: similarity_f1_threshold
value: 80.97919999999999
- type: similarity_precision
value: 85.6069
- type: similarity_recall
value: 79.7
- type: similarity_ap
value: 87.6918
- type: cosine_accuracy
value: 99.6703
- type: cosine_accuracy_threshold
value: 81.69669999999999
- type: cosine_f1
value: 82.5479
- type: cosine_f1_threshold
value: 80.97919999999999
- type: cosine_precision
value: 85.6069
- type: cosine_recall
value: 79.7
- type: cosine_ap
value: 87.6918
- type: manhattan_accuracy
value: 99.4327
- type: manhattan_accuracy_threshold
value: 2292.4838999999997
- type: manhattan_f1
value: 66.0851
- type: manhattan_f1_threshold
value: 2517.333
- type: manhattan_precision
value: 72.6619
- type: manhattan_recall
value: 60.6
- type: manhattan_ap
value: 68.1683
- type: euclidean_accuracy
value: 99.4327
- type: euclidean_accuracy_threshold
value: 105.6427
- type: euclidean_f1
value: 66.1605
- type: euclidean_f1_threshold
value: 114.9346
- type: euclidean_precision
value: 72.2749
- type: euclidean_recall
value: 61.0
- type: euclidean_ap
value: 68.2419
- type: dot_accuracy
value: 99.0168
- type: dot_accuracy_threshold
value: 1011.5417000000001
- type: dot_f1
value: 18.6459
- type: dot_f1_threshold
value: 554.0581999999999
- type: dot_precision
value: 20.9476
- type: dot_recall
value: 16.8
- type: dot_ap
value: 11.5838
- type: max_accuracy
value: 99.6703
- type: max_f1
value: 82.5479
- type: max_precision
value: 85.6069
- type: max_recall
value: 79.7
- type: max_ap
value: 87.6918
- type: main_score
value: 87.6918
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering (default)
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 27.147700000000004
- type: v_measure_std
value: 4.3151
- type: main_score
value: 27.147700000000004
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P (default)
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 28.9253
- type: v_measure_std
value: 1.6500000000000001
- type: main_score
value: 28.9253
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions (default)
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 42.7933
- type: mrr
value: 43.2531
- type: nAUC_map_max
value: 15.137400000000001
- type: nAUC_map_std
value: 4.6048
- type: nAUC_map_diff1
value: 31.665100000000002
- type: nAUC_mrr_max
value: 16.429299999999998
- type: nAUC_mrr_std
value: 4.943899999999999
- type: nAUC_mrr_diff1
value: 30.8849
- type: main_score
value: 42.7933
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: pearson
value: 31.8891
- type: spearman
value: 30.635299999999997
- type: cosine_spearman
value: 30.635299999999997
- type: cosine_pearson
value: 31.8891
- type: dot_spearman
value: 23.1495
- type: dot_pearson
value: 20.2811
- type: main_score
value: 30.635299999999997
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID (default)
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_3
value: 56.592
- type: ndcg_at_5
value: 52.15
- type: ndcg_at_10
value: 48.264
- type: ndcg_at_20
value: 43.568
- type: ndcg_at_100
value: 31.196
- type: ndcg_at_1000
value: 26.101000000000003
- type: map_at_1
value: 0.153
- type: map_at_3
value: 0.4
- type: map_at_5
value: 0.601
- type: map_at_10
value: 1.016
- type: map_at_20
value: 1.6099999999999999
- type: map_at_100
value: 4.169
- type: map_at_1000
value: 9.733
- type: recall_at_1
value: 0.153
- type: recall_at_3
value: 0.42300000000000004
- type: recall_at_5
value: 0.6629999999999999
- type: recall_at_10
value: 1.201
- type: recall_at_20
value: 2.022
- type: recall_at_100
value: 6.5409999999999995
- type: recall_at_1000
value: 24.422
- type: precision_at_1
value: 64.0
- type: precision_at_3
value: 58.667
- type: precision_at_5
value: 54.0
- type: precision_at_10
value: 49.8
- type: precision_at_20
value: 44.3
- type: precision_at_100
value: 31.180000000000003
- type: precision_at_1000
value: 12.21
- type: mrr_at_1
value: 64.0
- type: mrr_at_3
value: 68.6667
- type: mrr_at_5
value: 69.9667
- type: mrr_at_10
value: 71.2222
- type: mrr_at_20
value: 71.3651
- type: mrr_at_100
value: 71.4965
- type: mrr_at_1000
value: 71.51429999999999
- type: nauc_ndcg_at_1_max
value: 37.0018
- type: nauc_ndcg_at_1_std
value: 3.0042
- type: nauc_ndcg_at_1_diff1
value: 1.0129000000000001
- type: nauc_ndcg_at_3_max
value: 42.3179
- type: nauc_ndcg_at_3_std
value: 1.1211
- type: nauc_ndcg_at_3_diff1
value: -1.3197999999999999
- type: nauc_ndcg_at_5_max
value: 38.2867
- type: nauc_ndcg_at_5_std
value: 1.436
- type: nauc_ndcg_at_5_diff1
value: -0.635
- type: nauc_ndcg_at_10_max
value: 36.545100000000005
- type: nauc_ndcg_at_10_std
value: 9.4313
- type: nauc_ndcg_at_10_diff1
value: 0.7185
- type: nauc_ndcg_at_20_max
value: 28.841499999999996
- type: nauc_ndcg_at_20_std
value: 14.584
- type: nauc_ndcg_at_20_diff1
value: 0.2278
- type: nauc_ndcg_at_100_max
value: 22.2284
- type: nauc_ndcg_at_100_std
value: 30.9548
- type: nauc_ndcg_at_100_diff1
value: 1.7124000000000001
- type: nauc_ndcg_at_1000_max
value: 7.9275
- type: nauc_ndcg_at_1000_std
value: 43.918
- type: nauc_ndcg_at_1000_diff1
value: 1.1608
- type: nauc_map_at_1_max
value: 16.718700000000002
- type: nauc_map_at_1_std
value: -14.5026
- type: nauc_map_at_1_diff1
value: 6.9494
- type: nauc_map_at_3_max
value: 26.3749
- type: nauc_map_at_3_std
value: -14.2379
- type: nauc_map_at_3_diff1
value: 2.6883
- type: nauc_map_at_5_max
value: 26.8639
- type: nauc_map_at_5_std
value: -11.9289
- type: nauc_map_at_5_diff1
value: -0.5275
- type: nauc_map_at_10_max
value: 28.7924
- type: nauc_map_at_10_std
value: -6.2317
- type: nauc_map_at_10_diff1
value: 0.153
- type: nauc_map_at_20_max
value: 24.3923
- type: nauc_map_at_20_std
value: 1.5524
- type: nauc_map_at_20_diff1
value: -0.7799999999999999
- type: nauc_map_at_100_max
value: 14.5538
- type: nauc_map_at_100_std
value: 29.851499999999998
- type: nauc_map_at_100_diff1
value: -1.5013
- type: nauc_map_at_1000_max
value: 6.609800000000001
- type: nauc_map_at_1000_std
value: 50.8853
- type: nauc_map_at_1000_diff1
value: 2.2463
- type: nauc_recall_at_1_max
value: 16.718700000000002
- type: nauc_recall_at_1_std
value: -14.5026
- type: nauc_recall_at_1_diff1
value: 6.9494
- type: nauc_recall_at_3_max
value: 26.313
- type: nauc_recall_at_3_std
value: -16.5391
- type: nauc_recall_at_3_diff1
value: -0.0947
- type: nauc_recall_at_5_max
value: 27.136
- type: nauc_recall_at_5_std
value: -13.486999999999998
- type: nauc_recall_at_5_diff1
value: -2.2484
- type: nauc_recall_at_10_max
value: 27.9019
- type: nauc_recall_at_10_std
value: -7.2991
- type: nauc_recall_at_10_diff1
value: 0.35729999999999995
- type: nauc_recall_at_20_max
value: 24.1923
- type: nauc_recall_at_20_std
value: 0.3075
- type: nauc_recall_at_20_diff1
value: -2.6993
- type: nauc_recall_at_100_max
value: 15.928400000000002
- type: nauc_recall_at_100_std
value: 24.5423
- type: nauc_recall_at_100_diff1
value: -4.0408
- type: nauc_recall_at_1000_max
value: -0.2523
- type: nauc_recall_at_1000_std
value: 49.0728
- type: nauc_recall_at_1000_diff1
value: -0.1562
- type: nauc_precision_at_1_max
value: 42.5437
- type: nauc_precision_at_1_std
value: 0.859
- type: nauc_precision_at_1_diff1
value: -7.6319
- type: nauc_precision_at_3_max
value: 46.4231
- type: nauc_precision_at_3_std
value: -2.6254
- type: nauc_precision_at_3_diff1
value: -5.129700000000001
- type: nauc_precision_at_5_max
value: 40.022600000000004
- type: nauc_precision_at_5_std
value: 1.4931
- type: nauc_precision_at_5_diff1
value: -5.634399999999999
- type: nauc_precision_at_10_max
value: 37.8846
- type: nauc_precision_at_10_std
value: 11.4085
- type: nauc_precision_at_10_diff1
value: -2.3909
- type: nauc_precision_at_20_max
value: 26.971400000000003
- type: nauc_precision_at_20_std
value: 17.3784
- type: nauc_precision_at_20_diff1
value: -1.5310000000000001
- type: nauc_precision_at_100_max
value: 19.9237
- type: nauc_precision_at_100_std
value: 35.952400000000004
- type: nauc_precision_at_100_diff1
value: 1.4594
- type: nauc_precision_at_1000_max
value: 6.1676
- type: nauc_precision_at_1000_std
value: 50.53959999999999
- type: nauc_precision_at_1000_diff1
value: 3.8484
- type: nauc_mrr_at_1_max
value: 42.5437
- type: nauc_mrr_at_1_std
value: 0.859
- type: nauc_mrr_at_1_diff1
value: -7.6319
- type: nauc_mrr_at_3_max
value: 44.3255
- type: nauc_mrr_at_3_std
value: -4.5994
- type: nauc_mrr_at_3_diff1
value: -12.2252
- type: nauc_mrr_at_5_max
value: 45.7817
- type: nauc_mrr_at_5_std
value: -3.1611000000000002
- type: nauc_mrr_at_5_diff1
value: -10.706100000000001
- type: nauc_mrr_at_10_max
value: 45.5444
- type: nauc_mrr_at_10_std
value: -1.1735
- type: nauc_mrr_at_10_diff1
value: -9.6912
- type: nauc_mrr_at_20_max
value: 45.3001
- type: nauc_mrr_at_20_std
value: -0.8477999999999999
- type: nauc_mrr_at_20_diff1
value: -8.7214
- type: nauc_mrr_at_100_max
value: 45.3697
- type: nauc_mrr_at_100_std
value: -1.2326
- type: nauc_mrr_at_100_diff1
value: -9.1853
- type: nauc_mrr_at_1000_max
value: 45.356
- type: nauc_mrr_at_1000_std
value: -1.2729000000000001
- type: nauc_mrr_at_1000_diff1
value: -9.2226
- type: main_score
value: 48.264
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020 (default)
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: ndcg_at_1
value: 13.264999999999999
- type: ndcg_at_3
value: 16.817
- type: ndcg_at_5
value: 17.718999999999998
- type: ndcg_at_10
value: 17.318
- type: ndcg_at_20
value: 18.445
- type: ndcg_at_100
value: 28.137
- type: ndcg_at_1000
value: 41.744
- type: map_at_1
value: 1.335
- type: map_at_3
value: 2.94
- type: map_at_5
value: 4.37
- type: map_at_10
value: 6.447
- type: map_at_20
value: 8.141
- type: map_at_100
value: 10.428999999999998
- type: map_at_1000
value: 12.23
- type: recall_at_1
value: 1.335
- type: recall_at_3
value: 4.05
- type: recall_at_5
value: 7.507999999999999
- type: recall_at_10
value: 12.862000000000002
- type: recall_at_20
value: 18.953999999999997
- type: recall_at_100
value: 40.384
- type: recall_at_1000
value: 82.421
- type: precision_at_1
value: 16.326999999999998
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 21.224
- type: precision_at_10
value: 17.755000000000003
- type: precision_at_20
value: 13.264999999999999
- type: precision_at_100
value: 6.5920000000000005
- type: precision_at_1000
value: 1.516
- type: mrr_at_1
value: 16.3265
- type: mrr_at_3
value: 29.251700000000003
- type: mrr_at_5
value: 32.9252
- type: mrr_at_10
value: 34.613699999999994
- type: mrr_at_20
value: 35.3587
- type: mrr_at_100
value: 35.6307
- type: mrr_at_1000
value: 35.6307
- type: nauc_ndcg_at_1_max
value: -32.3322
- type: nauc_ndcg_at_1_std
value: -13.9866
- type: nauc_ndcg_at_1_diff1
value: -21.525
- type: nauc_ndcg_at_3_max
value: -33.6213
- type: nauc_ndcg_at_3_std
value: -9.2265
- type: nauc_ndcg_at_3_diff1
value: -7.9922
- type: nauc_ndcg_at_5_max
value: -38.3363
- type: nauc_ndcg_at_5_std
value: -19.017999999999997
- type: nauc_ndcg_at_5_diff1
value: 0.7867000000000001
- type: nauc_ndcg_at_10_max
value: -45.460699999999996
- type: nauc_ndcg_at_10_std
value: -36.0452
- type: nauc_ndcg_at_10_diff1
value: 11.525599999999999
- type: nauc_ndcg_at_20_max
value: -43.7997
- type: nauc_ndcg_at_20_std
value: -39.293499999999995
- type: nauc_ndcg_at_20_diff1
value: 18.019099999999998
- type: nauc_ndcg_at_100_max
value: -47.180499999999995
- type: nauc_ndcg_at_100_std
value: -31.8569
- type: nauc_ndcg_at_100_diff1
value: 14.1121
- type: nauc_ndcg_at_1000_max
value: -40.8476
- type: nauc_ndcg_at_1000_std
value: -21.2172
- type: nauc_ndcg_at_1000_diff1
value: 20.3064
- type: nauc_map_at_1_max
value: -39.5068
- type: nauc_map_at_1_std
value: -16.150000000000002
- type: nauc_map_at_1_diff1
value: -31.249900000000004
- type: nauc_map_at_3_max
value: -41.2738
- type: nauc_map_at_3_std
value: -23.5467
- type: nauc_map_at_3_diff1
value: -21.5959
- type: nauc_map_at_5_max
value: -45.9079
- type: nauc_map_at_5_std
value: -28.181099999999997
- type: nauc_map_at_5_diff1
value: -14.3231
- type: nauc_map_at_10_max
value: -45.8169
- type: nauc_map_at_10_std
value: -41.293400000000005
- type: nauc_map_at_10_diff1
value: -0.7166
- type: nauc_map_at_20_max
value: -42.233900000000006
- type: nauc_map_at_20_std
value: -42.2579
- type: nauc_map_at_20_diff1
value: 9.9162
- type: nauc_map_at_100_max
value: -42.6044
- type: nauc_map_at_100_std
value: -39.921
- type: nauc_map_at_100_diff1
value: 10.408900000000001
- type: nauc_map_at_1000_max
value: -41.4171
- type: nauc_map_at_1000_std
value: -37.167899999999996
- type: nauc_map_at_1000_diff1
value: 11.7185
- type: nauc_recall_at_1_max
value: -39.5068
- type: nauc_recall_at_1_std
value: -16.150000000000002
- type: nauc_recall_at_1_diff1
value: -31.249900000000004
- type: nauc_recall_at_3_max
value: -38.8655
- type: nauc_recall_at_3_std
value: -21.6066
- type: nauc_recall_at_3_diff1
value: -11.395900000000001
- type: nauc_recall_at_5_max
value: -47.9991
- type: nauc_recall_at_5_std
value: -32.9137
- type: nauc_recall_at_5_diff1
value: -1.0116
- type: nauc_recall_at_10_max
value: -49.586999999999996
- type: nauc_recall_at_10_std
value: -48.6293
- type: nauc_recall_at_10_diff1
value: 13.092699999999999
- type: nauc_recall_at_20_max
value: -45.1018
- type: nauc_recall_at_20_std
value: -46.1638
- type: nauc_recall_at_20_diff1
value: 20.9848
- type: nauc_recall_at_100_max
value: -48.106700000000004
- type: nauc_recall_at_100_std
value: -30.618699999999997
- type: nauc_recall_at_100_diff1
value: 8.3225
- type: nauc_recall_at_1000_max
value: -35.183
- type: nauc_recall_at_1000_std
value: 9.1089
- type: nauc_recall_at_1000_diff1
value: 14.8164
- type: nauc_precision_at_1_max
value: -36.7404
- type: nauc_precision_at_1_std
value: -20.7164
- type: nauc_precision_at_1_diff1
value: -24.9514
- type: nauc_precision_at_3_max
value: -32.1394
- type: nauc_precision_at_3_std
value: -14.9321
- type: nauc_precision_at_3_diff1
value: -5.2914
- type: nauc_precision_at_5_max
value: -39.6017
- type: nauc_precision_at_5_std
value: -27.8755
- type: nauc_precision_at_5_diff1
value: 6.2789
- type: nauc_precision_at_10_max
value: -42.565799999999996
- type: nauc_precision_at_10_std
value: -45.101200000000006
- type: nauc_precision_at_10_diff1
value: 18.4024
- type: nauc_precision_at_20_max
value: -36.074
- type: nauc_precision_at_20_std
value: -41.6858
- type: nauc_precision_at_20_diff1
value: 29.625899999999998
- type: nauc_precision_at_100_max
value: -20.7563
- type: nauc_precision_at_100_std
value: -6.5164
- type: nauc_precision_at_100_diff1
value: 13.5108
- type: nauc_precision_at_1000_max
value: 41.492200000000004
- type: nauc_precision_at_1000_std
value: 45.918
- type: nauc_precision_at_1000_diff1
value: 9.314400000000001
- type: nauc_mrr_at_1_max
value: -36.7404
- type: nauc_mrr_at_1_std
value: -20.7164
- type: nauc_mrr_at_1_diff1
value: -24.9514
- type: nauc_mrr_at_3_max
value: -34.8748
- type: nauc_mrr_at_3_std
value: -11.2167
- type: nauc_mrr_at_3_diff1
value: -14.4811
- type: nauc_mrr_at_5_max
value: -39.5232
- type: nauc_mrr_at_5_std
value: -18.9591
- type: nauc_mrr_at_5_diff1
value: -13.2719
- type: nauc_mrr_at_10_max
value: -41.7821
- type: nauc_mrr_at_10_std
value: -18.368399999999998
- type: nauc_mrr_at_10_diff1
value: -13.4359
- type: nauc_mrr_at_20_max
value: -42.8581
- type: nauc_mrr_at_20_std
value: -18.6052
- type: nauc_mrr_at_20_diff1
value: -13.6098
- type: nauc_mrr_at_100_max
value: -42.0696
- type: nauc_mrr_at_100_std
value: -18.1447
- type: nauc_mrr_at_100_diff1
value: -14.102500000000001
- type: nauc_mrr_at_1000_max
value: -42.0696
- type: nauc_mrr_at_1000_std
value: -18.1447
- type: nauc_mrr_at_1000_diff1
value: -14.102500000000001
- type: main_score
value: 17.318
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification (default)
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 74.0283
- type: f1
value: 54.813100000000006
- type: f1_weighted
value: 79.4125
- type: ap
value: 12.750800000000002
- type: ap_weighted
value: 12.750800000000002
- type: main_score
value: 74.0283
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification (default)
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 52.818299999999994
- type: f1
value: 52.8999
- type: f1_weighted
value: 52.223299999999995
- type: main_score
value: 52.818299999999994
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering (default)
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 14.5905
- type: v_measure_std
value: 1.0532
- type: main_score
value: 14.5905
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015 (default)
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: similarity_accuracy
value: 80.3481
- type: similarity_accuracy_threshold
value: 85.3551
- type: similarity_f1
value: 51.27850000000001
- type: similarity_f1_threshold
value: 75.8966
- type: similarity_precision
value: 45.8247
- type: similarity_recall
value: 58.205799999999996
- type: similarity_ap
value: 52.295100000000005
- type: cosine_accuracy
value: 80.3481
- type: cosine_accuracy_threshold
value: 85.3551
- type: cosine_f1
value: 51.27850000000001
- type: cosine_f1_threshold
value: 75.8966
- type: cosine_precision
value: 45.8247
- type: cosine_recall
value: 58.205799999999996
- type: cosine_ap
value: 52.295199999999994
- type: manhattan_accuracy
value: 78.9712
- type: manhattan_accuracy_threshold
value: 3046.9002
- type: manhattan_f1
value: 44.784600000000005
- type: manhattan_f1_threshold
value: 4624.7635
- type: manhattan_precision
value: 35.5133
- type: manhattan_recall
value: 60.606899999999996
- type: manhattan_ap
value: 44.4155
- type: euclidean_accuracy
value: 78.9772
- type: euclidean_accuracy_threshold
value: 141.3014
- type: euclidean_f1
value: 44.8638
- type: euclidean_f1_threshold
value: 210.8781
- type: euclidean_precision
value: 35.3191
- type: euclidean_recall
value: 61.477599999999995
- type: euclidean_ap
value: 44.3973
- type: dot_accuracy
value: 77.4095
- type: dot_accuracy_threshold
value: 3833.3893000000003
- type: dot_f1
value: 41.7116
- type: dot_f1_threshold
value: 336.5812
- type: dot_precision
value: 28.259600000000002
- type: dot_recall
value: 79.6042
- type: dot_ap
value: 30.7809
- type: max_accuracy
value: 80.3481
- type: max_f1
value: 51.27850000000001
- type: max_precision
value: 45.8247
- type: max_recall
value: 79.6042
- type: max_ap
value: 52.295199999999994
- type: main_score
value: 52.295199999999994
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus (default)
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: similarity_accuracy
value: 85.9025
- type: similarity_accuracy_threshold
value: 71.6078
- type: similarity_f1
value: 70.9832
- type: similarity_f1_threshold
value: 66.4079
- type: similarity_precision
value: 68.9871
- type: similarity_recall
value: 73.0982
- type: similarity_ap
value: 79.2622
- type: cosine_accuracy
value: 85.9025
- type: cosine_accuracy_threshold
value: 71.6078
- type: cosine_f1
value: 70.9832
- type: cosine_f1_threshold
value: 66.4079
- type: cosine_precision
value: 68.9871
- type: cosine_recall
value: 73.0982
- type: cosine_ap
value: 79.2622
- type: manhattan_accuracy
value: 81.8954
- type: manhattan_accuracy_threshold
value: 2754.9084000000003
- type: manhattan_f1
value: 58.4303
- type: manhattan_f1_threshold
value: 3301.9608
- type: manhattan_precision
value: 56.1511
- type: manhattan_recall
value: 60.9024
- type: manhattan_ap
value: 66.2046
- type: euclidean_accuracy
value: 81.8974
- type: euclidean_accuracy_threshold
value: 122.74810000000001
- type: euclidean_f1
value: 58.455
- type: euclidean_f1_threshold
value: 151.3654
- type: euclidean_precision
value: 55.0722
- type: euclidean_recall
value: 62.2806
- type: euclidean_ap
value: 66.22019999999999
- type: dot_accuracy
value: 78.7402
- type: dot_accuracy_threshold
value: 317.0264
- type: dot_f1
value: 58.2905
- type: dot_f1_threshold
value: 187.0591
- type: dot_precision
value: 48.1454
- type: dot_recall
value: 73.8528
- type: dot_ap
value: 58.116
- type: max_accuracy
value: 85.9025
- type: max_f1
value: 70.9832
- type: max_precision
value: 68.9871
- type: max_recall
value: 73.8528
- type: max_ap
value: 79.2622
- type: main_score
value: 79.2622
task:
type: PairClassification
---
# 🧚🏻♀️ brown-fairy-base-v0 Model Card
<div align="center">
<img width="50%" alt="Fairy logo" src="./assets/fairy_logo.png">
</div>
> [!TIP]
> Fairies are among the most enchanting and magical beings in folklore and mythology. They appear across countless cultures and stories, from ancient forests to modern gardens. They are celebrated for their ability to bridge the mundane and magical realms, known for their ethereal grace and transformative powers. Fairies are tiny, higher-dimensional beings that can interact with the world in ways that are beyond our understanding.
The fairy series of models is an attempt to tune the beetle series of models to be more suitable for downstream tasks. These models are meant to be fully open experiments in making state-of-the-art static embeddings.
The brown-fairy-base-v0 model is a distillation of the `baai/bge-base-en-v1.5` model into the `brown-beetle-base-v0` model. No PCA or Zipf weighting was applied to this model.
## Installation
Install model2vec using pip:
```bash
pip install model2vec
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("bhavnicksm/brown-fairy-base-v0")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
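The embeddings are plain NumPy vectors, so similarity search can be done directly on them. Below is a minimal sketch (not part of the original card) using cosine similarity; the example sentences are illustrative only:

```python
import numpy as np
from model2vec import StaticModel

model = StaticModel.from_pretrained("bhavnicksm/brown-fairy-base-v0")

query = model.encode(["How do fairies fly?"])[0]
docs = model.encode(["Fairy wings in folklore", "Recipe for sourdough bread"])

# Cosine similarity between the query and each document embedding
scores = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
print(scores)  # higher score = more similar
```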
Read more about the Model2Vec library [here](https://github.com/MinishLab/model2vec).
## Reproduce this model
This model was trained on a subset of 2 million texts from the [FineWeb-Edu](https://huggingface.co/datasets/mixedbread-ai/fineweb-edu) dataset, labeled with embeddings from the `baai/bge-base-en-v1.5` model.
<details>
<summary>Training Code</summary>
Note: The datasets need to be made separately and loaded with the `datasets` library.
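One way to build those datasets is sketched below. This is illustrative only; the subset size, split, and column names are assumptions, not the exact recipe used for this model. The training script afterwards then consumes `train_dataset` and `eval_dataset`.

```python
from datasets import Dataset, load_dataset
from sentence_transformers import SentenceTransformer

# Teacher model used to label the texts with target embeddings
teacher = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Hypothetical subset; the real run used ~2M texts from FineWeb-Edu
texts = load_dataset("mixedbread-ai/fineweb-edu", split="train[:100000]")["text"]
labels = teacher.encode(texts, batch_size=1024, show_progress_bar=True)

# MSELoss expects a text column plus a "label" column holding the teacher embeddings
dataset = Dataset.from_dict({"sentence": texts, "label": [l.tolist() for l in labels]})
split = dataset.train_test_split(test_size=0.01, seed=42)
train_dataset, eval_dataset = split["train"], split["test"]
```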
```python
# Imports assume a recent sentence-transformers (v3+) with the Model2Vec integration.
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import NanoBEIREvaluator
from sentence_transformers.losses import MSELoss
from sentence_transformers.models import StaticEmbedding
from sentence_transformers.training_args import BatchSamplers

# Initialize the static embedding from the distilled beetle model
static_embedding = StaticEmbedding.from_model2vec("bhavnicksm/brown-beetle-base-v0")
model = SentenceTransformer(
modules=[static_embedding]
)
loss = MSELoss(model)
run_name = "brown-fairy-base-v0"
args = SentenceTransformerTrainingArguments(
# Required parameter:
output_dir=f"output/{run_name}",
# Optional training parameters:
num_train_epochs=1,
per_device_train_batch_size=2048,
per_device_eval_batch_size=2048,
learning_rate=1e-1,
warmup_ratio=0.1,
fp16=False, # Set to False if you get an error that your GPU can't run on FP16
bf16=True, # Set to True if you have a GPU that supports BF16
batch_sampler=BatchSamplers.NO_DUPLICATES,
# Optional tracking/debugging parameters:
eval_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
save_total_limit=5,
logging_steps=50,
logging_first_step=True,
run_name=run_name,
)
evaluator = NanoBEIREvaluator()
evaluator(model)
trainer = SentenceTransformerTrainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
loss=loss,
evaluator=evaluator,
)
trainer.train()
evaluator(model)
model.save_pretrained(f"output/{run_name}")
```
</details>
## Comparison with other models
Coming soon...
## Acknowledgements
This model is based on the [Model2Vec](https://github.com/MinishLab/model2vec) library. Credit goes to the [Minish Lab](https://github.com/MinishLab) team for developing this library.
## Citation
This model builds on work done by Minish Lab. Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```bibtex
@software{minishlab2024model2vec,
author = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
```
|
AyoubChLin/Qwen2.5-Coder-3B_passet_classifer_1.2_16
|
AyoubChLin
| 2025-02-01T12:05:57Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-29T10:05:30Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aleegis12/87ec9b14-159b-4401-bed3-3261c3826d57
|
aleegis12
| 2025-02-01T12:03:12Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:56:33Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 87ec9b14-159b-4401-bed3-3261c3826d57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 8752aff936d5c852_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8752aff936d5c852_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/87ec9b14-159b-4401-bed3-3261c3826d57
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/8752aff936d5c852_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88425d2c-62ef-4adf-945e-6ac9fafdb1dd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 88425d2c-62ef-4adf-945e-6ac9fafdb1dd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 87ec9b14-159b-4401-bed3-3261c3826d57
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2032
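Since this repository contains a LoRA adapter, one presumable way to run it is to load the adapter on top of the base model with PEFT. This is a minimal, unverified sketch (prompt and generation settings are illustrative only):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "aleegis12/87ec9b14-159b-4401-bed3-3261c3826d57")

inputs = tokenizer("Write a short greeting.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```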
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.432 | 0.0000 | 1 | 3.0365 |
| 3.2474 | 0.0022 | 50 | 2.8272 |
| 3.3228 | 0.0043 | 100 | 2.5349 |
| 3.7141 | 0.0065 | 150 | 2.2215 |
| 3.4516 | 0.0086 | 200 | 2.2032 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
playboy40k/flux-AimeeGarciaLora
|
playboy40k
| 2025-02-01T12:02:54Z | 353 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-01T12:01:49Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/FLUX.1-dev.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Aimee Garcia Flux
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/playboy40k/flux-AimeeGarciaLora/tree/main) them in the Files & versions tab.
|
arcwarden46/5aa1da01-37e9-4fd6-a9aa-a45d823981e2
|
arcwarden46
| 2025-02-01T12:02:41Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T11:53:42Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5aa1da01-37e9-4fd6-a9aa-a45d823981e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a5d9c055a3c13355_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a5d9c055a3c13355_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/5aa1da01-37e9-4fd6-a9aa-a45d823981e2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a5d9c055a3c13355_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 779fc888-bb45-4742-b498-aa4f31c20392
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 779fc888-bb45-4742-b498-aa4f31c20392
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5aa1da01-37e9-4fd6-a9aa-a45d823981e2
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.895 | 0.0034 | 1 | 1.1800 |
| 3.7661 | 0.1718 | 50 | 0.9115 |
| 3.1993 | 0.3436 | 100 | 0.8494 |
| 3.2199 | 0.5155 | 150 | 0.8173 |
| 3.3063 | 0.6873 | 200 | 0.8081 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
auxyus/bf7d3ad7-281b-4e56-b4ae-05f8514af79e
|
auxyus
| 2025-02-01T12:02:04Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:20:54Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf7d3ad7-281b-4e56-b4ae-05f8514af79e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3fafaf8cf25404aa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3fafaf8cf25404aa_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: auxyus/bf7d3ad7-281b-4e56-b4ae-05f8514af79e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/3fafaf8cf25404aa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 41d8118f-d704-40f9-b279-287f5d2979de
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: 41d8118f-d704-40f9-b279-287f5d2979de
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bf7d3ad7-281b-4e56-b4ae-05f8514af79e
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 0.4783 |
| 1.5155 | 0.0061 | 9 | 0.3963 |
| 1.4998 | 0.0122 | 18 | 0.3621 |
| 1.4113 | 0.0183 | 27 | 0.3472 |
| 1.3425 | 0.0243 | 36 | 0.3374 |
| 1.3894 | 0.0304 | 45 | 0.3318 |
| 1.3277 | 0.0365 | 54 | 0.3266 |
| 1.3134 | 0.0426 | 63 | 0.3226 |
| 1.2321 | 0.0487 | 72 | 0.3192 |
| 1.2888 | 0.0548 | 81 | 0.3175 |
| 1.2835 | 0.0609 | 90 | 0.3166 |
| 1.2445 | 0.0670 | 99 | 0.3164 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrHunghddddd/ed92a9aa-79f9-4d8c-b9bd-dde90f2405b5
|
mrHunghddddd
| 2025-02-01T11:56:35Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:02:25Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed92a9aa-79f9-4d8c-b9bd-dde90f2405b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/ed92a9aa-79f9-4d8c-b9bd-dde90f2405b5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ed92a9aa-79f9-4d8c-b9bd-dde90f2405b5
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.001 | 0.0786 | 200 | 0.0013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF
|
n00b001
| 2025-02-01T11:55:27Z | 38 | 0 |
peft
|
[
"peft",
"gguf",
"axolotl",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"dataset:ToastyPigeon/some-rp-extended",
"base_model:ToastyPigeon/new-ms-rp-test-ws",
"base_model:adapter:ToastyPigeon/new-ms-rp-test-ws",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-01T11:55:24Z |
---
library_name: peft
license: apache-2.0
base_model: ToastyPigeon/new-ms-rp-test-ws
tags:
- axolotl
- generated_from_trainer
- llama-cpp
- gguf-my-repo
datasets:
- ToastyPigeon/some-rp-extended
model-index:
- name: new-ms-rp-test-ws
results: []
---
# n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF
This model was converted to GGUF format from [`ToastyPigeon/new-ms-rp-test-ws`](https://huggingface.co/ToastyPigeon/new-ms-rp-test-ws) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ToastyPigeon/new-ms-rp-test-ws) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF --hf-file new-ms-rp-test-ws-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF --hf-file new-ms-rp-test-ws-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF --hf-file new-ms-rp-test-ws-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo n00b001/new-ms-rp-test-ws-Q4_K_M-GGUF --hf-file new-ms-rp-test-ws-q4_k_m.gguf -c 2048
```
|
p06pratibha/fine-tuned-opus-mt-en-fr
|
p06pratibha
| 2025-02-01T11:54:27Z | 157 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-01-25T10:41:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcwarden46/83574913-8be2-4e57-bc31-b1d81f4d9143
|
arcwarden46
| 2025-02-01T11:53:03Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T11:32:27Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83574913-8be2-4e57-bc31-b1d81f4d9143
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22587293b779bc55_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22587293b779bc55_train_data.json
type:
field_input: content
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/83574913-8be2-4e57-bc31-b1d81f4d9143
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/22587293b779bc55_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 83574913-8be2-4e57-bc31-b1d81f4d9143
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9796 | 0.0136 | 1 | 1.1896 |
| 0.5994 | 0.6803 | 50 | 0.5325 |
| 0.3914 | 1.3639 | 100 | 0.4064 |
| 0.3981 | 2.0476 | 150 | 0.3672 |
| 0.339 | 2.7279 | 200 | 0.3621 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
medmekk/Llama-3.2-1B-Instruct.GGUF
|
medmekk
| 2025-02-01T11:49:37Z | 403 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-01T11:48:12Z |
# medmekk/Llama-3.2-1B-Instruct.GGUF
GGUF quantized versions of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## Available Formats:
- `Q2_K`: Llama-3.2-1B-Instruct-Q2_K.gguf
- `Q3_K_S`: Llama-3.2-1B-Instruct-Q3_K_S.gguf
- `Q3_K_M`: Llama-3.2-1B-Instruct-Q3_K_M.gguf
- `Q3_K_L`: Llama-3.2-1B-Instruct-Q3_K_L.gguf
- `Q4_0`: Llama-3.2-1B-Instruct-Q4_0.gguf
- `Q4_K_S`: Llama-3.2-1B-Instruct-Q4_K_S.gguf
- `Q4_K_M`: Llama-3.2-1B-Instruct-Q4_K_M.gguf
- `Q5_0`: Llama-3.2-1B-Instruct-Q5_0.gguf
- `Q5_K_S`: Llama-3.2-1B-Instruct-Q5_K_S.gguf
- `Q5_K_M`: Llama-3.2-1B-Instruct-Q5_K_M.gguf
- `Q6_K`: Llama-3.2-1B-Instruct-Q6_K.gguf
- `Q8_0`: Llama-3.2-1B-Instruct-Q8_0.gguf
- `IQ3_M_IMAT`: Llama-3.2-1B-Instruct-IQ3_M_imat.gguf
- `IQ3_XXS_IMAT`: Llama-3.2-1B-Instruct-IQ3_XXS_imat.gguf
- `Q4_K_M_IMAT`: Llama-3.2-1B-Instruct-Q4_K_M_imat.gguf
- `Q4_K_S_IMAT`: Llama-3.2-1B-Instruct-Q4_K_S_imat.gguf
- `IQ4_NL_IMAT`: Llama-3.2-1B-Instruct-IQ4_NL_imat.gguf
- `IQ4_XS_IMAT`: Llama-3.2-1B-Instruct-IQ4_XS_imat.gguf
- `Q5_K_M_IMAT`: Llama-3.2-1B-Instruct-Q5_K_M_imat.gguf
- `Q5_K_S_IMAT`: Llama-3.2-1B-Instruct-Q5_K_S_imat.gguf
## Usage with llama.cpp:
```bash
# CLI:
llama-cli --hf-repo medmekk/Llama-3.2-1B-Instruct.GGUF --hf-file MODEL_FILE -p "Your prompt"
# Server:
llama-server --hf-repo medmekk/Llama-3.2-1B-Instruct.GGUF --hf-file MODEL_FILE -c 2048
```
|
rikiwi/AbstractPainting
|
rikiwi
| 2025-02-01T11:48:25Z | 22 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
text-to-image
| 2025-02-01T11:47:53Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1000003264.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: bigscience-bloom-rail-1.0
---
# Ave abstract painting
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/rikiwi/AbstractPainting/tree/main) them in the Files & versions tab.
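A minimal sketch of loading the LoRA with diffusers, assuming the standard FLUX.1-dev LoRA workflow (the prompt, settings, and weight-file resolution are assumptions; pass `weight_name=` to `load_lora_weights` if the repository uses a non-default file name):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the abstract-painting LoRA weights from this repository
pipe.load_lora_weights("rikiwi/AbstractPainting")

image = pipe(
    "an abstract painting with bold brush strokes",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("abstract.png")
```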
|
dheerajdevai/medicalquestion-answer-gpt2
|
dheerajdevai
| 2025-02-01T11:48:21Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T11:47:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alchemist69/e09f5bc8-136b-4f16-84bd-ce41d304532c
|
alchemist69
| 2025-02-01T11:46:36Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:37:41Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e09f5bc8-136b-4f16-84bd-ce41d304532c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18e4b6ceb7ea22d7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18e4b6ceb7ea22d7_train_data.json
type:
field_instruction: source_text
field_output: target_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/e09f5bc8-136b-4f16-84bd-ce41d304532c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/18e4b6ceb7ea22d7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55d79518-633c-4140-bb9f-1e0392c95610
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55d79518-633c-4140-bb9f-1e0392c95610
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e09f5bc8-136b-4f16-84bd-ce41d304532c
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.879 | 0.0001 | 1 | 6.4376 |
| 5.5543 | 0.0036 | 50 | 4.9438 |
| 4.694 | 0.0071 | 100 | 4.4399 |
| 4.9229 | 0.0107 | 150 | 4.2068 |
| 5.0547 | 0.0142 | 200 | 4.1505 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
bane5631/7044cb13-332f-4cb2-858c-26635c953ee3
|
bane5631
| 2025-02-01T11:43:46Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:53:19Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7044cb13-332f-4cb2-858c-26635c953ee3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 941f453fb96e0898_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/941f453fb96e0898_train_data.json
type:
field_instruction: source_text
field_output: target_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bane5631/7044cb13-332f-4cb2-858c-26635c953ee3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/941f453fb96e0898_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5079f05e-7dbd-403e-b28e-14c8430c58eb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5079f05e-7dbd-403e-b28e-14c8430c58eb
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7044cb13-332f-4cb2-858c-26635c953ee3
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8257 | 0.0071 | 200 | 3.3437 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lucifer-ms/task-1-google-gemma-2b
|
lucifer-ms
| 2025-02-01T11:40:05Z | 449 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-01-22T16:36:10Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
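Since this section is left empty, here is a minimal, hypothetical sketch (not provided by the author) of loading this PEFT adapter on top of its `google/gemma-2b` base model; the prompt and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"                       # base model listed in the card metadata
adapter_id = "lucifer-ms/task-1-google-gemma-2b"  # this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```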
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
nhung03/bbfdb9e1-eda4-408e-a554-0f2fb4a2e201
|
nhung03
| 2025-02-01T11:39:20Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:02:26Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bbfdb9e1-eda4-408e-a554-0f2fb4a2e201
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/bbfdb9e1-eda4-408e-a554-0f2fb4a2e201
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bbfdb9e1-eda4-408e-a554-0f2fb4a2e201
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.001 | 0.0786 | 200 | 0.0014 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hongngo/bd3d62cc-e5c3-4a57-8933-4c2389d6f37c
|
hongngo
| 2025-02-01T11:39:18Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:02:25Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd3d62cc-e5c3-4a57-8933-4c2389d6f37c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/bd3d62cc-e5c3-4a57-8933-4c2389d6f37c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bd3d62cc-e5c3-4a57-8933-4c2389d6f37c
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0008 | 0.0786 | 200 | 0.0013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thangla01/a9491cfd-4149-40fa-ac9b-fb70ebdd8a11
|
thangla01
| 2025-02-01T11:39:14Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:02:24Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9491cfd-4149-40fa-ac9b-fb70ebdd8a11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/a9491cfd-4149-40fa-ac9b-fb70ebdd8a11
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a9491cfd-4149-40fa-ac9b-fb70ebdd8a11
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0013 | 0.0786 | 200 | 0.0012 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cunghoctienganh/08d67801-48e1-4e6b-becb-f19639ddc412
|
cunghoctienganh
| 2025-02-01T11:37:39Z | 15 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:02:29Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 08d67801-48e1-4e6b-becb-f19639ddc412
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/08d67801-48e1-4e6b-becb-f19639ddc412
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 08d67801-48e1-4e6b-becb-f19639ddc412
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0008 | 0.0786 | 200 | 0.0012 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
laquythang/426ea81c-54ee-4fda-9a46-6654be23326f
|
laquythang
| 2025-02-01T11:36:19Z | 18 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:01:21Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 426ea81c-54ee-4fda-9a46-6654be23326f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/426ea81c-54ee-4fda-9a46-6654be23326f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 426ea81c-54ee-4fda-9a46-6654be23326f
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0009 | 0.0786 | 200 | 0.0013 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
botenius/b90ee90e-142b-4e27-8c64-c8d4f6e40abd
|
botenius
| 2025-02-01T11:34:02Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:24:35Z |
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b90ee90e-142b-4e27-8c64-c8d4f6e40abd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1b8b2b04d77b5e9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1b8b2b04d77b5e9_train_data.json
type:
field_instruction: prompt
field_output: model_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/b90ee90e-142b-4e27-8c64-c8d4f6e40abd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1b8b2b04d77b5e9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b90ee90e-142b-4e27-8c64-c8d4f6e40abd
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.038 | 0.0138 | 200 | 1.6194 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
arcwarden46/e9ae35fc-91c5-4ade-a01b-c67f44ae291c
|
arcwarden46
| 2025-02-01T11:30:33Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:24:10Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e9ae35fc-91c5-4ade-a01b-c67f44ae291c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8752aff936d5c852_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8752aff936d5c852_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/e9ae35fc-91c5-4ade-a01b-c67f44ae291c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/8752aff936d5c852_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88425d2c-62ef-4adf-945e-6ac9fafdb1dd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 88425d2c-62ef-4adf-945e-6ac9fafdb1dd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e9ae35fc-91c5-4ade-a01b-c67f44ae291c
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08, and optimizer arguments adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.432 | 0.0000 | 1 | 3.0365 |
| 3.2566 | 0.0022 | 50 | 2.8335 |
| 3.3407 | 0.0043 | 100 | 2.5393 |
| 3.7431 | 0.0065 | 150 | 2.2218 |
| 3.4656 | 0.0086 | 200 | 2.2041 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
robiual-awal/37b83b48-1d77-4f88-bcce-00d376fafd88
|
robiual-awal
| 2025-02-01T11:30:00Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-02-01T11:24:22Z |
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37b83b48-1d77-4f88-bcce-00d376fafd88
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1b8b2b04d77b5e9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1b8b2b04d77b5e9_train_data.json
type:
field_instruction: prompt
field_output: model_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/37b83b48-1d77-4f88-bcce-00d376fafd88
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1b8b2b04d77b5e9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 37b83b48-1d77-4f88-bcce-00d376fafd88
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9582 |
| 6.2262 | 0.0034 | 50 | 1.6357 |
| 6.5949 | 0.0069 | 100 | 1.5843 |
| 6.4406 | 0.0103 | 150 | 1.5637 |
| 6.6867 | 0.0138 | 200 | 1.5508 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
adammandic87/8b4c85c0-caa6-4736-9add-7877b36118b5
|
adammandic87
| 2025-02-01T11:28:25Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-02-01T11:22:42Z |
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b4c85c0-caa6-4736-9add-7877b36118b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1b8b2b04d77b5e9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1b8b2b04d77b5e9_train_data.json
type:
field_instruction: prompt
field_output: model_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/8b4c85c0-caa6-4736-9add-7877b36118b5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1b8b2b04d77b5e9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
wandb_project: Birthday-SN56-34-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7d09926f-711d-4cdd-b3f0-b3dd3266426e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8b4c85c0-caa6-4736-9add-7877b36118b5
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9581 |
| 6.2231 | 0.0034 | 50 | 1.6352 |
| 6.5959 | 0.0069 | 100 | 1.5832 |
| 6.4388 | 0.0103 | 150 | 1.5627 |
| 6.6822 | 0.0138 | 200 | 1.5499 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leixa/dcafe8ad-7d56-444b-a6d4-8362ff2367da
|
leixa
| 2025-02-01T11:26:43Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-02-01T11:03:02Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dcafe8ad-7d56-444b-a6d4-8362ff2367da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2fbab8c1a175ddba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2fbab8c1a175ddba_train_data.json
type:
field_input: dataset
field_instruction: input
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/dcafe8ad-7d56-444b-a6d4-8362ff2367da
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/2fbab8c1a175ddba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 53b91699-f1c7-405a-883e-084d874dd816
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 53b91699-f1c7-405a-883e-084d874dd816
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dcafe8ad-7d56-444b-a6d4-8362ff2367da
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 13.5129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0046 | 1 | 13.5129 |
| 13.5375 | 0.0413 | 9 | 13.5129 |
| 13.0794 | 0.0826 | 18 | 13.5129 |
| 13.6611 | 0.1239 | 27 | 13.5129 |
| 13.6841 | 0.1651 | 36 | 13.5129 |
| 13.1899 | 0.2064 | 45 | 13.5129 |
| 13.6396 | 0.2477 | 54 | 13.5129 |
| 13.7636 | 0.2890 | 63 | 13.5129 |
| 13.704 | 0.3303 | 72 | 13.5129 |
| 13.576 | 0.3716 | 81 | 13.5129 |
| 14.2948 | 0.4128 | 90 | 13.5129 |
| 13.9451 | 0.4541 | 99 | 13.5129 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rikiwi/MiniFarms
|
rikiwi
| 2025-02-01T11:26:34Z | 16 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:artistic-2.0",
"region:us"
] |
text-to-image
| 2025-02-01T11:26:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1000003299.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Farm
license: artistic-2.0
---
# AvenersFarm
<Gallery />
## Trigger words
You should use `Farm` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/rikiwi/MiniFarms/tree/main) them in the Files & versions tab.
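## Usage
A minimal sketch of using this LoRA with diffusers (assumptions: the adapter loads through the standard `load_lora_weights` API, a GPU with enough memory for FLUX.1-dev is available, and the prompt is only an illustration):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (gated on the Hub; accept its license first)
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("rikiwi/MiniFarms")  # attach this LoRA
pipe.to("cuda")

# Include the trigger word `Farm` in the prompt
image = pipe("a miniature Farm diorama on a floating island", num_inference_steps=28).images[0]
image.save("minifarm.png")
```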
|
roleplaiapp/L3.3-Nevoria-R1-70b-IQ3_M-GGUF
|
roleplaiapp
| 2025-02-01T11:26:11Z | 12 | 0 |
transformers
|
[
"transformers",
"gguf",
"70b",
"IQ3_M",
"iq3",
"l33",
"llama-cpp",
"nevoria",
"text-generation",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-02-01T11:24:17Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 70b
- IQ3_M
- gguf
- iq3
- l33
- llama-cpp
- nevoria
- text-generation
---
# roleplaiapp/L3.3-Nevoria-R1-70b-IQ3_M-GGUF
**Repo:** `roleplaiapp/L3.3-Nevoria-R1-70b-IQ3_M-GGUF`
**Original Model:** `L3.3-Nevoria-R1-70b`
**Quantized File:** `L3.3-Nevoria-R1-70b-IQ3_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ3_M`
## Overview
This is a GGUF IQ3_M quantized version of L3.3-Nevoria-R1-70b.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
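## Usage
A minimal sketch using the `llama-cpp-python` bindings (an assumption — any llama.cpp-compatible runtime works). The repo and file names are taken from above; the context size, GPU offload, and prompt are placeholders, and a 70B IQ3_M quant still requires substantial memory.

```python
from llama_cpp import Llama

# Download the IQ3_M GGUF file from this repository and load it
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/L3.3-Nevoria-R1-70b-IQ3_M-GGUF",
    filename="L3.3-Nevoria-R1-70b-IQ3_M.gguf",
    n_ctx=2048,        # placeholder context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Write a short scene-setting paragraph for a fantasy roleplay.", max_tokens=128)
print(out["choices"][0]["text"])
```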
|
botenius/f943e548-286b-41d7-8270-db06d9b84c63
|
botenius
| 2025-02-01T11:21:20Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T11:03:39Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f943e548-286b-41d7-8270-db06d9b84c63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2fbab8c1a175ddba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2fbab8c1a175ddba_train_data.json
type:
field_input: dataset
field_instruction: input
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/f943e548-286b-41d7-8270-db06d9b84c63
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2fbab8c1a175ddba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 53b91699-f1c7-405a-883e-084d874dd816
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 53b91699-f1c7-405a-883e-084d874dd816
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# f943e548-286b-41d7-8270-db06d9b84c63
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 13.1379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.5608 | 0.2294 | 200 | 13.1379 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF
|
StickyWorm
| 2025-02-01T11:20:35Z | 32 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"bg",
"ca",
"code",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fi",
"fr",
"ga",
"gl",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"nn",
"oc",
"pl",
"pt",
"ro",
"ru",
"sh",
"sk",
"sl",
"sr",
"sv",
"uk",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:joelniklaus/eurlex_resources",
"dataset:joelito/legal-mc4",
"dataset:projecte-aina/CATalog",
"dataset:UFRGS/brwac",
"dataset:community-datasets/hrwac",
"dataset:danish-foundation-models/danish-gigaword",
"dataset:HiTZ/euscrawl",
"dataset:PleIAs/French-PD-Newspapers",
"dataset:PleIAs/French-PD-Books",
"dataset:AI-team-UoA/greek_legal_code",
"dataset:HiTZ/latxa-corpus-v1.1",
"dataset:allenai/peS2o",
"dataset:pile-of-law/pile-of-law",
"dataset:PORTULAN/parlamento-pt",
"dataset:hoskinson-center/proof-pile",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/starcoderdata",
"dataset:bjoernp/tagesschau-2018-2023",
"dataset:EleutherAI/the_pile_deduplicated",
"base_model:BSC-LT/salamandra-2b-instruct",
"base_model:quantized:BSC-LT/salamandra-2b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-01T11:20:24Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- "no"
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
datasets:
- oscar-corpus/colossal-oscar-1.0
- HuggingFaceFW/fineweb-edu
- joelniklaus/eurlex_resources
- joelito/legal-mc4
- projecte-aina/CATalog
- UFRGS/brwac
- community-datasets/hrwac
- danish-foundation-models/danish-gigaword
- HiTZ/euscrawl
- PleIAs/French-PD-Newspapers
- PleIAs/French-PD-Books
- AI-team-UoA/greek_legal_code
- HiTZ/latxa-corpus-v1.1
- allenai/peS2o
- pile-of-law/pile-of-law
- PORTULAN/parlamento-pt
- hoskinson-center/proof-pile
- togethercomputer/RedPajama-Data-1T
- bigcode/starcoderdata
- bjoernp/tagesschau-2018-2023
- EleutherAI/the_pile_deduplicated
base_model: BSC-LT/salamandra-2b-instruct
tags:
- llama-cpp
- gguf-my-repo
---
# StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`BSC-LT/salamandra-2b-instruct`](https://huggingface.co/BSC-LT/salamandra-2b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BSC-LT/salamandra-2b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF --hf-file salamandra-2b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF --hf-file salamandra-2b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF --hf-file salamandra-2b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo StickyWorm/salamandra-2b-instruct-Q4_K_M-GGUF --hf-file salamandra-2b-instruct-q4_k_m.gguf -c 2048
```
|
nttx/a86c724a-42ad-42ce-9135-5fba95c8c9b6
|
nttx
| 2025-02-01T11:20:09Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-02-01T11:00:17Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a86c724a-42ad-42ce-9135-5fba95c8c9b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/a86c724a-42ad-42ce-9135-5fba95c8c9b6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a86c724a-42ad-42ce-9135-5fba95c8c9b6
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
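The card does not yet document inference, so here is a minimal sketch, assuming the standard 🤗 PEFT adapter-loading flow; the prompt mirrors the `'{instruction} {input}'` template from the axolotl config above, and the example strings are placeholders:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4"
adapter_id = "nttx/a86c724a-42ad-42ce-9135-5fba95c8c9b6"

# Load the base model, then attach this LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt follows the '{instruction} {input}' format used during training (placeholder text).
prompt = "Summarize the following text. The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```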
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0009 | 0.1572 | 200 | 0.0009 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF
|
NikolayKozloff
| 2025-02-01T11:18:42Z | 9 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:quantized:arcee-ai/Virtuoso-Small-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-01T11:18:03Z |
---
base_model: arcee-ai/Virtuoso-Small-v2
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`arcee-ai/Virtuoso-Small-v2`](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF --hf-file virtuoso-small-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF --hf-file virtuoso-small-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF --hf-file virtuoso-small-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q4_K_M-GGUF --hf-file virtuoso-small-v2-q4_k_m.gguf -c 2048
```
|
memevis/nano17
|
memevis
| 2025-02-01T11:17:56Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T11:12:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
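A minimal sketch, assuming the repository ships a standard config and tokenizer so the 🤗 `text-generation` pipeline can load it directly (the prompt is illustrative):
```python
from transformers import pipeline

# Load the checkpoint through the high-level text-generation pipeline.
generator = pipeline("text-generation", model="memevis/nano17")

print(generator("Hello, my name is", max_new_tokens=50)[0]["generated_text"])
```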
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pniedzwiedzinski/donut-demo-2
|
pniedzwiedzinski
| 2025-02-01T11:17:04Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-31T15:01:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
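A minimal sketch, assuming the repository includes the Donut processor files alongside the vision-encoder-decoder weights; the task prompt `<s_cord-v2>` and the input image path are guesses, since the fine-tuning task is not documented here:
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "pniedzwiedzinski/donut-demo-2"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

# Encode a document image and prime the decoder with a task prompt.
image = Image.open("document.png").convert("RGB")  # hypothetical input file
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumption: replace with the prompt used during fine-tuning
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```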
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roleplaiapp/L3.3-Nevoria-R1-70b-Q5_K_M-GGUF
|
roleplaiapp
| 2025-02-01T11:16:51Z | 24 | 0 |
transformers
|
[
"transformers",
"gguf",
"5-bit",
"70b",
"Q5_K_M",
"l33",
"llama-cpp",
"nevoria",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T11:12:22Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 5-bit
- 70b
- Q5_K_M
- gguf
- l33
- llama-cpp
- nevoria
- text-generation
---
# roleplaiapp/L3.3-Nevoria-R1-70b-Q5_K_M-GGUF
**Repo:** `roleplaiapp/L3.3-Nevoria-R1-70b-Q5_K_M-GGUF`
**Original Model:** `L3.3-Nevoria-R1-70b`
**Quantized File:** `L3.3-Nevoria-R1-70b-Q5_K_M/L3.3-Nevoria-R1-70b-Q5_K_M-00001-of-00002.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_K_M`
## Overview
This is a GGUF Q5_K_M quantized version of L3.3-Nevoria-R1-70b.
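No usage snippet is included on this card; as a minimal sketch, you could fetch both GGUF shards with `huggingface_hub` and point a local llama.cpp build at the first shard (llama.cpp picks up the remaining split automatically when it sits in the same directory):
```python
from huggingface_hub import snapshot_download

# Download both GGUF shards of the split quantization into a local directory.
local_dir = snapshot_download(
    repo_id="roleplaiapp/L3.3-Nevoria-R1-70b-Q5_K_M-GGUF",
    allow_patterns=["*.gguf"],
)

# Then run llama.cpp against the first shard, e.g.:
#   llama-cli -m <local_dir>/L3.3-Nevoria-R1-70b-Q5_K_M/L3.3-Nevoria-R1-70b-Q5_K_M-00001-of-00002.gguf -p "Hello"
print(local_dir)
```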
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
mrferr3t/a4a52d28-0413-4b0c-9d56-8ac3e3aef8d9
|
mrferr3t
| 2025-02-01T11:15:16Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | 2025-02-01T10:57:45Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4a52d28-0413-4b0c-9d56-8ac3e3aef8d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 941f453fb96e0898_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/941f453fb96e0898_train_data.json
type:
field_instruction: source_text
field_output: target_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/a4a52d28-0413-4b0c-9d56-8ac3e3aef8d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/941f453fb96e0898_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5079f05e-7dbd-403e-b28e-14c8430c58eb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5079f05e-7dbd-403e-b28e-14c8430c58eb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4a52d28-0413-4b0c-9d56-8ac3e3aef8d9
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9962 | 0.0000 | 1 | 4.7231 |
| 3.6592 | 0.0009 | 50 | 3.4152 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
WUw0/7153482216-6
|
WUw0
| 2025-02-01T11:13:02Z | 15 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-29T13:37:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 7153482216-6
---
# 7153482216 6
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `7153482216-6` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WUw0/7153482216-6', weight_name='lora.safetensors')

# Include the trigger word `7153482216-6` in your prompt to activate the LoRA.
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
rorito/perfecthandFlux
|
rorito
| 2025-02-01T11:12:44Z | 17 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-02-01T11:12:32Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "<lora:Hand v2:1>"
output:
url: images/00061-3789446010.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# fluxhand
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/rorito/perfecthandFlux/tree/main) them in the Files & versions tab.
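A minimal sketch for loading the LoRA with 🧨 diffusers, assuming a single Safetensors LoRA file in the repo; `weight_name` is a placeholder and should be set to the actual filename from the Files & versions tab:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")

# weight_name is hypothetical -- replace it with the LoRA .safetensors file from this repo.
pipe.load_lora_weights("rorito/perfecthandFlux", weight_name="lora.safetensors")

image = pipe("a close-up photo of a hand").images[0]
image.save("hand.png")
```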
|
Bhaveen/medimix-whisper-fine-tuned
|
Bhaveen
| 2025-02-01T11:12:11Z | 54 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-01T10:05:57Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small En Medimix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En Medimix
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the cutsom_whatsapp_audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5352
- Wer: 12.2137
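To transcribe your own audio with this checkpoint, a minimal sketch using the 🤗 `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint through the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Bhaveen/medimix-whisper-fine-tuned")

result = asr("voice_note.wav")  # hypothetical path to a local audio file
print(result["text"])
```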
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:----:|:---------------:|:-------:|
| 0.0004 | 66.6667 | 200 | 0.4815 | 11.8321 |
| 0.0001 | 133.3333 | 400 | 0.5102 | 11.4504 |
| 0.0001 | 200.0 | 600 | 0.5246 | 11.4504 |
| 0.0001 | 266.6667 | 800 | 0.5323 | 12.2137 |
| 0.0001 | 333.3333 | 1000 | 0.5352 | 12.2137 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
|
kk-aivio/ed6312f2-f0cb-4438-9c09-ff059d8f45e3
|
kk-aivio
| 2025-02-01T11:11:23Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-02-01T11:02:27Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed6312f2-f0cb-4438-9c09-ff059d8f45e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/ed6312f2-f0cb-4438-9c09-ff059d8f45e3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed6312f2-f0cb-4438-9c09-ff059d8f45e3
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.657 | 0.0196 | 50 | nan |
| 0.3429 | 0.0393 | 100 | nan |
| 0.1382 | 0.0589 | 150 | nan |
| 0.0 | 0.0786 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF
|
NikolayKozloff
| 2025-02-01T11:10:52Z | 5 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:quantized:arcee-ai/Virtuoso-Small-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-01T11:10:07Z |
---
base_model: arcee-ai/Virtuoso-Small-v2
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF
This model was converted to GGUF format from [`arcee-ai/Virtuoso-Small-v2`](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF --hf-file virtuoso-small-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF --hf-file virtuoso-small-v2-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF --hf-file virtuoso-small-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Virtuoso-Small-v2-Q5_K_S-GGUF --hf-file virtuoso-small-v2-q5_k_s.gguf -c 2048
```
|
cimol/9ef14f69-ebd3-49de-998d-222171ffa8f3
|
cimol
| 2025-02-01T11:09:46Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-02-01T11:08:25Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ef14f69-ebd3-49de-998d-222171ffa8f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- e40773c2e24ae20f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e40773c2e24ae20f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/9ef14f69-ebd3-49de-998d-222171ffa8f3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 7.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e40773c2e24ae20f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b989a2c7-32d0-4a72-b4fa-b25cde863b42
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b989a2c7-32d0-4a72-b4fa-b25cde863b42
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9ef14f69-ebd3-49de-998d-222171ffa8f3
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.932 | 0.0071 | 1 | 11.9302 |
| 11.9226 | 0.3571 | 50 | 11.9227 |
| 11.8939 | 0.7143 | 100 | 11.9166 |
| 11.923 | 1.0714 | 150 | 11.9159 |
| 11.911 | 1.4286 | 200 | 11.9156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
shibajustfor/5cef1f96-df62-4f3b-a177-ef66479c0100
|
shibajustfor
| 2025-02-01T11:09:28Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-02-01T11:09:03Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5cef1f96-df62-4f3b-a177-ef66479c0100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e40773c2e24ae20f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e40773c2e24ae20f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/5cef1f96-df62-4f3b-a177-ef66479c0100
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e40773c2e24ae20f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b989a2c7-32d0-4a72-b4fa-b25cde863b42
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b989a2c7-32d0-4a72-b4fa-b25cde863b42
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5cef1f96-df62-4f3b-a177-ef66479c0100
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 11.9301 |
| 11.9322 | 0.0232 | 13 | 11.9299 |
| 11.9307 | 0.0465 | 26 | 11.9297 |
| 11.9294 | 0.0697 | 39 | 11.9295 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dixedus/74eb2245-aef4-4d49-8125-f2c8086f2bba
|
dixedus
| 2025-02-01T11:09:20Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-02-01T11:08:20Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 74eb2245-aef4-4d49-8125-f2c8086f2bba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e40773c2e24ae20f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e40773c2e24ae20f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dixedus/74eb2245-aef4-4d49-8125-f2c8086f2bba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e40773c2e24ae20f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: b989a2c7-32d0-4a72-b4fa-b25cde863b42
wandb_project: Gradients-On-Eight
wandb_run: your_name
wandb_runid: b989a2c7-32d0-4a72-b4fa-b25cde863b42
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 74eb2245-aef4-4d49-8125-f2c8086f2bba
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0071 | 1 | 11.9306 |
| 11.931 | 0.0643 | 9 | 11.9305 |
| 11.9315 | 0.1286 | 18 | 11.9304 |
| 11.9311 | 0.1929 | 27 | 11.9302 |
| 11.9312 | 0.2571 | 36 | 11.9300 |
| 11.9312 | 0.3214 | 45 | 11.9298 |
| 11.9308 | 0.3857 | 54 | 11.9296 |
| 11.9309 | 0.45 | 63 | 11.9294 |
| 11.9295 | 0.5143 | 72 | 11.9292 |
| 11.931 | 0.5786 | 81 | 11.9291 |
| 11.9279 | 0.6429 | 90 | 11.9291 |
| 11.9296 | 0.7071 | 99 | 11.9291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso18/ac8661f4-79f3-45d1-9d6f-a66b0760303a
|
lesso18
| 2025-02-01T11:08:51Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-02-01T11:08:25Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ac8661f4-79f3-45d1-9d6f-a66b0760303a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e40773c2e24ae20f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e40773c2e24ae20f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso18/ac8661f4-79f3-45d1-9d6f-a66b0760303a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/e40773c2e24ae20f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b989a2c7-32d0-4a72-b4fa-b25cde863b42
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: b989a2c7-32d0-4a72-b4fa-b25cde863b42
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ac8661f4-79f3-45d1-9d6f-a66b0760303a
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9306 | 0.3576 | 200 | 11.9297 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrferr3t/7b78b240-d2d1-4884-b8a5-c6519790cf59
|
mrferr3t
| 2025-02-01T11:06:23Z | 17 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-02-01T11:01:54Z |
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b78b240-d2d1-4884-b8a5-c6519790cf59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f206ba1093bd24a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f206ba1093bd24a7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: original_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/7b78b240-d2d1-4884-b8a5-c6519790cf59
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/f206ba1093bd24a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1136fcf6-c30e-4d43-9aeb-2a86b219d103
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7b78b240-d2d1-4884-b8a5-c6519790cf59
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1254 | 0.0004 | 1 | 0.1242 |
| 0.0064 | 0.0196 | 50 | 0.0043 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
arunvinc/medicalqna-gpt2
|
arunvinc
| 2025-02-01T11:02:42Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T09:45:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
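A minimal sketch, assuming the checkpoint loads through the standard 🤗 `text-generation` pipeline; the question-and-answer prompt format is a guess, since the training prompt template is not documented here:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="arunvinc/medicalqna-gpt2")

# Prompt format is an assumption -- adjust it to whatever the model was trained on.
prompt = "Question: What are the common symptoms of anemia?\nAnswer:"
print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```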
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kalyankamarajugadda/gita-text-generation-gpt2
|
kalyankamarajugadda
| 2025-02-01T11:01:30Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T11:00:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
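Until the card is filled in, here is a minimal, hedged sketch of loading this checkpoint with the standard 🤗 Transformers text-generation pipeline; the repository id comes from this card's metadata, and the prompt is purely illustrative.
```python
# Hedged sketch: standard transformers text-generation pipeline (not from the original card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kalyankamarajugadda/gita-text-generation-gpt2",  # this repository
)

# Illustrative prompt only; adjust generation settings to taste.
print(generator("You have the right to perform your duty,", max_new_tokens=50)[0]["generated_text"])
```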
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
genki10/ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold4
|
genki10
| 2025-02-01T10:58:02Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-01T10:34:16Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Qwk: 0.5021
- Mse: 0.6804
- Rmse: 0.8249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
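For illustration only (not part of the auto-generated card), these hyperparameters correspond roughly to the following 🤗 Transformers `TrainingArguments`; the `output_dir` name is an assumption.
```python
# Hedged sketch: the hyperparameters above expressed as transformers TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold4",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```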
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 11.1209 | -0.0157 | 11.1209 | 3.3348 |
| No log | 2.0 | 4 | 8.7746 | 0.0018 | 8.7746 | 2.9622 |
| No log | 3.0 | 6 | 6.9364 | 0.0018 | 6.9364 | 2.6337 |
| No log | 4.0 | 8 | 5.5324 | 0.0274 | 5.5324 | 2.3521 |
| 6.6034 | 5.0 | 10 | 4.2492 | 0.0156 | 4.2492 | 2.0614 |
| 6.6034 | 6.0 | 12 | 3.2808 | 0.0040 | 3.2808 | 1.8113 |
| 6.6034 | 7.0 | 14 | 2.5080 | 0.0048 | 2.5080 | 1.5837 |
| 6.6034 | 8.0 | 16 | 2.0068 | 0.0482 | 2.0068 | 1.4166 |
| 6.6034 | 9.0 | 18 | 1.6132 | 0.0420 | 1.6132 | 1.2701 |
| 2.5428 | 10.0 | 20 | 1.3131 | 0.0212 | 1.3131 | 1.1459 |
| 2.5428 | 11.0 | 22 | 1.1359 | 0.0316 | 1.1359 | 1.0658 |
| 2.5428 | 12.0 | 24 | 1.3030 | 0.0771 | 1.3030 | 1.1415 |
| 2.5428 | 13.0 | 26 | 1.2342 | 0.0855 | 1.2342 | 1.1110 |
| 2.5428 | 14.0 | 28 | 0.9099 | 0.0600 | 0.9099 | 0.9539 |
| 1.8254 | 15.0 | 30 | 0.9973 | 0.1133 | 0.9973 | 0.9986 |
| 1.8254 | 16.0 | 32 | 0.9950 | 0.2223 | 0.9950 | 0.9975 |
| 1.8254 | 17.0 | 34 | 0.7981 | 0.4332 | 0.7981 | 0.8933 |
| 1.8254 | 18.0 | 36 | 0.6714 | 0.5183 | 0.6714 | 0.8194 |
| 1.8254 | 19.0 | 38 | 0.7042 | 0.4804 | 0.7042 | 0.8392 |
| 1.3223 | 20.0 | 40 | 0.6472 | 0.5065 | 0.6472 | 0.8045 |
| 1.3223 | 21.0 | 42 | 0.6011 | 0.4998 | 0.6011 | 0.7753 |
| 1.3223 | 22.0 | 44 | 0.5965 | 0.4949 | 0.5965 | 0.7723 |
| 1.3223 | 23.0 | 46 | 0.5990 | 0.4922 | 0.5990 | 0.7739 |
| 1.3223 | 24.0 | 48 | 0.5436 | 0.5149 | 0.5436 | 0.7373 |
| 0.6976 | 25.0 | 50 | 0.7030 | 0.4948 | 0.7030 | 0.8385 |
| 0.6976 | 26.0 | 52 | 0.6425 | 0.4953 | 0.6425 | 0.8016 |
| 0.6976 | 27.0 | 54 | 0.6427 | 0.4759 | 0.6427 | 0.8017 |
| 0.6976 | 28.0 | 56 | 0.7482 | 0.4521 | 0.7482 | 0.8650 |
| 0.6976 | 29.0 | 58 | 0.6907 | 0.5331 | 0.6907 | 0.8311 |
| 0.3639 | 30.0 | 60 | 0.6864 | 0.5512 | 0.6864 | 0.8285 |
| 0.3639 | 31.0 | 62 | 0.8570 | 0.4641 | 0.8570 | 0.9257 |
| 0.3639 | 32.0 | 64 | 0.6410 | 0.5539 | 0.6410 | 0.8006 |
| 0.3639 | 33.0 | 66 | 0.6630 | 0.5442 | 0.6630 | 0.8143 |
| 0.3639 | 34.0 | 68 | 0.7999 | 0.4648 | 0.7999 | 0.8943 |
| 0.2713 | 35.0 | 70 | 0.7183 | 0.4937 | 0.7183 | 0.8475 |
| 0.2713 | 36.0 | 72 | 0.7293 | 0.4991 | 0.7293 | 0.8540 |
| 0.2713 | 37.0 | 74 | 0.8183 | 0.4629 | 0.8183 | 0.9046 |
| 0.2713 | 38.0 | 76 | 0.6861 | 0.5125 | 0.6861 | 0.8283 |
| 0.2713 | 39.0 | 78 | 0.6470 | 0.5371 | 0.6470 | 0.8044 |
| 0.1705 | 40.0 | 80 | 0.7218 | 0.5471 | 0.7218 | 0.8496 |
| 0.1705 | 41.0 | 82 | 0.6592 | 0.5299 | 0.6592 | 0.8119 |
| 0.1705 | 42.0 | 84 | 0.7195 | 0.4795 | 0.7195 | 0.8483 |
| 0.1705 | 43.0 | 86 | 0.8045 | 0.4526 | 0.8045 | 0.8970 |
| 0.1705 | 44.0 | 88 | 0.7435 | 0.4670 | 0.7435 | 0.8623 |
| 0.1643 | 45.0 | 90 | 0.7635 | 0.5157 | 0.7635 | 0.8738 |
| 0.1643 | 46.0 | 92 | 0.6574 | 0.5198 | 0.6574 | 0.8108 |
| 0.1643 | 47.0 | 94 | 0.7274 | 0.5531 | 0.7274 | 0.8529 |
| 0.1643 | 48.0 | 96 | 0.6681 | 0.5602 | 0.6681 | 0.8174 |
| 0.1643 | 49.0 | 98 | 0.6613 | 0.5276 | 0.6613 | 0.8132 |
| 0.165 | 50.0 | 100 | 0.6779 | 0.5002 | 0.6779 | 0.8234 |
| 0.165 | 51.0 | 102 | 0.6650 | 0.5086 | 0.6650 | 0.8155 |
| 0.165 | 52.0 | 104 | 0.6254 | 0.5311 | 0.6254 | 0.7908 |
| 0.165 | 53.0 | 106 | 0.6492 | 0.5679 | 0.6492 | 0.8057 |
| 0.165 | 54.0 | 108 | 0.6459 | 0.5476 | 0.6459 | 0.8037 |
| 0.1146 | 55.0 | 110 | 0.6640 | 0.5016 | 0.6640 | 0.8149 |
| 0.1146 | 56.0 | 112 | 0.6997 | 0.4750 | 0.6997 | 0.8365 |
| 0.1146 | 57.0 | 114 | 0.7033 | 0.4754 | 0.7033 | 0.8386 |
| 0.1146 | 58.0 | 116 | 0.7129 | 0.4819 | 0.7129 | 0.8443 |
| 0.1146 | 59.0 | 118 | 0.6873 | 0.4774 | 0.6873 | 0.8290 |
| 0.0861 | 60.0 | 120 | 0.6961 | 0.5109 | 0.6961 | 0.8343 |
| 0.0861 | 61.0 | 122 | 0.6808 | 0.5319 | 0.6808 | 0.8251 |
| 0.0861 | 62.0 | 124 | 0.6856 | 0.5110 | 0.6856 | 0.8280 |
| 0.0861 | 63.0 | 126 | 0.6850 | 0.5130 | 0.6850 | 0.8277 |
| 0.0861 | 64.0 | 128 | 0.6803 | 0.5179 | 0.6803 | 0.8248 |
| 0.0785 | 65.0 | 130 | 0.6657 | 0.5129 | 0.6657 | 0.8159 |
| 0.0785 | 66.0 | 132 | 0.6568 | 0.5309 | 0.6568 | 0.8105 |
| 0.0785 | 67.0 | 134 | 0.6509 | 0.5216 | 0.6509 | 0.8068 |
| 0.0785 | 68.0 | 136 | 0.6608 | 0.5250 | 0.6608 | 0.8129 |
| 0.0785 | 69.0 | 138 | 0.6653 | 0.5107 | 0.6653 | 0.8157 |
| 0.0731 | 70.0 | 140 | 0.6596 | 0.5168 | 0.6596 | 0.8121 |
| 0.0731 | 71.0 | 142 | 0.6484 | 0.5240 | 0.6484 | 0.8053 |
| 0.0731 | 72.0 | 144 | 0.6503 | 0.5401 | 0.6503 | 0.8064 |
| 0.0731 | 73.0 | 146 | 0.6622 | 0.5133 | 0.6622 | 0.8137 |
| 0.0731 | 74.0 | 148 | 0.6903 | 0.5059 | 0.6903 | 0.8308 |
| 0.0682 | 75.0 | 150 | 0.6977 | 0.4960 | 0.6977 | 0.8353 |
| 0.0682 | 76.0 | 152 | 0.6871 | 0.4985 | 0.6871 | 0.8289 |
| 0.0682 | 77.0 | 154 | 0.6751 | 0.5075 | 0.6751 | 0.8216 |
| 0.0682 | 78.0 | 156 | 0.6674 | 0.5051 | 0.6674 | 0.8170 |
| 0.0682 | 79.0 | 158 | 0.6755 | 0.5081 | 0.6755 | 0.8219 |
| 0.0669 | 80.0 | 160 | 0.6913 | 0.5010 | 0.6913 | 0.8314 |
| 0.0669 | 81.0 | 162 | 0.6989 | 0.4971 | 0.6989 | 0.8360 |
| 0.0669 | 82.0 | 164 | 0.6937 | 0.5027 | 0.6937 | 0.8329 |
| 0.0669 | 83.0 | 166 | 0.6865 | 0.5006 | 0.6865 | 0.8285 |
| 0.0669 | 84.0 | 168 | 0.6706 | 0.5135 | 0.6706 | 0.8189 |
| 0.0652 | 85.0 | 170 | 0.6746 | 0.5186 | 0.6746 | 0.8213 |
| 0.0652 | 86.0 | 172 | 0.7008 | 0.5125 | 0.7008 | 0.8371 |
| 0.0652 | 87.0 | 174 | 0.7165 | 0.4873 | 0.7165 | 0.8464 |
| 0.0652 | 88.0 | 176 | 0.7140 | 0.4873 | 0.7140 | 0.8450 |
| 0.0652 | 89.0 | 178 | 0.7087 | 0.4841 | 0.7087 | 0.8418 |
| 0.068 | 90.0 | 180 | 0.6997 | 0.5012 | 0.6997 | 0.8365 |
| 0.068 | 91.0 | 182 | 0.6954 | 0.4941 | 0.6954 | 0.8339 |
| 0.068 | 92.0 | 184 | 0.6945 | 0.4998 | 0.6945 | 0.8334 |
| 0.068 | 93.0 | 186 | 0.6864 | 0.4993 | 0.6864 | 0.8285 |
| 0.068 | 94.0 | 188 | 0.6816 | 0.5024 | 0.6816 | 0.8256 |
| 0.0606 | 95.0 | 190 | 0.6798 | 0.5045 | 0.6798 | 0.8245 |
| 0.0606 | 96.0 | 192 | 0.6796 | 0.5014 | 0.6796 | 0.8244 |
| 0.0606 | 97.0 | 194 | 0.6798 | 0.5001 | 0.6798 | 0.8245 |
| 0.0606 | 98.0 | 196 | 0.6800 | 0.5021 | 0.6800 | 0.8246 |
| 0.0606 | 99.0 | 198 | 0.6802 | 0.5021 | 0.6802 | 0.8248 |
| 0.0613 | 100.0 | 200 | 0.6804 | 0.5021 | 0.6804 | 0.8249 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
memevis/nano15
|
memevis
| 2025-02-01T10:57:33Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T10:52:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
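In place of the missing snippet, a minimal sketch of running one chat turn with this checkpoint via the standard 🤗 Transformers APIs might look like the following; the chat-template call assumes the tokenizer ships a template, which this card does not confirm.
```python
# Hedged sketch: load the model and run one chat turn (assumes a bundled chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "memevis/nano15"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```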
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
XeTute/AURORA-V1-1.1B-GGUF
|
XeTute
| 2025-02-01T10:51:24Z | 6 | 4 |
GGUF
|
[
"GGUF",
"gguf",
"conversational",
"chat",
"roleplay",
"text-generation",
"en",
"es",
"dataset:XeTute/Small-Medium-Conversation-Multilingual",
"dataset:XeTute/Conversational-Small",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-20T11:04:22Z |
---
license: mit
license_name: xt-aurora-license
license_link: LICENSE
language:
- en
- es
tags:
- conversational
- chat
- roleplay
library_name: GGUF
pipeline_tag: text-generation
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
datasets:
- XeTute/Small-Medium-Conversation-Multilingual
- XeTute/Conversational-Small
---

**Note**<br>
With the release of Meta's LLaMA 3.2 1B, this model got outperformed significantly. Since we don't have a lot of GPU power or money to further train this or another model to even come close to Meta's models, we recommend using theirs over ours.
We, XeTute, introduce AURORA V1.0 - a humorous, efficient, smart (for its size) and mostly unbiased language model (consider it a virtual child with a bunch of knowledge =); biases were largely removed after training through some simple techniques.
**Intended use cases:**
- Next-Word prediction for mobile devices:
- - This model can be reliably packaged into a keyboard app to help make Next-Word suggestions more accurate (for performance, INT4 or lower might be smart)
- Conversations:
- - AURORA can engage in conversations using the Vicuna format, remember to replace "ASSISTANT" with "AURORA" though.
- - AURORA can engage in SFW roleplay with simple character definitions. It wasn't trained on NSFW.
- - AURORA can engage in simple, short Q&A. It was trained on factual data too, which means it performs well for its size.
**Training:**
- Trained for two months.
- Dataset created by XeTute, and translated using different freelancing services.
- Dataset included:
- - Mathematic Q&A
- - Logic Q&A
- - One-Page stories and roleplays with very brief character definitions
- ADAM as an optimizer.
Altogether, the model was trained on an additional 20B tokens.
<a href='https://ko-fi.com/C0C2ZXNON' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
Note:
- All previous beta versions of this series of SLMs were deleted, because almost no downloads were made.
- V1.0 is the last model in this series that will be published, due to too little community activity.
Recommended settings:
- Temperature 0.1 - 0.4 is stable.
- A context length of 2048 (base) to 4096 (RoPE) will work well for story-telling, role-playing and simple conversations.
- Output length: 256 works very stably, but you can extend to 512. Anything beyond that point is risky; text might become repetitious.
- A system prompt that works well can be found under "Files and versions" => "chat_template". Just copy and paste it into the system prompt or add it before your first message.
- Chat Format:
```For roleplay:
{name of your roleplay}: {input}
{name of AURORA's character}: {output}
```
or,
```For normal chatting:
USER: {input}
AURORA: {output}
```
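As a usage illustration (not part of the original instructions), the recommended settings can be applied with the llama-cpp-python bindings roughly as follows; the GGUF filename is a placeholder, not a file name confirmed by this repository.
```python
# Hedged sketch: run AURORA with llama-cpp-python using the settings recommended above.
from llama_cpp import Llama

llm = Llama(
    model_path="aurora-v1-1.1b.gguf",  # placeholder; use the GGUF file you actually downloaded
    n_ctx=2048,                        # base context length recommended above
)

prompt = "USER: Hello, how are you?\nAURORA:"
out = llm(prompt, max_tokens=256, temperature=0.3, stop=["USER:"])
print(out["choices"][0]["text"])
```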
Chat examples using KoboldCPP and the settings recommended above:


Note: a roleplay where you directly pass character definitions and a starting scenario will work much better; this is just an example.
We wish you a friendly chat with AURORA.
|
Kuongan/cs221-xlnet-large-cased-eng-pt
|
Kuongan
| 2025-02-01T10:51:05Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-large-cased",
"base_model:finetune:xlnet/xlnet-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-01T10:07:38Z |
---
library_name: transformers
license: mit
base_model: xlnet/xlnet-large-cased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: cs221-xlnet-large-cased-eng-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs221-xlnet-large-cased-eng-pt
This model is a fine-tuned version of [xlnet/xlnet-large-cased](https://huggingface.co/xlnet/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5460
- F1: 0.7566
- Roc Auc: 0.8089
- Accuracy: 0.4828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5889 | 1.0 | 87 | 0.5873 | 0.1408 | 0.5 | 0.1207 |
| 0.4707 | 2.0 | 174 | 0.4159 | 0.6512 | 0.7304 | 0.3966 |
| 0.3648 | 3.0 | 261 | 0.3671 | 0.6550 | 0.7572 | 0.4483 |
| 0.2675 | 4.0 | 348 | 0.3692 | 0.7085 | 0.7779 | 0.4397 |
| 0.1929 | 5.0 | 435 | 0.3821 | 0.7077 | 0.7781 | 0.4483 |
| 0.1407 | 6.0 | 522 | 0.4573 | 0.7087 | 0.7753 | 0.4224 |
| 0.097 | 7.0 | 609 | 0.4498 | 0.7392 | 0.8005 | 0.4569 |
| 0.0603 | 8.0 | 696 | 0.4655 | 0.7396 | 0.8002 | 0.4483 |
| 0.0455 | 9.0 | 783 | 0.4833 | 0.7472 | 0.8075 | 0.4483 |
| 0.0277 | 10.0 | 870 | 0.5366 | 0.7338 | 0.7972 | 0.4655 |
| 0.0254 | 11.0 | 957 | 0.5452 | 0.7429 | 0.8051 | 0.4569 |
| 0.0138 | 12.0 | 1044 | 0.5668 | 0.7460 | 0.8062 | 0.4655 |
| 0.0128 | 13.0 | 1131 | 0.5460 | 0.7566 | 0.8089 | 0.4828 |
| 0.0072 | 14.0 | 1218 | 0.5875 | 0.7551 | 0.8117 | 0.4828 |
| 0.0058 | 15.0 | 1305 | 0.6071 | 0.7474 | 0.8038 | 0.4655 |
| 0.0064 | 16.0 | 1392 | 0.5952 | 0.7531 | 0.8120 | 0.4828 |
| 0.005 | 17.0 | 1479 | 0.5976 | 0.7468 | 0.8041 | 0.4655 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
prxy5604/1f6c9228-30d8-4992-981d-44f405654a1e
|
prxy5604
| 2025-02-01T10:48:19Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:27:49Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f6c9228-30d8-4992-981d-44f405654a1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22587293b779bc55_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22587293b779bc55_train_data.json
type:
field_input: content
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/1f6c9228-30d8-4992-981d-44f405654a1e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/22587293b779bc55_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1f6c9228-30d8-4992-981d-44f405654a1e
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3629
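Since this repository holds a LoRA adapter rather than full weights, a minimal sketch of using it on top of the base model with the standard peft/transformers APIs might look like the following (not part of the original card; the prompt is illustrative).
```python
# Hedged sketch: attach this LoRA adapter to its base model and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Math-1.5B-Instruct"                 # base model named in this card
adapter_id = "prxy5604/1f6c9228-30d8-4992-981d-44f405654a1e"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)           # load the fine-tuned adapter weights

inputs = tokenizer("Summarize the following article: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```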
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9796 | 0.0136 | 1 | 1.1896 |
| 0.5988 | 0.6803 | 50 | 0.5322 |
| 0.3908 | 1.3639 | 100 | 0.4067 |
| 0.3988 | 2.0476 | 150 | 0.3676 |
| 0.341 | 2.7279 | 200 | 0.3629 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Kuongan/cs221-roberta-large-eng-pt
|
Kuongan
| 2025-02-01T10:48:01Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-01T10:09:50Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: cs221-roberta-large-eng-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs221-roberta-large-eng-pt
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5690
- F1: 0.7598
- Roc Auc: 0.8118
- Accuracy: 0.5086
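As an illustration only, a sketch of running inference with this checkpoint could look like the following; the sigmoid-plus-threshold step is an assumption based on the multi-label-style metrics (F1, ROC AUC) reported above, not something stated in the card.
```python
# Hedged sketch: multi-label inference, assuming a sigmoid/BCE-style classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Kuongan/cs221-roberta-large-eng-pt"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "I can't believe how well this turned out!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]    # one probability per label
predicted = (probs > 0.5).nonzero().flatten().tolist()  # 0.5 threshold is an assumption
print(predicted, probs.tolist())
```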
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3966 | 1.0 | 173 | 0.3720 | 0.6785 | 0.7508 | 0.4224 |
| 0.3263 | 2.0 | 346 | 0.3824 | 0.7098 | 0.7742 | 0.4052 |
| 0.2298 | 3.0 | 519 | 0.3525 | 0.7210 | 0.7832 | 0.4569 |
| 0.1699 | 4.0 | 692 | 0.3996 | 0.6968 | 0.7673 | 0.4224 |
| 0.115 | 5.0 | 865 | 0.4215 | 0.7371 | 0.8025 | 0.4655 |
| 0.0622 | 6.0 | 1038 | 0.4543 | 0.7425 | 0.8002 | 0.4741 |
| 0.0609 | 7.0 | 1211 | 0.4787 | 0.7399 | 0.8028 | 0.4741 |
| 0.0344 | 8.0 | 1384 | 0.5559 | 0.7326 | 0.7927 | 0.4914 |
| 0.0205 | 9.0 | 1557 | 0.5545 | 0.7486 | 0.8052 | 0.4828 |
| 0.0153 | 10.0 | 1730 | 0.5612 | 0.7528 | 0.8131 | 0.4914 |
| 0.0082 | 11.0 | 1903 | 0.5690 | 0.7598 | 0.8118 | 0.5086 |
| 0.0038 | 12.0 | 2076 | 0.6239 | 0.7358 | 0.7974 | 0.4655 |
| 0.0047 | 13.0 | 2249 | 0.6296 | 0.7567 | 0.8072 | 0.5086 |
| 0.0025 | 14.0 | 2422 | 0.6246 | 0.7448 | 0.8028 | 0.5 |
| 0.0018 | 15.0 | 2595 | 0.6347 | 0.7403 | 0.8000 | 0.4828 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
botenius/403c3ca2-b32d-44e2-97b9-9435f55d3c2a
|
botenius
| 2025-02-01T10:45:24Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:35:29Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 403c3ca2-b32d-44e2-97b9-9435f55d3c2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b6a7ed78887b72a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b6a7ed78887b72a_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/403c3ca2-b32d-44e2-97b9-9435f55d3c2a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9b6a7ed78887b72a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 403c3ca2-b32d-44e2-97b9-9435f55d3c2a
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1705 | 0.1369 | 200 | 1.1467 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
roleplaiapp/L3.3-Nevoria-R1-70b-Q8_0-GGUF
|
roleplaiapp
| 2025-02-01T10:45:23Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"70b",
"8-bit",
"Q8_0",
"l33",
"llama-cpp",
"nevoria",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T10:41:39Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 70b
- 8-bit
- Q8_0
- gguf
- l33
- llama-cpp
- nevoria
- text-generation
---
# roleplaiapp/L3.3-Nevoria-R1-70b-Q8_0-GGUF
**Repo:** `roleplaiapp/L3.3-Nevoria-R1-70b-Q8_0-GGUF`
**Original Model:** `L3.3-Nevoria-R1-70b`
**Quantized File:** `L3.3-Nevoria-R1-70b-Q8_0/L3.3-Nevoria-R1-70b-Q8_0-00001-of-00002.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q8_0`
## Overview
This is a GGUF Q8_0 quantized version of L3.3-Nevoria-R1-70b.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
brew35/4845b9c0-e250-46ad-8be5-279c0c4793a0
|
brew35
| 2025-02-01T10:45:06Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:35:35Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4845b9c0-e250-46ad-8be5-279c0c4793a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b6a7ed78887b72a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b6a7ed78887b72a_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/4845b9c0-e250-46ad-8be5-279c0c4793a0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/9b6a7ed78887b72a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4845b9c0-e250-46ad-8be5-279c0c4793a0
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3637 | 0.2738 | 200 | 1.1328 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ancient41/24950af4-6235-4c35-aec3-bccc6fb50be7
|
ancient41
| 2025-02-01T10:43:47Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-01T10:25:12Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 24950af4-6235-4c35-aec3-bccc6fb50be7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1f363d38b0a18fae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1f363d38b0a18fae_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ancient41/24950af4-6235-4c35-aec3-bccc6fb50be7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/1f363d38b0a18fae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11d6d6d8-0f3b-4480-adc8-58ddc86a0ed7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11d6d6d8-0f3b-4480-adc8-58ddc86a0ed7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 24950af4-6235-4c35-aec3-bccc6fb50be7
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.3816 | 0.0012 | 1 | 1.7327 |
| 14.1477 | 0.0587 | 50 | 1.6138 |
| 9.0052 | 0.1173 | 100 | 1.5306 |
| 8.9556 | 0.1760 | 150 | 1.4662 |
| 10.7712 | 0.2346 | 200 | 1.4376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung03/6f5686a7-fdf2-4a68-8543-315d8a47d0a3
|
nhung03
| 2025-02-01T10:43:30Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:28:15Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6f5686a7-fdf2-4a68-8543-315d8a47d0a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22587293b779bc55_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22587293b779bc55_train_data.json
type:
field_input: content
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/6f5686a7-fdf2-4a68-8543-315d8a47d0a3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/22587293b779bc55_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6f5686a7-fdf2-4a68-8543-315d8a47d0a3
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.57 | 0.6809 | 200 | 0.7222 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cimol/d4ae43f7-ec36-45b0-9cd6-60e0fc0b7214
|
cimol
| 2025-02-01T10:43:09Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:39:04Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4ae43f7-ec36-45b0-9cd6-60e0fc0b7214
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- 9b6a7ed78887b72a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b6a7ed78887b72a_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/d4ae43f7-ec36-45b0-9cd6-60e0fc0b7214
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 0.1
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/9b6a7ed78887b72a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 16
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d4ae43f7-ec36-45b0-9cd6-60e0fc0b7214
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0014 | 1 | nan |
| 0.0 | 0.0684 | 50 | nan |
| 0.0 | 0.1369 | 100 | nan |
| 0.0 | 0.2053 | 150 | nan |
| 0.0 | 0.2738 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
minhtrannnn/9cf7b0fb-d200-4025-9b5a-6e54183ec18a
|
minhtrannnn
| 2025-02-01T10:42:37Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:28:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9cf7b0fb-d200-4025-9b5a-6e54183ec18a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22587293b779bc55_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22587293b779bc55_train_data.json
type:
field_input: content
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/9cf7b0fb-d200-4025-9b5a-6e54183ec18a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/22587293b779bc55_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6863ca7d-dba1-4f20-86fd-f4e741cc8950
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9cf7b0fb-d200-4025-9b5a-6e54183ec18a
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5683 | 0.6809 | 200 | 0.7204 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ajku2199/Llama-2-7b-hf_abstract_prob7_dataset1_n1000_seed7_epochs10_batch8_qlora
|
ajku2199
| 2025-02-01T10:42:12Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2025-01-10T14:53:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
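For reference, this configuration corresponds to the following `transformers`/`bitsandbytes` setup when loading the base model (a sketch only; the Llama-2-7b base is inferred from the repository name and is not stated explicitly in this card):
```python
# Sketch of the 4-bit NF4 loading setup described above (assumed base model).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumption based on the repo name
    quantization_config=bnb_config,
    device_map="auto",
)
```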
### Framework versions
- PEFT 0.4.0
|
clarxus/4186fc7d-8853-481a-9bb3-e6dee0d50053
|
clarxus
| 2025-02-01T10:40:26Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:35:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4186fc7d-8853-481a-9bb3-e6dee0d50053
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b6a7ed78887b72a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b6a7ed78887b72a_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: clarxus/4186fc7d-8853-481a-9bb3-e6dee0d50053
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/9b6a7ed78887b72a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: f3a95ab3-eeb4-4c4e-bb7c-1b3bd0a29c18
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4186fc7d-8853-481a-9bb3-e6dee0d50053
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0027 | 1 | 1.1999 |
| 1.1705 | 0.0246 | 9 | 1.1993 |
| 1.1512 | 0.0493 | 18 | 1.1932 |
| 1.1106 | 0.0739 | 27 | 1.1801 |
| 1.2376 | 0.0986 | 36 | 1.1651 |
| 1.1479 | 0.1232 | 45 | 1.1524 |
| 1.0341 | 0.1478 | 54 | 1.1433 |
| 1.1297 | 0.1725 | 63 | 1.1367 |
| 1.1057 | 0.1971 | 72 | 1.1329 |
| 1.1072 | 0.2218 | 81 | 1.1307 |
| 1.1325 | 0.2464 | 90 | 1.1298 |
| 1.1523 | 0.2710 | 99 | 1.1296 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
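One common follow-up step for an adapter like this is merging it into the base model so it can be served without PEFT at inference time. A hedged sketch, assuming the adapter checkpoint on the Hub is the final one:
```python
# Sketch: merge the LoRA weights into unsloth/SmolLM-360M-Instruct and save a
# standalone copy. merge_and_unload() returns the plain transformers model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-360M-Instruct")
merged = PeftModel.from_pretrained(base, "clarxus/4186fc7d-8853-481a-9bb3-e6dee0d50053").merge_and_unload()

merged.save_pretrained("smollm-360m-merged")
AutoTokenizer.from_pretrained("unsloth/SmolLM-360M-Instruct").save_pretrained("smollm-360m-merged")
```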
|
roleplaiapp/L3.3-Nevoria-R1-70b-Q6_K-GGUF
|
roleplaiapp
| 2025-02-01T10:39:50Z | 24 | 0 |
transformers
|
[
"transformers",
"gguf",
"6-bit",
"70b",
"Q6_K",
"l33",
"llama-cpp",
"nevoria",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T10:35:22Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 6-bit
- 70b
- Q6_K
- gguf
- l33
- llama-cpp
- nevoria
- text-generation
---
# roleplaiapp/L3.3-Nevoria-R1-70b-Q6_K-GGUF
**Repo:** `roleplaiapp/L3.3-Nevoria-R1-70b-Q6_K-GGUF`
**Original Model:** `L3.3-Nevoria-R1-70b`
**Quantized File:** `L3.3-Nevoria-R1-70b-Q6_K/L3.3-Nevoria-R1-70b-Q6_K-00001-of-00002.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
## Overview
This is a GGUF Q6_K quantized version of L3.3-Nevoria-R1-70b.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
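A minimal sketch of running this quant with `llama-cpp-python` (an assumption about your setup, not an official instruction): because the Q6_K file is split into two shards, point the loader at the first shard and keep both files in the same directory so llama.cpp can pick up the rest.
```python
# Sketch: local inference with llama-cpp-python after downloading both shards.
from llama_cpp import Llama

llm = Llama(
    model_path="L3.3-Nevoria-R1-70b-Q6_K/L3.3-Nevoria-R1-70b-Q6_K-00001-of-00002.gguf",
    n_ctx=4096,        # adjust to available memory
    n_gpu_layers=-1,   # offload as many layers as fit on the GPU
)
result = llm("Write a short scene-setting paragraph for a fantasy roleplay.", max_tokens=128)
print(result["choices"][0]["text"])
```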
|
lesso17/6b390319-bab5-44be-943f-aa0dc3786961
|
lesso17
| 2025-02-01T10:36:40Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:55:11Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b390319-bab5-44be-943f-aa0dc3786961
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 3fafaf8cf25404aa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3fafaf8cf25404aa_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/6b390319-bab5-44be-943f-aa0dc3786961
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3fafaf8cf25404aa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41d8118f-d704-40f9-b279-287f5d2979de
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 41d8118f-d704-40f9-b279-287f5d2979de
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6b390319-bab5-44be-943f-aa0dc3786961
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2936 | 0.0338 | 200 | 0.3393 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
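The custom dataset format in the config above maps `question` to the instruction slot and `context` to the input slot using the template `'{instruction} {input}'`. A small illustrative sketch of that rendering (the exact whitespace and EOS handling is internal to axolotl):
```python
# Sketch of how one training example is rendered into a prompt; the `answer`
# field is what the model is trained to produce after this prompt.
def render_prompt(example: dict) -> str:
    return "{instruction} {input}".format(
        instruction=example["question"], input=example["context"]
    )

example = {
    "question": "What does the passage say about context length?",
    "context": "Yarn-Mistral-7b-128k extends the usable context window to 128k tokens.",
    "answer": "It supports a 128k-token context window.",
}
print(render_prompt(example))
```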
|
alchemist69/140777b8-2c71-4650-a7cf-595c87afcbc8
|
alchemist69
| 2025-02-01T10:35:33Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:12:52Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 140777b8-2c71-4650-a7cf-595c87afcbc8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bbdb7d345038de31_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bbdb7d345038de31_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/140777b8-2c71-4650-a7cf-595c87afcbc8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/bbdb7d345038de31_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: de370e54-9b1b-408e-9116-e240c8432fd9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: de370e54-9b1b-408e-9116-e240c8432fd9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 140777b8-2c71-4650-a7cf-595c87afcbc8
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 161
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.7111 | 0.0187 | 1 | 1.2936 |
| 2.019 | 0.9346 | 50 | 0.6054 |
| 1.4201 | 1.8692 | 100 | 0.5421 |
| 1.2055 | 2.8037 | 150 | 0.5477 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
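The learning-rate behaviour described above (cosine decay after 10 warmup steps) can be reproduced with the standard `transformers` scheduler helper; a small sketch using the configured 200 maximum optimizer steps (training itself stopped earlier, per the results table):
```python
# Sketch: cosine schedule with warmup, matching warmup_steps=10 and max_steps=200.
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # dummy parameter for illustration
optimizer = torch.optim.AdamW(params, lr=1e-4, betas=(0.9, 0.95), eps=1e-5)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=10, num_training_steps=200)

lrs = []
for _ in range(200):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

print(lrs[9], lrs[99], lrs[-1])  # peak LR after warmup, mid-run, end of schedule
```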
|
earnxus/3686a998-5956-4e85-a2bb-a5c4ca3b48da
|
earnxus
| 2025-02-01T10:32:02Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M-Instruct",
"base_model:adapter:unsloth/SmolLM-135M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T10:15:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3686a998-5956-4e85-a2bb-a5c4ca3b48da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6a3c7d274205c36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6a3c7d274205c36_train_data.json
type:
field_input: context
field_instruction: alpaca_prompt_text
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/3686a998-5956-4e85-a2bb-a5c4ca3b48da
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e6a3c7d274205c36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 43d525c3-01ed-41a2-9424-8b3b5f9b62d7
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 43d525c3-01ed-41a2-9424-8b3b5f9b62d7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 3686a998-5956-4e85-a2bb-a5c4ca3b48da
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2378 | 0.0280 | 200 | 0.6753 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lapsel/halt-ll_qwen25_7B_full_steam_qa_10000
|
lapsel
| 2025-02-01T10:30:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T02:14:19Z |
---
library_name: transformers
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: halt-ll_qwen25_7B_full_steam_qa_10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# halt-ll_qwen25_7B_full_steam_qa_10000
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
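Since this is a full fine-tune rather than an adapter, it can be loaded directly with `transformers`. A hedged sketch of chat-style generation (the prompt content is illustrative only):
```python
# Sketch: load the full model and run one chat turn via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lapsel/halt-ll_qwen25_7B_full_steam_qa_10000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Answer briefly: what is photosynthesis?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```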
|
great0001/9ae12a9e-6310-4aac-986f-6cdb6d115d66
|
great0001
| 2025-02-01T10:29:08Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-01T10:25:26Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ae12a9e-6310-4aac-986f-6cdb6d115d66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1f363d38b0a18fae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1f363d38b0a18fae_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/9ae12a9e-6310-4aac-986f-6cdb6d115d66
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1f363d38b0a18fae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11d6d6d8-0f3b-4480-adc8-58ddc86a0ed7
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11d6d6d8-0f3b-4480-adc8-58ddc86a0ed7
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9ae12a9e-6310-4aac-986f-6cdb6d115d66
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.6048 |
| 4.2718 | 0.0073 | 50 | 1.5159 |
| 3.8279 | 0.0147 | 100 | 1.4751 |
| 3.7744 | 0.0220 | 150 | 1.4533 |
| 3.9004 | 0.0293 | 200 | 1.4521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
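With `peft` installed, recent `transformers` releases can resolve a LoRA adapter repository directly in a pipeline and pull in the `bigcode/starcoder2-3b` base underneath it; a brief sketch (the prompt is illustrative):
```python
# Sketch: text generation through the adapter repo id; transformers reads
# adapter_config.json and loads the StarCoder2 base model automatically.
from transformers import pipeline

generator = pipeline("text-generation", model="great0001/9ae12a9e-6310-4aac-986f-6cdb6d115d66")
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```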
|
roleplaiapp/Selene-1-Mini-Llama-3.1-8B-f16-GGUF
|
roleplaiapp
| 2025-02-01T10:21:49Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"f16",
"llama",
"llama-cpp",
"mini",
"selene",
"text-generation",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-02-01T10:20:50Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- f16
- gguf
- llama
- llama-cpp
- mini
- selene
- text-generation
---
# roleplaiapp/Selene-1-Mini-Llama-3.1-8B-f16-GGUF
**Repo:** `roleplaiapp/Selene-1-Mini-Llama-3.1-8B-f16-GGUF`
**Original Model:** `Selene-1-Mini-Llama-3.1-8B`
**Quantized File:** `Selene-1-Mini-Llama-3.1-8B-f16.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `f16`
## Overview
This is a GGUF f16 quantized version of Selene-1-Mini-Llama-3.1-8B.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
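A minimal sketch of pulling this f16 GGUF straight from the Hub with `llama-cpp-python` (assumes `huggingface_hub` is installed; the prompt is illustrative):
```python
# Sketch: download the GGUF via the Hub helper and run a chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Selene-1-Mini-Llama-3.1-8B-f16-GGUF",
    filename="Selene-1-Mini-Llama-3.1-8B-f16.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Score this answer for faithfulness to its source, 1-5, with a short rationale."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```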
|
hongngo/9d069e14-343a-46a1-aa56-40d85d483a32
|
hongngo
| 2025-02-01T10:15:52Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:22:43Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d069e14-343a-46a1-aa56-40d85d483a32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6946a575c01504bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6946a575c01504bd_train_data.json
type:
field_input: dialogue
field_instruction: rendered_input
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/9d069e14-343a-46a1-aa56-40d85d483a32
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6946a575c01504bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0338e1a-9767-499b-b9af-44008ae05e25
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e0338e1a-9767-499b-b9af-44008ae05e25
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9d069e14-343a-46a1-aa56-40d85d483a32
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7472 | 0.0147 | 200 | 0.8681 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
sky-2002/SmolLM-135M-Instruct-bespoke-ft-v0
|
sky-2002
| 2025-02-01T10:15:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T10:15:10Z |
---
base_model: HuggingFaceTB/SmolLM-135M-Instruct
library_name: transformers
model_name: SmolLM-135M-Instruct-bespoke-ft-v0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM-135M-Instruct-bespoke-ft-v0
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sky-2002/SmolLM-135M-Instruct-bespoke-ft-v0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aathatte2002-indian-institute-of-technology/SmolLM-135M-finetune/runs/tu93q8mu)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.4.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
thangla01/2fc294e4-2d79-4801-a8a8-5a90c8f701a0
|
thangla01
| 2025-02-01T10:14:31Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:21:46Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fc294e4-2d79-4801-a8a8-5a90c8f701a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6946a575c01504bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6946a575c01504bd_train_data.json
type:
field_input: dialogue
field_instruction: rendered_input
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/2fc294e4-2d79-4801-a8a8-5a90c8f701a0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6946a575c01504bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0338e1a-9767-499b-b9af-44008ae05e25
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e0338e1a-9767-499b-b9af-44008ae05e25
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2fc294e4-2d79-4801-a8a8-5a90c8f701a0
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7394 | 0.0147 | 200 | 0.8678 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
botenius/2fba5899-a0bb-4245-9745-ab052844b2cd
|
botenius
| 2025-02-01T10:12:53Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:22:26Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fba5899-a0bb-4245-9745-ab052844b2cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6946a575c01504bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6946a575c01504bd_train_data.json
type:
field_input: dialogue
field_instruction: rendered_input
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/2fba5899-a0bb-4245-9745-ab052844b2cd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6946a575c01504bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e0338e1a-9767-499b-b9af-44008ae05e25
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: e0338e1a-9767-499b-b9af-44008ae05e25
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 2fba5899-a0bb-4245-9745-ab052844b2cd
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1352 | 0.0147 | 200 | 0.8236 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung03/7ee4d222-c0bc-4e62-98f6-b0be0fb210d8
|
nhung03
| 2025-02-01T10:12:11Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:21:52Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ee4d222-c0bc-4e62-98f6-b0be0fb210d8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6946a575c01504bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6946a575c01504bd_train_data.json
type:
field_input: dialogue
field_instruction: rendered_input
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/7ee4d222-c0bc-4e62-98f6-b0be0fb210d8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6946a575c01504bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0338e1a-9767-499b-b9af-44008ae05e25
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e0338e1a-9767-499b-b9af-44008ae05e25
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7ee4d222-c0bc-4e62-98f6-b0be0fb210d8
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7301 | 0.0147 | 200 | 0.8678 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk1205/d759762f-e5a3-4437-8738-7a1a3a5ddb5b
|
kostiantynk1205
| 2025-02-01T10:12:04Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T09:55:57Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d759762f-e5a3-4437-8738-7a1a3a5ddb5b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3fafaf8cf25404aa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3fafaf8cf25404aa_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/d759762f-e5a3-4437-8738-7a1a3a5ddb5b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3fafaf8cf25404aa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41d8118f-d704-40f9-b279-287f5d2979de
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 41d8118f-d704-40f9-b279-287f5d2979de
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d759762f-e5a3-4437-8738-7a1a3a5ddb5b
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5157 |
| 1.5943 | 0.0085 | 50 | 0.3710 |
| 1.4004 | 0.0169 | 100 | 0.3538 |
| 1.3368 | 0.0254 | 150 | 0.3406 |
| 1.4978 | 0.0338 | 200 | 0.3372 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ancient41/a2a875c2-de00-4d16-9107-621ce2f00feb
|
ancient41
| 2025-02-01T10:09:51Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-02-01T09:21:52Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a2a875c2-de00-4d16-9107-621ce2f00feb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b917ee80f66720cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b917ee80f66720cc_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ancient41/a2a875c2-de00-4d16-9107-621ce2f00feb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b917ee80f66720cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a2a875c2-de00-4d16-9107-621ce2f00feb
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5112 | 0.0004 | 1 | 0.9924 |
| 0.1028 | 0.0215 | 50 | 0.0564 |
| 0.0573 | 0.0429 | 100 | 0.0454 |
| 0.0608 | 0.0644 | 150 | 0.0352 |
| 0.1269 | 0.0858 | 200 | 0.0335 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
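The config above combines `eval_steps`/`save_steps` of 50 with an early-stopping patience of 5; axolotl wires this up internally, but an equivalent raw `transformers` Trainer setup would look roughly like this sketch (names and omitted arguments are placeholders):
```python
# Sketch only: evaluate and checkpoint every 50 steps, keep the best model,
# and stop after 5 evaluations without improvement in eval_loss.
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="miner_id_24",
    max_steps=200,
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=5)])
```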
|
outlookAi/27J3zTntcL
|
outlookAi
| 2025-02-01T10:09:18Z | 13 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-01T09:48:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Kwanrudee Model
---
# 27J3zTntcL
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Kwanrudee Model` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/27J3zTntcL', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
alchemist69/ef1a2a6e-9e61-4207-a80f-43e779731848
|
alchemist69
| 2025-02-01T10:06:17Z | 20 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
] | null | 2025-02-01T09:26:02Z |
---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ef1a2a6e-9e61-4207-a80f-43e779731848
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b5b3e5b8099870e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b5b3e5b8099870e_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/ef1a2a6e-9e61-4207-a80f-43e779731848
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9b5b3e5b8099870e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a332f332-5c5c-49ae-9e2e-af878cf04d49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a332f332-5c5c-49ae-9e2e-af878cf04d49
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ef1a2a6e-9e61-4207-a80f-43e779731848
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9239 | 0.0065 | 1 | nan |
| 1.656 | 0.3268 | 50 | nan |
| 1.6442 | 0.6536 | 100 | nan |
| 0.0 | 0.9804 | 150 | nan |
| 1.4541 | 1.3072 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
memevis/nano10 | memevis | 2025-02-01T10:02:24Z | 70 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T09:56:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
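A minimal sketch, assuming the standard 🤗 transformers causal-LM API; the prompt, device placement, and generation settings below are placeholders rather than values documented by this card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "memevis/nano10"  # this repository

# Load the tokenizer and weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt and generation settings
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```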
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brew35/23f2dcac-d4d1-4ae6-a43f-d96d159df543 | brew35 | 2025-02-01T09:59:07Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M-Instruct", "base_model:adapter:unsloth/SmolLM-135M-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-02-01T09:46:01Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 23f2dcac-d4d1-4ae6-a43f-d96d159df543
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e6a3c7d274205c36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e6a3c7d274205c36_train_data.json
type:
field_input: context
field_instruction: alpaca_prompt_text
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/23f2dcac-d4d1-4ae6-a43f-d96d159df543
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e6a3c7d274205c36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 43d525c3-01ed-41a2-9424-8b3b5f9b62d7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 43d525c3-01ed-41a2-9424-8b3b5f9b62d7
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 23f2dcac-d4d1-4ae6-a43f-d96d159df543
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5893
## Model description
More information needed
## Intended uses & limitations
More information needed
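This repository holds LoRA adapter weights trained with the configuration above; a minimal inference sketch, assuming the adapter is applied on top of the base checkpoint with 🤗 peft (prompt and generation settings are placeholders).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-135M-Instruct"
adapter_id = "brew35/23f2dcac-d4d1-4ae6-a43f-d96d159df543"

# Load the base model, then attach the LoRA adapter from this repository
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("Explain what a LoRA adapter is in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```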
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1457 | 0.0560 | 200 | 0.5893 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
memevis/nano14 | memevis | 2025-02-01T09:55:52Z | 25 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T09:50:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
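A minimal sketch using the `text-generation` pipeline; the prompt and sampling settings are placeholders, since this card does not document them.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="memevis/nano14")

# Placeholder prompt and sampling settings
result = generator("Write a haiku about the sea.", max_new_tokens=64, do_sample=True)
print(result[0]["generated_text"])
```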
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baby-dev/273c636d-fc41-4a21-8760-606b4a1a605b | baby-dev | 2025-02-01T09:55:05Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M-Instruct", "base_model:adapter:unsloth/SmolLM-135M-Instruct", "license:apache-2.0", "region:us"] | null | 2025-02-01T09:47:58Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 273c636d-fc41-4a21-8760-606b4a1a605b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 273c636d-fc41-4a21-8760-606b4a1a605b
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Nexspear/ed490de5-6ce0-4282-88dd-397a4944a0ec | Nexspear | 2025-02-01T09:55:01Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-02-01T09:40:35Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed490de5-6ce0-4282-88dd-397a4944a0ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b7cdc27ddaec015e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b7cdc27ddaec015e_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/ed490de5-6ce0-4282-88dd-397a4944a0ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b7cdc27ddaec015e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 537e2746-bdbf-433d-87a7-94617348b3f7
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 537e2746-bdbf-433d-87a7-94617348b3f7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed490de5-6ce0-4282-88dd-397a4944a0ec
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5724
## Model description
More information needed
## Intended uses & limitations
More information needed
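This repository holds LoRA adapter weights for a code-oriented base model; a minimal inference sketch, assuming the adapter is applied on top of the base checkpoint with 🤗 peft (device placement, prompt, and generation settings are placeholders).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Coder-7B-Instruct"
adapter_id = "Nexspear/ed490de5-6ce0-4282-88dd-397a4944a0ec"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```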
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.539 | 0.0034 | 1 | 0.8346 |
| 0.4436 | 0.1718 | 50 | 0.5972 |
| 0.4434 | 0.3436 | 100 | 0.5724 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fifxus/b67d41aa-fce4-48c4-80de-fc7e20dbcb25 | fifxus | 2025-02-01T09:52:12Z | 8 | 0 | peft | ["peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "8-bit", "bitsandbytes", "region:us"] | null | 2025-02-01T09:29:57Z |
---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b67d41aa-fce4-48c4-80de-fc7e20dbcb25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-1.3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96a2fc66c5b07ef1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96a2fc66c5b07ef1_train_data.json
type:
field_instruction: timecoded_cc
field_output: qa
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/b67d41aa-fce4-48c4-80de-fc7e20dbcb25
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/96a2fc66c5b07ef1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d3343316-7c96-4efd-ae85-68e87a921e72
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: d3343316-7c96-4efd-ae85-68e87a921e72
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b67d41aa-fce4-48c4-80de-fc7e20dbcb25
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6207 | 0.0254 | 200 | 0.7202 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/fae4c5b4-3891-4930-8fe7-6f751923dc70 | kk-aivio | 2025-02-01T09:51:40Z | 8 | 0 | peft | ["peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us"] | null | 2025-02-01T09:45:14Z |
---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fae4c5b4-3891-4930-8fe7-6f751923dc70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-1.3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96a2fc66c5b07ef1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96a2fc66c5b07ef1_train_data.json
type:
field_instruction: timecoded_cc
field_output: qa
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/fae4c5b4-3891-4930-8fe7-6f751923dc70
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/96a2fc66c5b07ef1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d3343316-7c96-4efd-ae85-68e87a921e72
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d3343316-7c96-4efd-ae85-68e87a921e72
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fae4c5b4-3891-4930-8fe7-6f751923dc70
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.2490 |
| 3.1954 | 0.0064 | 50 | 0.7762 |
| 2.9453 | 0.0127 | 100 | 0.7419 |
| 2.9891 | 0.0191 | 150 | 0.7284 |
| 2.9477 | 0.0254 | 200 | 0.7256 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
memevis/nano13 | memevis | 2025-02-01T09:51:24Z | 30 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T09:46:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
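A minimal sketch, assuming the tokenizer bundles a chat template (the `conversational` tag suggests one, but this card does not confirm it); generation settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "memevis/nano13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# This call fails if the tokenizer does not ship a chat template
messages = [{"role": "user", "content": "Summarize what a language model is in one sentence."}]
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(prompt_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```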
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memevis/nano12 | memevis | 2025-02-01T09:49:59Z | 43 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T09:44:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
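A minimal sketch, assuming the standard 🤗 transformers causal-LM API; the precision, prompt, and generation settings are placeholders, not values documented by this card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "memevis/nano12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype is an assumption; the card does not state the precision of the saved weights
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```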
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiinking/1_layer_GQA4_llama8B_model | jiinking | 2025-02-01T09:47:38Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T06:44:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
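A minimal sketch, assuming this checkpoint loads through the standard 🤗 transformers causal-LM API (the repository name suggests a modified GQA layout, which the card does not describe); prompt and generation settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiinking/1_layer_GQA4_llama8B_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```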
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
canongun/deepseek-ft | canongun | 2025-02-01T09:46:46Z | 53 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-01T08:57:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
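A minimal sketch using the `text-generation` pipeline; the prompt and generation settings are placeholders, since this card does not document a prompt format.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="canongun/deepseek-ft", device_map="auto")

# Placeholder prompt and generation settings
result = generator("Explain gradient descent in two sentences.", max_new_tokens=80)
print(result[0]["generated_text"])
```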
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
genki10/ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold1 | genki10 | 2025-02-01T09:46:13Z | 9 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-02-01T09:22:24Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1448
- Qwk: 0.4410
- Mse: 1.1439
- Rmse: 1.0695
## Model description
More information needed
## Intended uses & limitations
More information needed
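The Qwk/Mse/Rmse metrics above suggest the model predicts an essay score through a sequence-classification head; a minimal loading sketch, assuming the saved config defines that head (the card does not state whether it is a regression or classification setup).
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "genki10/ASAP_FineTuningBERT_AugV6_k2_task1_organization_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example essay paragraph to be scored.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpretation (score vs. class logits) depends on the saved head
```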
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 10.7158 | -0.0077 | 10.7133 | 3.2731 |
| No log | 2.0 | 4 | 8.3122 | 0.0 | 8.3103 | 2.8828 |
| No log | 3.0 | 6 | 6.5809 | 0.0 | 6.5791 | 2.5650 |
| No log | 4.0 | 8 | 5.2714 | 0.0219 | 5.2700 | 2.2956 |
| 6.3556 | 5.0 | 10 | 4.0074 | 0.0040 | 4.0064 | 2.0016 |
| 6.3556 | 6.0 | 12 | 3.0440 | 0.0 | 3.0432 | 1.7445 |
| 6.3556 | 7.0 | 14 | 2.4480 | -0.0037 | 2.4475 | 1.5644 |
| 6.3556 | 8.0 | 16 | 1.8615 | 0.0669 | 1.8613 | 1.3643 |
| 6.3556 | 9.0 | 18 | 1.5608 | 0.0106 | 1.5608 | 1.2493 |
| 2.4488 | 10.0 | 20 | 1.2514 | 0.0106 | 1.2515 | 1.1187 |
| 2.4488 | 11.0 | 22 | 1.1488 | 0.0106 | 1.1490 | 1.0719 |
| 2.4488 | 12.0 | 24 | 1.2662 | 0.0106 | 1.2664 | 1.1253 |
| 2.4488 | 13.0 | 26 | 1.1856 | 0.0315 | 1.1857 | 1.0889 |
| 2.4488 | 14.0 | 28 | 0.9949 | 0.0521 | 0.9950 | 0.9975 |
| 1.783 | 15.0 | 30 | 1.0346 | 0.1683 | 1.0346 | 1.0172 |
| 1.783 | 16.0 | 32 | 1.1401 | 0.1537 | 1.1401 | 1.0678 |
| 1.783 | 17.0 | 34 | 0.8525 | 0.3462 | 0.8526 | 0.9233 |
| 1.783 | 18.0 | 36 | 0.9109 | 0.3087 | 0.9109 | 0.9544 |
| 1.783 | 19.0 | 38 | 0.7036 | 0.4534 | 0.7036 | 0.8388 |
| 1.2744 | 20.0 | 40 | 0.8109 | 0.3039 | 0.8110 | 0.9006 |
| 1.2744 | 21.0 | 42 | 0.6317 | 0.4528 | 0.6318 | 0.7949 |
| 1.2744 | 22.0 | 44 | 0.8021 | 0.3146 | 0.8023 | 0.8957 |
| 1.2744 | 23.0 | 46 | 0.6614 | 0.4031 | 0.6615 | 0.8133 |
| 1.2744 | 24.0 | 48 | 0.8952 | 0.2442 | 0.8954 | 0.9463 |
| 0.6914 | 25.0 | 50 | 0.6814 | 0.3810 | 0.6814 | 0.8254 |
| 0.6914 | 26.0 | 52 | 0.8218 | 0.3288 | 0.8218 | 0.9065 |
| 0.6914 | 27.0 | 54 | 0.7396 | 0.3786 | 0.7395 | 0.8599 |
| 0.6914 | 28.0 | 56 | 0.6287 | 0.4553 | 0.6286 | 0.7928 |
| 0.6914 | 29.0 | 58 | 0.9749 | 0.2446 | 0.9749 | 0.9874 |
| 0.474 | 30.0 | 60 | 0.8070 | 0.3678 | 0.8069 | 0.8983 |
| 0.474 | 31.0 | 62 | 0.5358 | 0.5600 | 0.5356 | 0.7318 |
| 0.474 | 32.0 | 64 | 0.8668 | 0.3751 | 0.8666 | 0.9309 |
| 0.474 | 33.0 | 66 | 1.2490 | 0.1574 | 1.2491 | 1.1176 |
| 0.474 | 34.0 | 68 | 0.7519 | 0.4896 | 0.7516 | 0.8669 |
| 0.4074 | 35.0 | 70 | 0.7474 | 0.5015 | 0.7471 | 0.8643 |
| 0.4074 | 36.0 | 72 | 1.0359 | 0.2944 | 1.0358 | 1.0177 |
| 0.4074 | 37.0 | 74 | 0.7322 | 0.5270 | 0.7319 | 0.8555 |
| 0.4074 | 38.0 | 76 | 0.8105 | 0.4989 | 0.8102 | 0.9001 |
| 0.4074 | 39.0 | 78 | 1.1253 | 0.2595 | 1.1252 | 1.0607 |
| 0.2859 | 40.0 | 80 | 0.8086 | 0.5079 | 0.8083 | 0.8990 |
| 0.2859 | 41.0 | 82 | 0.8835 | 0.4552 | 0.8832 | 0.9398 |
| 0.2859 | 42.0 | 84 | 1.0433 | 0.3642 | 1.0431 | 1.0213 |
| 0.2859 | 43.0 | 86 | 0.9794 | 0.4176 | 0.9790 | 0.9895 |
| 0.2859 | 44.0 | 88 | 1.1257 | 0.3222 | 1.1254 | 1.0609 |
| 0.2135 | 45.0 | 90 | 1.0142 | 0.3960 | 1.0138 | 1.0069 |
| 0.2135 | 46.0 | 92 | 1.1155 | 0.3586 | 1.1152 | 1.0560 |
| 0.2135 | 47.0 | 94 | 1.0376 | 0.4093 | 1.0370 | 1.0183 |
| 0.2135 | 48.0 | 96 | 1.3530 | 0.2487 | 1.3527 | 1.1630 |
| 0.2135 | 49.0 | 98 | 1.3032 | 0.2987 | 1.3028 | 1.1414 |
| 0.195 | 50.0 | 100 | 1.0014 | 0.4474 | 1.0007 | 1.0003 |
| 0.195 | 51.0 | 102 | 1.1582 | 0.3548 | 1.1576 | 1.0759 |
| 0.195 | 52.0 | 104 | 1.1044 | 0.3964 | 1.1037 | 1.0506 |
| 0.195 | 53.0 | 106 | 0.9584 | 0.4740 | 0.9577 | 0.9786 |
| 0.195 | 54.0 | 108 | 1.1789 | 0.3501 | 1.1784 | 1.0855 |
| 0.1629 | 55.0 | 110 | 1.1975 | 0.3583 | 1.1969 | 1.0940 |
| 0.1629 | 56.0 | 112 | 1.1943 | 0.3572 | 1.1937 | 1.0926 |
| 0.1629 | 57.0 | 114 | 1.0032 | 0.4606 | 1.0025 | 1.0012 |
| 0.1629 | 58.0 | 116 | 1.0562 | 0.4242 | 1.0555 | 1.0274 |
| 0.1629 | 59.0 | 118 | 1.3335 | 0.3011 | 1.3329 | 1.1545 |
| 0.1505 | 60.0 | 120 | 1.2705 | 0.3531 | 1.2697 | 1.1268 |
| 0.1505 | 61.0 | 122 | 1.3065 | 0.3599 | 1.3057 | 1.1427 |
| 0.1505 | 62.0 | 124 | 1.4995 | 0.2422 | 1.4990 | 1.2243 |
| 0.1505 | 63.0 | 126 | 1.4697 | 0.2475 | 1.4692 | 1.2121 |
| 0.1505 | 64.0 | 128 | 1.1872 | 0.3841 | 1.1864 | 1.0892 |
| 0.1517 | 65.0 | 130 | 1.0728 | 0.4328 | 1.0720 | 1.0354 |
| 0.1517 | 66.0 | 132 | 1.1772 | 0.3791 | 1.1765 | 1.0847 |
| 0.1517 | 67.0 | 134 | 1.2172 | 0.3851 | 1.2165 | 1.1029 |
| 0.1517 | 68.0 | 136 | 1.1582 | 0.4303 | 1.1574 | 1.0758 |
| 0.1517 | 69.0 | 138 | 1.1800 | 0.4207 | 1.1792 | 1.0859 |
| 0.1196 | 70.0 | 140 | 1.1957 | 0.4141 | 1.1949 | 1.0931 |
| 0.1196 | 71.0 | 142 | 1.1450 | 0.4298 | 1.1442 | 1.0697 |
| 0.1196 | 72.0 | 144 | 1.2040 | 0.4223 | 1.2031 | 1.0969 |
| 0.1196 | 73.0 | 146 | 1.2350 | 0.3933 | 1.2342 | 1.1109 |
| 0.1196 | 74.0 | 148 | 1.1945 | 0.4070 | 1.1936 | 1.0925 |
| 0.1021 | 75.0 | 150 | 1.1052 | 0.4401 | 1.1043 | 1.0508 |
| 0.1021 | 76.0 | 152 | 1.1365 | 0.4121 | 1.1356 | 1.0657 |
| 0.1021 | 77.0 | 154 | 1.2479 | 0.3307 | 1.2472 | 1.1168 |
| 0.1021 | 78.0 | 156 | 1.3010 | 0.3690 | 1.3002 | 1.1403 |
| 0.1021 | 79.0 | 158 | 1.3130 | 0.4064 | 1.3120 | 1.1454 |
| 0.1123 | 80.0 | 160 | 1.3795 | 0.4023 | 1.3785 | 1.1741 |
| 0.1123 | 81.0 | 162 | 1.4750 | 0.3605 | 1.4742 | 1.2142 |
| 0.1123 | 82.0 | 164 | 1.4007 | 0.3528 | 1.4000 | 1.1832 |
| 0.1123 | 83.0 | 166 | 1.2203 | 0.3888 | 1.2195 | 1.1043 |
| 0.1123 | 84.0 | 168 | 1.0353 | 0.4809 | 1.0344 | 1.0170 |
| 0.1066 | 85.0 | 170 | 0.9608 | 0.5082 | 0.9599 | 0.9797 |
| 0.1066 | 86.0 | 172 | 0.9956 | 0.4931 | 0.9948 | 0.9974 |
| 0.1066 | 87.0 | 174 | 1.1219 | 0.4210 | 1.1210 | 1.0588 |
| 0.1066 | 88.0 | 176 | 1.2118 | 0.3898 | 1.2110 | 1.1004 |
| 0.1066 | 89.0 | 178 | 1.2135 | 0.4110 | 1.2126 | 1.1012 |
| 0.0999 | 90.0 | 180 | 1.1746 | 0.4321 | 1.1736 | 1.0834 |
| 0.0999 | 91.0 | 182 | 1.1774 | 0.4286 | 1.1764 | 1.0846 |
| 0.0999 | 92.0 | 184 | 1.1884 | 0.4194 | 1.1875 | 1.0897 |
| 0.0999 | 93.0 | 186 | 1.2155 | 0.4141 | 1.2146 | 1.1021 |
| 0.0999 | 94.0 | 188 | 1.2217 | 0.4089 | 1.2208 | 1.1049 |
| 0.0873 | 95.0 | 190 | 1.2050 | 0.4090 | 1.2040 | 1.0973 |
| 0.0873 | 96.0 | 192 | 1.1722 | 0.4326 | 1.1712 | 1.0822 |
| 0.0873 | 97.0 | 194 | 1.1553 | 0.4353 | 1.1544 | 1.0744 |
| 0.0873 | 98.0 | 196 | 1.1463 | 0.4408 | 1.1454 | 1.0702 |
| 0.0873 | 99.0 | 198 | 1.1422 | 0.4426 | 1.1412 | 1.0683 |
| 0.0817 | 100.0 | 200 | 1.1448 | 0.4410 | 1.1439 | 1.0695 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
adammandic87/5aff7f12-f427-4fe6-8b44-023e83b11cd2
|
adammandic87
| 2025-02-01T09:43:50Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2025-02-01T09:37:27Z |
---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5aff7f12-f427-4fe6-8b44-023e83b11cd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-1.3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96a2fc66c5b07ef1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96a2fc66c5b07ef1_train_data.json
type:
field_instruction: timecoded_cc
field_output: qa
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/5aff7f12-f427-4fe6-8b44-023e83b11cd2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/96a2fc66c5b07ef1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d3343316-7c96-4efd-ae85-68e87a921e72
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d3343316-7c96-4efd-ae85-68e87a921e72
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5aff7f12-f427-4fe6-8b44-023e83b11cd2
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7257
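
The card does not include usage code. As a minimal sketch (an assumption, not part of the original card), the adapter produced by this axolotl run should follow the standard PEFT layout and could be attached to the base model like this; the prompt string is a placeholder:

```python
# Minimal sketch: load the base model and attach this LoRA adapter via PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = PeftModel.from_pretrained(base, "adammandic87/5aff7f12-f427-4fe6-8b44-023e83b11cd2")

# The axolotl config formats prompts as plain '{instruction}', so the raw text is passed directly.
prompt = "Example time-coded captions go here."  # placeholder input, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```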
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
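
Not stated in the card, but the listed total train batch size is simply the per-device batch size times the gradient accumulation steps (single device assumed). A one-line sanity check of the arithmetic:

```python
# Sanity check of the listed values (single-device assumption): 2 * 4 == 8
train_batch_size, gradient_accumulation_steps = 2, 4
assert train_batch_size * gradient_accumulation_steps == 8  # total_train_batch_size
```

The same relation holds for the other fine-tunes below.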
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.4485 | 0.0001 | 1 | 1.2490 |
| 3.1446 | 0.0064 | 50 | 0.7800 |
| 3.0597 | 0.0127 | 100 | 0.7424 |
| 2.7162 | 0.0191 | 150 | 0.7288 |
| 2.9025 | 0.0254 | 200 | 0.7257 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrferr3t/09052abc-9303-4278-921c-7a88d3e9944a
|
mrferr3t
| 2025-02-01T09:43:48Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T09:41:45Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 09052abc-9303-4278-921c-7a88d3e9944a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b7cdc27ddaec015e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b7cdc27ddaec015e_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/09052abc-9303-4278-921c-7a88d3e9944a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/b7cdc27ddaec015e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 537e2746-bdbf-433d-87a7-94617348b3f7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 537e2746-bdbf-433d-87a7-94617348b3f7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 09052abc-9303-4278-921c-7a88d3e9944a
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5702
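
No usage snippet is given in the card. As a hedged sketch (assuming the adapter follows the standard PEFT layout and that the base tokenizer's chat template is appropriate), inference could look like this:

```python
# Minimal sketch: attach the LoRA adapter to the instruct base model and run a chat-style prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-Coder-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B-Instruct")
model = PeftModel.from_pretrained(base, "mrferr3t/09052abc-9303-4278-921c-7a88d3e9944a")

messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```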
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5513 | 0.0009 | 1 | 0.7753 |
| 0.5365 | 0.0430 | 50 | 0.5702 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trenden/61ab6e36-d2b5-4986-b78c-2ab482761928
|
trenden
| 2025-02-01T09:41:14Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2025-02-01T09:36:58Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 61ab6e36-d2b5-4986-b78c-2ab482761928
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 868fde04833ea01a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/868fde04833ea01a_train_data.json
type:
field_instruction: query
field_output: ori_review
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/61ab6e36-d2b5-4986-b78c-2ab482761928
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/868fde04833ea01a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee22ac0a-39e2-4a24-88a0-8dbcec863f82
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee22ac0a-39e2-4a24-88a0-8dbcec863f82
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 61ab6e36-d2b5-4986-b78c-2ab482761928
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9602
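
The card gives no usage code. One common option for a LoRA adapter like this, shown here as a hedged sketch rather than an official recipe, is to merge the adapter into the base weights for standalone use:

```python
# Minimal sketch: merge the LoRA weights into the base model so it can be used without PEFT at inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it")
model = PeftModel.from_pretrained(base, "trenden/61ab6e36-d2b5-4986-b78c-2ab482761928")
merged = model.merge_and_unload()  # folds the adapter deltas back into the base weights
merged.save_pretrained("gemma-2-2b-it-merged")  # hypothetical local output directory
AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it").save_pretrained("gemma-2-2b-it-merged")
```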
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 3.0819 |
| 2.1068 | 0.0200 | 50 | 2.0122 |
| 2.0288 | 0.0401 | 100 | 1.9774 |
| 1.9738 | 0.0601 | 150 | 1.9645 |
| 1.9129 | 0.0801 | 200 | 1.9602 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/05c49357-4c2a-47e7-b616-e928342de0c0
|
nhung01
| 2025-02-01T09:39:01Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-01T09:09:54Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 05c49357-4c2a-47e7-b616-e928342de0c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 81368e48ca14d203_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/81368e48ca14d203_train_data.json
type:
field_instruction: package_name
field_output: review
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/05c49357-4c2a-47e7-b616-e928342de0c0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/81368e48ca14d203_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8128fd5f-66c6-40af-8623-b2defccd28b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8128fd5f-66c6-40af-8623-b2defccd28b8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 05c49357-4c2a-47e7-b616-e928342de0c0
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5201
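
The card has no usage example. The tags indicate an 8-bit bitsandbytes base, so a hedged sketch (assumption, not from the card) would load the base the same way before attaching the adapter:

```python
# Minimal sketch: load the tinyllama base in 8-bit (as the tags suggest) and attach the LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/tinyllama", quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama")
model = PeftModel.from_pretrained(base, "nhung01/05c49357-4c2a-47e7-b616-e928342de0c0")
```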
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1343 | 0.0059 | 200 | 5.5201 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leixa/cbbc61b7-68a7-4b80-bde2-a1bebeacb932
|
leixa
| 2025-02-01T09:38:06Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T08:08:18Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cbbc61b7-68a7-4b80-bde2-a1bebeacb932
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de7a2442f31942d3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de7a2442f31942d3_train_data.json
type:
field_input: query
field_instruction: task
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/cbbc61b7-68a7-4b80-bde2-a1bebeacb932
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/de7a2442f31942d3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: ce8a4f0c-b461-4ec2-b171-ebb3f2186039
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ce8a4f0c-b461-4ec2-b171-ebb3f2186039
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cbbc61b7-68a7-4b80-bde2-a1bebeacb932
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9294
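
No inference code is provided. As a hedged sketch (assuming the adapter follows the standard PEFT layout, with placeholder task and query text), prompts would mirror the `'{instruction} {input}'` format from the axolotl config above:

```python
# Minimal sketch: attach the adapter and format the prompt as '{instruction} {input}' per the training config.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
model = PeftModel.from_pretrained(base, "leixa/cbbc61b7-68a7-4b80-bde2-a1bebeacb932")

task = "Retrieve a relevant passage."          # placeholder for the 'task' field
query = "How do transformers use attention?"   # placeholder for the 'query' field
inputs = tokenizer(f"{task} {query}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```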
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.3270 |
| 1.2925 | 0.0015 | 9 | 1.2278 |
| 1.095 | 0.0030 | 18 | 1.0679 |
| 0.9885 | 0.0044 | 27 | 1.0151 |
| 0.958 | 0.0059 | 36 | 0.9830 |
| 0.9455 | 0.0074 | 45 | 0.9639 |
| 0.9423 | 0.0089 | 54 | 0.9499 |
| 0.9033 | 0.0104 | 63 | 0.9412 |
| 0.9238 | 0.0119 | 72 | 0.9355 |
| 0.9539 | 0.0133 | 81 | 0.9316 |
| 0.8881 | 0.0148 | 90 | 0.9298 |
| 0.9571 | 0.0163 | 99 | 0.9294 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
arash-rasouli/clip-vit-large-patch14-336-f
|
arash-rasouli
| 2025-02-01T09:37:48Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2025-02-01T09:33:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
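
The card leaves this section blank. As a hedged sketch, assuming the model exposes the standard CLIP zero-shot interface implied by its tags (`clip`, `zero-shot-image-classification`), usage might look like this:

```python
# Hedged sketch: zero-shot image classification with the standard CLIP interface (image path and labels are placeholders).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("arash-rasouli/clip-vit-large-patch14-336-f")
processor = CLIPProcessor.from_pretrained("arash-rasouli/clip-vit-large-patch14-336-f")

image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```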
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daniel40/3be1be7d-384e-4347-9e83-ff49ee5d6ed4
|
daniel40
| 2025-02-01T09:35:25Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-02-01T09:23:48Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3be1be7d-384e-4347-9e83-ff49ee5d6ed4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b917ee80f66720cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b917ee80f66720cc_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/3be1be7d-384e-4347-9e83-ff49ee5d6ed4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b917ee80f66720cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3be1be7d-384e-4347-9e83-ff49ee5d6ed4
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
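
The reported NaN loss suggests the run diverged. A hedged sketch (not from the card) of a quick sanity check on the adapter weights before relying on them:

```python
# Minimal sketch: verify that the LoRA tensors in this adapter are finite before using it.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "daniel40/3be1be7d-384e-4347-9e83-ff49ee5d6ed4")
bad = [n for n, p in model.named_parameters() if "lora" in n and not torch.isfinite(p).all()]
print("non-finite adapter tensors:", bad or "none")
```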
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0054 | 50 | nan |
| 0.0 | 0.0107 | 100 | nan |
| 0.0 | 0.0161 | 150 | nan |
| 0.0 | 0.0215 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
robiual-awal/dddd29c2-746c-412e-a748-d86166cc73be
|
robiual-awal
| 2025-02-01T09:35:19Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-02-01T09:23:48Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dddd29c2-746c-412e-a748-d86166cc73be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b917ee80f66720cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b917ee80f66720cc_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/dddd29c2-746c-412e-a748-d86166cc73be
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b917ee80f66720cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b7d7ce8-2550-4af4-b238-2dd8fab8f073
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dddd29c2-746c-412e-a748-d86166cc73be
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0054 | 50 | nan |
| 0.0 | 0.0107 | 100 | nan |
| 0.0 | 0.0161 | 150 | nan |
| 0.0 | 0.0215 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|