modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Mirapape/julia-lora-v1 | Mirapape | "2025-01-29T16:10:04Z" | 23 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-29T15:43:43Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JULIA
---
# Julia Lora V1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JULIA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repo's LoRA weights
pipeline.load_lora_weights('Mirapape/julia-lora-v1', weight_name='lora.safetensors')
# Remember to include the trigger word JULIA in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
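As a hedged sketch of weight control (the adapter name `julia` and the 0.8 strength below are illustrative choices, not values shipped with this repo), you can load the LoRA under an explicit adapter name and scale it before generating:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# 'julia' is an arbitrary adapter label chosen here; 0.8 is an example LoRA strength
pipeline.load_lora_weights('Mirapape/julia-lora-v1', weight_name='lora.safetensors', adapter_name='julia')
pipeline.set_adapters(['julia'], adapter_weights=[0.8])
image = pipeline('JULIA portrait photo, natural light').images[0]
```
A lower strength dilutes the LoRA's influence on the base model; 1.0 applies it at full weight.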
|
BAAI/bge-en-icl | BAAI | "2025-01-15T02:43:23Z" | 20,844 | 121 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mistral",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"arxiv:2409.15700",
"arxiv:2309.07597",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-07-25T09:37:34Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
license: apache-2.0
model-index:
- name: bge-en-icl
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 93.1492537313433
- type: ap
value: 72.56132559564212
- type: f1
value: 89.71796898040243
- type: main_score
value: 93.1492537313433
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.98372499999999
- type: ap
value: 95.62303091773919
- type: f1
value: 96.98308191715637
- type: main_score
value: 96.98372499999999
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 61.461999999999996
- type: f1
value: 60.57257766583118
- type: main_score
value: 61.461999999999996
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 83.07967801208441
- type: ndcg_at_1
value: 66.50071123755335
- type: ndcg_at_3
value: 80.10869593172173
- type: ndcg_at_5
value: 81.89670542467924
- type: ndcg_at_10
value: 83.07967801208441
- type: ndcg_at_100
value: 83.5991349601075
- type: ndcg_at_1000
value: 83.5991349601075
- type: map_at_1
value: 66.50071123755335
- type: map_at_3
value: 76.83736367946898
- type: map_at_5
value: 77.8473210052158
- type: map_at_10
value: 78.35472690735851
- type: map_at_100
value: 78.47388207611678
- type: map_at_1000
value: 78.47388207611678
- type: precision_at_1
value: 66.50071123755335
- type: precision_at_3
value: 29.848269321953076
- type: precision_at_5
value: 18.762446657183045
- type: precision_at_10
value: 9.736842105262909
- type: precision_at_100
value: 0.9964438122332677
- type: precision_at_1000
value: 0.09964438122332549
- type: recall_at_1
value: 66.50071123755335
- type: recall_at_3
value: 89.5448079658606
- type: recall_at_5
value: 93.8122332859175
- type: recall_at_10
value: 97.36842105263158
- type: recall_at_100
value: 99.6443812233286
- type: recall_at_1000
value: 99.6443812233286
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 54.43859683357485
- type: v_measure
value: 54.43859683357485
- type: v_measure_std
value: 14.511128158596337
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 49.33365996236564
- type: v_measure
value: 49.33365996236564
- type: v_measure_std
value: 14.61261944856548
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: main_score
value: 65.15263966490278
- type: map
value: 65.15263966490278
- type: mrr
value: 77.90331090885107
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: main_score
value: 86.47365710792691
- type: cosine_spearman
value: 86.47365710792691
- type: spearman
value: 86.47365710792691
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 91.48701298701299
- type: f1
value: 91.4733869423637
- type: main_score
value: 91.48701298701299
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 53.050461108038036
- type: v_measure
value: 53.050461108038036
- type: v_measure_std
value: 0.9436104839012786
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 48.38215568371151
- type: v_measure
value: 48.38215568371151
- type: v_measure_std
value: 0.9104384504649026
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack
metrics:
- type: main_score
value: 47.308084499970704
- type: ndcg_at_1
value: 36.038578730542476
- type: ndcg_at_3
value: 41.931365356453036
- type: ndcg_at_5
value: 44.479015523894994
- type: ndcg_at_10
value: 47.308084499970704
- type: ndcg_at_100
value: 52.498062430513606
- type: ndcg_at_1000
value: 54.2908789514719
- type: map_at_1
value: 30.38821701528966
- type: map_at_3
value: 37.974871761903636
- type: map_at_5
value: 39.85399878507757
- type: map_at_10
value: 41.31456611036795
- type: map_at_100
value: 42.62907836655835
- type: map_at_1000
value: 42.737235870659845
- type: precision_at_1
value: 36.038578730542476
- type: precision_at_3
value: 19.39960180094633
- type: precision_at_5
value: 13.79264655952497
- type: precision_at_10
value: 8.399223517333388
- type: precision_at_100
value: 1.2992373779520896
- type: precision_at_1000
value: 0.16327170951909567
- type: recall_at_1
value: 30.38821701528966
- type: recall_at_3
value: 45.51645512564165
- type: recall_at_5
value: 52.06077167834868
- type: recall_at_10
value: 60.38864106788279
- type: recall_at_100
value: 82.76968509918343
- type: recall_at_1000
value: 94.84170217080344
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 45.4272998284769
- type: ndcg_at_1
value: 44.36482084690554
- type: ndcg_at_3
value: 38.13005747178844
- type: ndcg_at_5
value: 40.83474510717123
- type: ndcg_at_10
value: 45.4272998284769
- type: ndcg_at_100
value: 52.880220707479516
- type: ndcg_at_1000
value: 55.364753427333
- type: map_at_1
value: 19.200868621064064
- type: map_at_3
value: 28.33785740137525
- type: map_at_5
value: 31.67162504524064
- type: map_at_10
value: 34.417673164090075
- type: map_at_100
value: 36.744753097028976
- type: map_at_1000
value: 36.91262189016135
- type: precision_at_1
value: 44.36482084690554
- type: precision_at_3
value: 29.14223669923975
- type: precision_at_5
value: 22.410423452768388
- type: precision_at_10
value: 14.293159609120309
- type: precision_at_100
value: 2.248859934853431
- type: precision_at_1000
value: 0.2722475570032542
- type: recall_at_1
value: 19.200868621064064
- type: recall_at_3
value: 34.132464712269176
- type: recall_at_5
value: 42.35613463626491
- type: recall_at_10
value: 52.50814332247546
- type: recall_at_100
value: 77.16178067318128
- type: recall_at_1000
value: 90.59174809989138
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 51.634197691802754
- type: ndcg_at_1
value: 64.375
- type: ndcg_at_3
value: 55.677549598242614
- type: ndcg_at_5
value: 53.44347199908503
- type: ndcg_at_10
value: 51.634197691802754
- type: ndcg_at_100
value: 56.202861267183415
- type: ndcg_at_1000
value: 63.146019108272576
- type: map_at_1
value: 9.789380503780919
- type: map_at_3
value: 16.146582195277016
- type: map_at_5
value: 19.469695222167193
- type: map_at_10
value: 24.163327344766145
- type: map_at_100
value: 35.47047690245571
- type: map_at_1000
value: 37.5147432331838
- type: precision_at_1
value: 76.25
- type: precision_at_3
value: 59.08333333333333
- type: precision_at_5
value: 52.24999999999997
- type: precision_at_10
value: 42.54999999999994
- type: precision_at_100
value: 13.460000000000008
- type: precision_at_1000
value: 2.4804999999999966
- type: recall_at_1
value: 9.789380503780919
- type: recall_at_3
value: 17.48487134027656
- type: recall_at_5
value: 22.312024269698806
- type: recall_at_10
value: 30.305380335237324
- type: recall_at_100
value: 62.172868946596424
- type: recall_at_1000
value: 85.32410301328747
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 93.36
- type: f1
value: 89.73665936982262
- type: main_score
value: 93.36
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: main_score
value: 92.82809814626805
- type: ndcg_at_1
value: 88.98889888988899
- type: ndcg_at_3
value: 91.82404417747676
- type: ndcg_at_5
value: 92.41785792357787
- type: ndcg_at_10
value: 92.82809814626805
- type: ndcg_at_100
value: 93.31730867509245
- type: ndcg_at_1000
value: 93.45171203408582
- type: map_at_1
value: 82.64125817343636
- type: map_at_3
value: 89.39970782792554
- type: map_at_5
value: 89.96799501378695
- type: map_at_10
value: 90.27479706587437
- type: map_at_100
value: 90.45185655778057
- type: map_at_1000
value: 90.46130471574544
- type: precision_at_1
value: 88.98889888988899
- type: precision_at_3
value: 34.923492349234245
- type: precision_at_5
value: 21.524152415244043
- type: precision_at_10
value: 11.033603360337315
- type: precision_at_100
value: 1.1521152115211895
- type: precision_at_1000
value: 0.11765676567657675
- type: recall_at_1
value: 82.64125817343636
- type: recall_at_3
value: 94.35195900542428
- type: recall_at_5
value: 95.9071323799047
- type: recall_at_10
value: 97.04234113887586
- type: recall_at_100
value: 98.77282371094255
- type: recall_at_1000
value: 99.5555567461508
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: main_score
value: 59.67151242793314
- type: ndcg_at_1
value: 57.407407407407405
- type: ndcg_at_3
value: 53.79975378289304
- type: ndcg_at_5
value: 56.453379423655406
- type: ndcg_at_10
value: 59.67151242793314
- type: ndcg_at_100
value: 65.34055762539253
- type: ndcg_at_1000
value: 67.07707746043032
- type: map_at_1
value: 30.65887045053714
- type: map_at_3
value: 44.09107110881799
- type: map_at_5
value: 48.18573748068346
- type: map_at_10
value: 51.03680979612876
- type: map_at_100
value: 53.03165194566928
- type: map_at_1000
value: 53.16191096190861
- type: precision_at_1
value: 57.407407407407405
- type: precision_at_3
value: 35.493827160493886
- type: precision_at_5
value: 26.913580246913547
- type: precision_at_10
value: 16.435185185185155
- type: precision_at_100
value: 2.2685185185184986
- type: precision_at_1000
value: 0.25864197530863964
- type: recall_at_1
value: 30.65887045053714
- type: recall_at_3
value: 48.936723427464194
- type: recall_at_5
value: 58.55942925387371
- type: recall_at_10
value: 68.45128551147073
- type: recall_at_100
value: 88.24599311867836
- type: recall_at_1000
value: 98.18121693121691
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: main_score
value: 85.13780800141961
- type: ndcg_at_1
value: 89.9392302498312
- type: ndcg_at_3
value: 81.2061569376288
- type: ndcg_at_5
value: 83.53311592078133
- type: ndcg_at_10
value: 85.13780800141961
- type: ndcg_at_100
value: 87.02630661625386
- type: ndcg_at_1000
value: 87.47294723601075
- type: map_at_1
value: 44.9696151249156
- type: map_at_3
value: 76.46972766148966
- type: map_at_5
value: 78.47749268512187
- type: map_at_10
value: 79.49792611170005
- type: map_at_100
value: 80.09409086274644
- type: map_at_1000
value: 80.11950878917663
- type: precision_at_1
value: 89.9392302498312
- type: precision_at_3
value: 53.261309925724234
- type: precision_at_5
value: 33.79338284942924
- type: precision_at_10
value: 17.69750168805041
- type: precision_at_100
value: 1.9141120864280805
- type: precision_at_1000
value: 0.19721809588118133
- type: recall_at_1
value: 44.9696151249156
- type: recall_at_3
value: 79.8919648885888
- type: recall_at_5
value: 84.48345712356516
- type: recall_at_10
value: 88.48750844024308
- type: recall_at_100
value: 95.70560432140446
- type: recall_at_1000
value: 98.60904794058068
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 96.9144
- type: ap
value: 95.45276911068486
- type: f1
value: 96.91412729455966
- type: main_score
value: 96.9144
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: main_score
value: 46.78865753107054
- type: ndcg_at_1
value: 26.63323782234957
- type: ndcg_at_3
value: 38.497585804985754
- type: ndcg_at_5
value: 42.72761631631636
- type: ndcg_at_10
value: 46.78865753107054
- type: ndcg_at_100
value: 51.96170786623209
- type: ndcg_at_1000
value: 52.82713901970963
- type: map_at_1
value: 25.89063992359121
- type: map_at_3
value: 35.299466730340654
- type: map_at_5
value: 37.68771887933786
- type: map_at_10
value: 39.40908074468253
- type: map_at_100
value: 40.53444082323405
- type: map_at_1000
value: 40.57183037649452
- type: precision_at_1
value: 26.63323782234957
- type: precision_at_3
value: 16.265520534861793
- type: precision_at_5
value: 11.902578796562304
- type: precision_at_10
value: 7.262177650430416
- type: precision_at_100
value: 0.9819484240687512
- type: precision_at_1000
value: 0.10571633237823287
- type: recall_at_1
value: 25.89063992359121
- type: recall_at_3
value: 46.99737344794652
- type: recall_at_5
value: 57.160936007640906
- type: recall_at_10
value: 69.43409742120343
- type: recall_at_100
value: 92.86413562559697
- type: recall_at_1000
value: 99.3230659025788
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 98.42225262197901
- type: f1
value: 98.31652547061115
- type: main_score
value: 98.42225262197901
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 94.00136798905609
- type: f1
value: 82.7022316533099
- type: main_score
value: 94.00136798905609
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 82.92535305985204
- type: f1
value: 79.885538231847
- type: main_score
value: 82.92535305985204
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 85.60188298587758
- type: f1
value: 84.87416963499224
- type: main_score
value: 85.60188298587758
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 45.86171497327639
- type: v_measure
value: 45.86171497327639
- type: v_measure_std
value: 1.551347259003324
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 44.33336692345644
- type: v_measure
value: 44.33336692345644
- type: v_measure_std
value: 1.5931408596404715
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: main_score
value: 30.597409734750503
- type: map
value: 30.597409734750503
- type: mrr
value: 31.397041548018457
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: main_score
value: 41.850870119787835
- type: ndcg_at_1
value: 52.47678018575851
- type: ndcg_at_3
value: 47.43993801247414
- type: ndcg_at_5
value: 45.08173173082719
- type: ndcg_at_10
value: 41.850870119787835
- type: ndcg_at_100
value: 37.79284946590978
- type: ndcg_at_1000
value: 46.58046062123418
- type: map_at_1
value: 6.892464464226138
- type: map_at_3
value: 12.113195798233127
- type: map_at_5
value: 13.968475602788812
- type: map_at_10
value: 16.47564069781326
- type: map_at_100
value: 20.671726065190025
- type: map_at_1000
value: 22.328875914012006
- type: precision_at_1
value: 53.86996904024768
- type: precision_at_3
value: 43.96284829721363
- type: precision_at_5
value: 38.69969040247682
- type: precision_at_10
value: 30.928792569659457
- type: precision_at_100
value: 9.507739938080498
- type: precision_at_1000
value: 2.25882352941176
- type: recall_at_1
value: 6.892464464226138
- type: recall_at_3
value: 13.708153358278407
- type: recall_at_5
value: 16.651919797359145
- type: recall_at_10
value: 21.01801714352559
- type: recall_at_100
value: 37.01672102843443
- type: recall_at_1000
value: 69.8307270724072
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: main_score
value: 73.88350836507092
- type: ndcg_at_1
value: 57.0683661645423
- type: ndcg_at_3
value: 67.89935813080585
- type: ndcg_at_5
value: 71.47769719452941
- type: ndcg_at_10
value: 73.88350836507092
- type: ndcg_at_100
value: 75.76561068060907
- type: ndcg_at_1000
value: 75.92437662684215
- type: map_at_1
value: 51.00424874468904
- type: map_at_3
value: 63.87359984550011
- type: map_at_5
value: 66.23696407879494
- type: map_at_10
value: 67.42415446608673
- type: map_at_100
value: 67.92692839842621
- type: map_at_1000
value: 67.93437922640133
- type: precision_at_1
value: 57.0683661645423
- type: precision_at_3
value: 29.692931633836416
- type: precision_at_5
value: 20.046349942062854
- type: precision_at_10
value: 10.950173812283
- type: precision_at_100
value: 1.1995944380069687
- type: precision_at_1000
value: 0.12146581691772171
- type: recall_at_1
value: 51.00424874468904
- type: recall_at_3
value: 75.93665507918116
- type: recall_at_5
value: 83.95133256083433
- type: recall_at_10
value: 90.78794901506375
- type: recall_at_100
value: 98.61915797605253
- type: recall_at_1000
value: 99.7827346465817
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: main_score
value: 90.95410848372035
- type: ndcg_at_1
value: 84.61999999999999
- type: ndcg_at_3
value: 88.57366734033212
- type: ndcg_at_5
value: 89.89804048972175
- type: ndcg_at_10
value: 90.95410848372035
- type: ndcg_at_100
value: 91.83227134455773
- type: ndcg_at_1000
value: 91.88368412611601
- type: map_at_1
value: 73.4670089207039
- type: map_at_3
value: 84.87862925508942
- type: map_at_5
value: 86.68002324701408
- type: map_at_10
value: 87.7165466015312
- type: map_at_100
value: 88.28718809614146
- type: map_at_1000
value: 88.29877148480672
- type: precision_at_1
value: 84.61999999999999
- type: precision_at_3
value: 38.82333333333838
- type: precision_at_5
value: 25.423999999998642
- type: precision_at_10
value: 13.787999999998583
- type: precision_at_100
value: 1.5442999999999767
- type: precision_at_1000
value: 0.15672999999997972
- type: recall_at_1
value: 73.4670089207039
- type: recall_at_3
value: 89.98389854832143
- type: recall_at_5
value: 93.88541046010576
- type: recall_at_10
value: 96.99779417520634
- type: recall_at_100
value: 99.80318763957743
- type: recall_at_1000
value: 99.99638888888889
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 72.33008348681277
- type: v_measure
value: 72.33008348681277
- type: v_measure_std
value: 2.9203215463933008
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 72.72079657828903
- type: v_measure
value: 72.72079657828903
- type: v_measure_std
value: 11.930271663428735
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: main_score
value: 25.25865384510787
- type: ndcg_at_1
value: 28.7
- type: ndcg_at_3
value: 23.61736427940938
- type: ndcg_at_5
value: 20.845690325673885
- type: ndcg_at_10
value: 25.25865384510787
- type: ndcg_at_100
value: 36.18596641088721
- type: ndcg_at_1000
value: 41.7166868935345
- type: map_at_1
value: 5.828333333333361
- type: map_at_3
value: 10.689166666666676
- type: map_at_5
value: 13.069916666666668
- type: map_at_10
value: 15.4901164021164
- type: map_at_100
value: 18.61493245565425
- type: map_at_1000
value: 18.99943478016456
- type: precision_at_1
value: 28.7
- type: precision_at_3
value: 22.30000000000006
- type: precision_at_5
value: 18.55999999999997
- type: precision_at_10
value: 13.289999999999946
- type: precision_at_100
value: 2.905000000000005
- type: precision_at_1000
value: 0.4218999999999946
- type: recall_at_1
value: 5.828333333333361
- type: recall_at_3
value: 13.548333333333387
- type: recall_at_5
value: 18.778333333333308
- type: recall_at_10
value: 26.939999999999902
- type: recall_at_100
value: 58.91333333333344
- type: recall_at_1000
value: 85.57499999999972
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: main_score
value: 83.86733787791422
- type: cosine_spearman
value: 83.86733787791422
- type: spearman
value: 83.86733787791422
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: main_score
value: 78.14269330480724
- type: cosine_spearman
value: 78.14269330480724
- type: spearman
value: 78.14269330480724
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: main_score
value: 86.58640009300751
- type: cosine_spearman
value: 86.58640009300751
- type: spearman
value: 86.58640009300751
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: main_score
value: 82.8292579957437
- type: cosine_spearman
value: 82.8292579957437
- type: spearman
value: 82.8292579957437
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: main_score
value: 87.77203714228862
- type: cosine_spearman
value: 87.77203714228862
- type: spearman
value: 87.77203714228862
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: main_score
value: 87.0439304006969
- type: cosine_spearman
value: 87.0439304006969
- type: spearman
value: 87.0439304006969
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: main_score
value: 91.24736138013424
- type: cosine_spearman
value: 91.24736138013424
- type: spearman
value: 91.24736138013424
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: main_score
value: 70.07326214706
- type: cosine_spearman
value: 70.07326214706
- type: spearman
value: 70.07326214706
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: main_score
value: 88.42076443255168
- type: cosine_spearman
value: 88.42076443255168
- type: spearman
value: 88.42076443255168
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: main_score
value: 86.9584489124583
- type: map
value: 86.9584489124583
- type: mrr
value: 96.59475328592976
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: main_score
value: 79.09159079425369
- type: ndcg_at_1
value: 66.0
- type: ndcg_at_3
value: 74.98853481223065
- type: ndcg_at_5
value: 77.29382051205019
- type: ndcg_at_10
value: 79.09159079425369
- type: ndcg_at_100
value: 80.29692802526776
- type: ndcg_at_1000
value: 80.55210036585547
- type: map_at_1
value: 62.994444444444454
- type: map_at_3
value: 71.7425925925926
- type: map_at_5
value: 73.6200925925926
- type: map_at_10
value: 74.50223544973547
- type: map_at_100
value: 74.82438594015447
- type: map_at_1000
value: 74.83420474892468
- type: precision_at_1
value: 66.0
- type: precision_at_3
value: 29.44444444444439
- type: precision_at_5
value: 19.40000000000008
- type: precision_at_10
value: 10.366666666666715
- type: precision_at_100
value: 1.0999999999999928
- type: precision_at_1000
value: 0.11200000000000007
- type: recall_at_1
value: 62.994444444444454
- type: recall_at_3
value: 80.89999999999998
- type: recall_at_5
value: 86.72777777777779
- type: recall_at_10
value: 91.88888888888887
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 99.0
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: main_score
value: 97.26819027722253
- type: cos_sim_accuracy
value: 99.88019801980198
- type: cos_sim_accuracy_threshold
value: 76.67685151100159
- type: cos_sim_ap
value: 97.23260568085786
- type: cos_sim_f1
value: 93.91824526420737
- type: cos_sim_f1_threshold
value: 75.82710981369019
- type: cos_sim_precision
value: 93.63817097415506
- type: cos_sim_recall
value: 94.19999999999999
- type: dot_accuracy
value: 99.88019801980198
- type: dot_accuracy_threshold
value: 76.67686343193054
- type: dot_ap
value: 97.23260568085786
- type: dot_f1
value: 93.91824526420737
- type: dot_f1_threshold
value: 75.8271336555481
- type: dot_precision
value: 93.63817097415506
- type: dot_recall
value: 94.19999999999999
- type: euclidean_accuracy
value: 99.88019801980198
- type: euclidean_accuracy_threshold
value: 68.29807758331299
- type: euclidean_ap
value: 97.23259982599497
- type: euclidean_f1
value: 93.91824526420737
- type: euclidean_f1_threshold
value: 69.53110694885254
- type: euclidean_precision
value: 93.63817097415506
- type: euclidean_recall
value: 94.19999999999999
- type: manhattan_accuracy
value: 99.87821782178217
- type: manhattan_accuracy_threshold
value: 3482.6908111572266
- type: manhattan_ap
value: 97.26819027722253
- type: manhattan_f1
value: 93.92592592592592
- type: manhattan_f1_threshold
value: 3555.5641174316406
- type: manhattan_precision
value: 92.78048780487805
- type: manhattan_recall
value: 95.1
- type: max_accuracy
value: 99.88019801980198
- type: max_ap
value: 97.26819027722253
- type: max_f1
value: 93.92592592592592
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 81.32419328350603
- type: v_measure
value: 81.32419328350603
- type: v_measure_std
value: 2.666861121694755
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 46.048387963107565
- type: v_measure
value: 46.048387963107565
- type: v_measure_std
value: 1.4102848576321703
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: main_score
value: 56.70574900554072
- type: map
value: 56.70574900554072
- type: mrr
value: 57.517109116373824
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: main_score
value: 30.76932903185174
- type: cosine_spearman
value: 30.76932903185174
- type: spearman
value: 30.76932903185174
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: main_score
value: 79.07987651251462
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_3
value: 79.86598407528447
- type: ndcg_at_5
value: 79.27684428714952
- type: ndcg_at_10
value: 79.07987651251462
- type: ndcg_at_100
value: 64.55029164391163
- type: ndcg_at_1000
value: 59.42333857860492
- type: map_at_1
value: 0.226053732680979
- type: map_at_3
value: 0.644034626013194
- type: map_at_5
value: 1.045196967937728
- type: map_at_10
value: 2.0197496659905085
- type: map_at_100
value: 13.316018005224159
- type: map_at_1000
value: 33.784766957424104
- type: precision_at_1
value: 88.0
- type: precision_at_3
value: 86.66666666666667
- type: precision_at_5
value: 85.20000000000002
- type: precision_at_10
value: 84.19999999999997
- type: precision_at_100
value: 67.88000000000001
- type: precision_at_1000
value: 26.573999999999998
- type: recall_at_1
value: 0.226053732680979
- type: recall_at_3
value: 0.6754273711472734
- type: recall_at_5
value: 1.1168649828059245
- type: recall_at_10
value: 2.2215081031265207
- type: recall_at_100
value: 16.694165236664727
- type: recall_at_1000
value: 56.7022214857503
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: main_score
value: 30.47934263207554
- type: ndcg_at_1
value: 33.6734693877551
- type: ndcg_at_3
value: 34.36843900446739
- type: ndcg_at_5
value: 32.21323786731918
- type: ndcg_at_10
value: 30.47934263207554
- type: ndcg_at_100
value: 41.49598869753928
- type: ndcg_at_1000
value: 52.32963949183662
- type: map_at_1
value: 3.0159801678718168
- type: map_at_3
value: 7.13837927642557
- type: map_at_5
value: 9.274004610363466
- type: map_at_10
value: 12.957368366814324
- type: map_at_100
value: 19.3070585127604
- type: map_at_1000
value: 20.809777161133532
- type: precision_at_1
value: 34.69387755102041
- type: precision_at_3
value: 36.054421768707485
- type: precision_at_5
value: 32.24489795918368
- type: precision_at_10
value: 27.142857142857146
- type: precision_at_100
value: 8.326530612244898
- type: precision_at_1000
value: 1.5755102040816336
- type: recall_at_1
value: 3.0159801678718168
- type: recall_at_3
value: 8.321771388428257
- type: recall_at_5
value: 11.737532394366069
- type: recall_at_10
value: 19.49315139822179
- type: recall_at_100
value: 50.937064145519685
- type: recall_at_1000
value: 83.4358283484675
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 93.173828125
- type: ap
value: 46.040184641424396
- type: f1
value: 80.77280549412752
- type: main_score
value: 93.173828125
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 79.9320882852292
- type: f1
value: 80.22638685975485
- type: main_score
value: 79.9320882852292
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 68.98152919711418
- type: v_measure
value: 68.98152919711418
- type: v_measure_std
value: 1.2519720970652428
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: main_score
value: 79.34189681158234
- type: cos_sim_accuracy
value: 87.68552184538356
- type: cos_sim_accuracy_threshold
value: 76.06316804885864
- type: cos_sim_ap
value: 79.34189149773933
- type: cos_sim_f1
value: 72.16386554621849
- type: cos_sim_f1_threshold
value: 73.62890243530273
- type: cos_sim_precision
value: 71.82435964453737
- type: cos_sim_recall
value: 72.5065963060686
- type: dot_accuracy
value: 87.68552184538356
- type: dot_accuracy_threshold
value: 76.06316208839417
- type: dot_ap
value: 79.34189231911259
- type: dot_f1
value: 72.16386554621849
- type: dot_f1_threshold
value: 73.62889647483826
- type: dot_precision
value: 71.82435964453737
- type: dot_recall
value: 72.5065963060686
- type: euclidean_accuracy
value: 87.68552184538356
- type: euclidean_accuracy_threshold
value: 69.19080018997192
- type: euclidean_ap
value: 79.34189681158234
- type: euclidean_f1
value: 72.16386554621849
- type: euclidean_f1_threshold
value: 72.62383103370667
- type: euclidean_precision
value: 71.82435964453737
- type: euclidean_recall
value: 72.5065963060686
- type: manhattan_accuracy
value: 87.661679680515
- type: manhattan_accuracy_threshold
value: 3408.807373046875
- type: manhattan_ap
value: 79.29617544165136
- type: manhattan_f1
value: 72.1957671957672
- type: manhattan_f1_threshold
value: 3597.7684020996094
- type: manhattan_precision
value: 72.38726790450929
- type: manhattan_recall
value: 72.00527704485488
- type: max_accuracy
value: 87.68552184538356
- type: max_ap
value: 79.34189681158234
- type: max_f1
value: 72.1957671957672
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: main_score
value: 87.8635519535718
- type: cos_sim_accuracy
value: 89.80672953778088
- type: cos_sim_accuracy_threshold
value: 73.09532165527344
- type: cos_sim_ap
value: 87.84251379545145
- type: cos_sim_f1
value: 80.25858884373845
- type: cos_sim_f1_threshold
value: 70.57080268859863
- type: cos_sim_precision
value: 77.14103110353643
- type: cos_sim_recall
value: 83.63874345549738
- type: dot_accuracy
value: 89.80672953778088
- type: dot_accuracy_threshold
value: 73.09532761573792
- type: dot_ap
value: 87.84251881260793
- type: dot_f1
value: 80.25858884373845
- type: dot_f1_threshold
value: 70.57079076766968
- type: dot_precision
value: 77.14103110353643
- type: dot_recall
value: 83.63874345549738
- type: euclidean_accuracy
value: 89.80672953778088
- type: euclidean_accuracy_threshold
value: 73.3548641204834
- type: euclidean_ap
value: 87.84251335039049
- type: euclidean_f1
value: 80.25858884373845
- type: euclidean_f1_threshold
value: 76.71923041343689
- type: euclidean_precision
value: 77.14103110353643
- type: euclidean_recall
value: 83.63874345549738
- type: manhattan_accuracy
value: 89.78150347343501
- type: manhattan_accuracy_threshold
value: 3702.7603149414062
- type: manhattan_ap
value: 87.8635519535718
- type: manhattan_f1
value: 80.27105660516332
- type: manhattan_f1_threshold
value: 3843.5962677001953
- type: manhattan_precision
value: 76.9361101306036
- type: manhattan_recall
value: 83.90822297505389
- type: max_accuracy
value: 89.80672953778088
- type: max_ap
value: 87.8635519535718
- type: max_f1
value: 80.27105660516332
task:
type: PairClassification
---
<h1 align="center">FlagEmbedding</h1>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
**BGE-EN-ICL** primarily demonstrates the following capabilities:
- In-context learning ability: by providing few-shot examples in the query, it can significantly enhance the model's ability to handle new tasks (see the format sketch below).
- Outstanding performance: The model has achieved state-of-the-art (SOTA) performance on both BEIR and AIR-Bench.
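The few-shot format is the same one used throughout the Usage section below: each in-context example and the final query are wrapped in `<instruct>` / `<query>` / `<response>` tags. A minimal sketch of how a few-shot query string is assembled (the task and example texts are placeholders taken from the Usage section):
```python
# Mirrors get_detailed_example / get_detailed_instruct from the Usage section below.
task = 'Given a web search query, retrieve relevant passages that answer the query.'
example = f'<instruct>{task}\n<query>what is a virtual interface\n<response>A virtual interface is a software-defined abstraction ...'
examples_prefix = example + '\n\n'  # for zero-shot use, set examples_prefix = ''
query = f'<instruct>{task}\n<query>summit define'
few_shot_query = examples_prefix + query + '\n<response>'  # this is the string the model encodes
print(few_shot_query)
```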
## 📑 Open-source Plan
- [x] Checkpoint
- [x] Training Data
- [x] Technical Report
- [ ] Evaluation Pipeline
The technical report for **BGE-EN-ICL** can be found in [Making Text Embedders Few-Shot Learners](https://arxiv.org/abs/2409.15700).
## Data List
| Data | Introduction |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| [public-data](https://huggingface.co/datasets/cfli/bge-e5data) | Public data identical to [e5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct) |
| [full-data](https://huggingface.co/datasets/cfli/bge-full-data) | The full dataset we used for training |
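Both datasets are hosted on the Hugging Face Hub, so they can be pulled with the `datasets` library. A hedged sketch (assuming the repo exposes a default configuration; pass an explicit config or split name if loading complains):
```python
from datasets import load_dataset

# Training data released with bge-en-icl; see the table above for what each repo contains.
full_data = load_dataset("cfli/bge-full-data")
print(full_data)
```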
## Usage
### Using FlagEmbedding
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
```python
from FlagEmbedding import FlagICLModel
queries = ["how much protein should a female eat", "summit define"]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
examples = [
{'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
'query': 'what is a virtual interface',
'response': "A virtual interface is a software-defined abstraction that mimics the behavior and characteristics of a physical network interface. It allows multiple logical network connections to share the same physical network interface, enabling efficient utilization of network resources. Virtual interfaces are commonly used in virtualization technologies such as virtual machines and containers to provide network connectivity without requiring dedicated hardware. They facilitate flexible network configurations and help in isolating network traffic for security and management purposes."},
{'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
'query': 'causes of back pain in female for a week',
'response': "Back pain in females lasting a week can stem from various factors. Common causes include muscle strain due to lifting heavy objects or improper posture, spinal issues like herniated discs or osteoporosis, menstrual cramps causing referred pain, urinary tract infections, or pelvic inflammatory disease. Pregnancy-related changes can also contribute. Stress and lack of physical activity may exacerbate symptoms. Proper diagnosis by a healthcare professional is crucial for effective treatment and management."}
]
model = FlagICLModel('BAAI/bge-en-icl',
query_instruction_for_retrieval="Given a web search query, retrieve relevant passages that answer the query.",
examples_for_task=examples, # set `examples_for_task=None` to use model without examples
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode_queries(queries)
embeddings_2 = model.encode_corpus(documents)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
By default, FlagICLModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
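For example, a minimal sketch of restricting encoding to a single GPU (device index 0 is an arbitrary choice; the constructor arguments simply repeat the example above):
```python
import os
# Must be set before CUDA is initialized, i.e. before the model is constructed
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # encode on GPU 0 only; "" would hide all GPUs

from FlagEmbedding import FlagICLModel
model = FlagICLModel('BAAI/bge-en-icl',
                     query_instruction_for_retrieval="Given a web search query, retrieve relevant passages that answer the query.",
                     examples_for_task=None,  # zero-shot for brevity
                     use_fp16=True)
```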
### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then take the last hidden state of the final non-padding token as the sentence embedding (last-token pooling, implemented by `last_token_pool` below).
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'<instruct>{task_description}\n<query>{query}'
def get_detailed_example(task_description: str, query: str, response: str) -> str:
return f'<instruct>{task_description}\n<query>{query}\n<response>{response}'
def get_new_queries(queries, query_max_len, examples_prefix, tokenizer):
inputs = tokenizer(
queries,
max_length=query_max_len - len(tokenizer('<s>', add_special_tokens=False)['input_ids']) - len(
tokenizer('\n<response></s>', add_special_tokens=False)['input_ids']),
return_token_type_ids=False,
truncation=True,
return_tensors=None,
add_special_tokens=False
)
prefix_ids = tokenizer(examples_prefix, add_special_tokens=False)['input_ids']
suffix_ids = tokenizer('\n<response>', add_special_tokens=False)['input_ids']
new_max_length = (len(prefix_ids) + len(suffix_ids) + query_max_len + 8) // 8 * 8 + 8
new_queries = tokenizer.batch_decode(inputs['input_ids'])
for i in range(len(new_queries)):
new_queries[i] = examples_prefix + new_queries[i] + '\n<response>'
return new_max_length, new_queries
task = 'Given a web search query, retrieve relevant passages that answer the query.'
examples = [
{'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
'query': 'what is a virtual interface',
'response': "A virtual interface is a software-defined abstraction that mimics the behavior and characteristics of a physical network interface. It allows multiple logical network connections to share the same physical network interface, enabling efficient utilization of network resources. Virtual interfaces are commonly used in virtualization technologies such as virtual machines and containers to provide network connectivity without requiring dedicated hardware. They facilitate flexible network configurations and help in isolating network traffic for security and management purposes."},
{'instruct': 'Given a web search query, retrieve relevant passages that answer the query.',
'query': 'causes of back pain in female for a week',
'response': "Back pain in females lasting a week can stem from various factors. Common causes include muscle strain due to lifting heavy objects or improper posture, spinal issues like herniated discs or osteoporosis, menstrual cramps causing referred pain, urinary tract infections, or pelvic inflammatory disease. Pregnancy-related changes can also contribute. Stress and lack of physical activity may exacerbate symptoms. Proper diagnosis by a healthcare professional is crucial for effective treatment and management."}
]
examples = [get_detailed_example(e['instruct'], e['query'], e['response']) for e in examples]
examples_prefix = '\n\n'.join(examples) + '\n\n' # if there are no examples, just set examples_prefix = ''
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
query_max_len, doc_max_len = 512, 512
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-en-icl')
model = AutoModel.from_pretrained('BAAI/bge-en-icl')
model.eval()
new_query_max_len, new_queries = get_new_queries(queries, query_max_len, examples_prefix, tokenizer)
query_batch_dict = tokenizer(new_queries, max_length=new_query_max_len, padding=True, truncation=True, return_tensors='pt')
doc_batch_dict = tokenizer(documents, max_length=doc_max_len, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
query_outputs = model(**query_batch_dict)
query_embeddings = last_token_pool(query_outputs.last_hidden_state, query_batch_dict['attention_mask'])
doc_outputs = model(**doc_batch_dict)
doc_embeddings = last_token_pool(doc_outputs.last_hidden_state, doc_batch_dict['attention_mask'])
# normalize embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
scores = (query_embeddings @ doc_embeddings.T) * 100
print(scores.tolist())
```
## Evaluation
`bge-en-icl` achieves **state-of-the-art performance on both the MTEB and AIR-Bench leaderboards!**
- **[MTEB](https://huggingface.co/spaces/mteb/leaderboard)**: (results figure omitted)
- **[BEIR](https://huggingface.co/spaces/mteb/leaderboard)**: (results figure omitted)
- **[AIR-Bench](https://huggingface.co/spaces/AIR-Bench/leaderboard)**:
**QA (en, nDCG@10):**
| AIR-Bench_24.04 | wiki | web | news | healthcare | law | finance | arxiv | msmarco | ALL (8) |
| :--------------------------: | :-------: | :-------: | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| **e5-mistral-7b-instruct** | 61.67 | 44.41 | 48.18 | 56.32 | 19.32 | 54.79 | 44.78 | 59.03 | 48.56 |
| **SFR-Embedding-Mistral** | 63.46 | 51.27 | 52.21 | 58.76 | 23.27 | 56.94 | 47.75 | 58.99 | 51.58 |
| **NV-Embed-v1** | 62.84 | 50.42 | 51.46 | 58.53 | 20.65 | 49.89 | 46.10 | 60.27 | 50.02 |
| **Linq-Embed-Mistral** | 61.04 | 48.41 | 49.44 | **60.18** | 20.34 | 50.04 | 47.56 | 60.50 | 49.69 |
| **gte-Qwen2-7B-instruct** | 63.46 | 51.20 | 54.07 | 54.20 | 22.31 | **58.20** | 40.27 | 58.39 | 50.26 |
| **stella_en_1.5B_v5** | 61.99 | 50.88 | 53.87 | 58.81 | 23.22 | 57.26 | 44.81 | 61.38 | 51.53 |
| **bge-en-icl zero-shot** | 64.61 | 54.40 | 55.11 | 57.25 | 25.10 | 54.81 | 48.46 | 63.71 | 52.93 |
| **bge-en-icl few-shot** | **64.94** | **55.11** | **56.02** | 58.85 | **28.29** | 57.16 | **50.04** | **64.50** | **54.36** |
**Long-Doc (en, Recall@10):**
| AIR-Bench_24.04 | arxiv (4) | book (2) | healthcare (5) | law (4) | ALL (15) |
| :--------------------------: | :-------: | :-------: | :------------: | :-------: | :-------: |
| **text-embedding-3-large** | 74.53 | 73.16 | 65.83 | 64.47 | 68.77 |
| **e5-mistral-7b-instruct** | 72.14 | 72.44 | 68.44 | 62.92 | 68.49 |
| **SFR-Embedding-Mistral** | 72.79 | 72.41 | 67.94 | 64.83 | 69.00 |
| **NV-Embed-v1** | 77.65 | 75.49 | 72.38 | **69.55** | 73.45 |
| **Linq-Embed-Mistral** | 75.46 | 73.81 | 71.58 | 68.58 | 72.11 |
| **gte-Qwen2-7B-instruct** | 63.93 | 68.51 | 65.59 | 65.26 | 65.45 |
| **stella_en_1.5B_v5** | 73.17 | 74.38 | 70.02 | 69.32 | 71.25 |
| **bge-en-icl zero-shot** | 78.30 | 78.21 | 73.65 | 67.09 | 73.75 |
| **bge-en-icl few-shot** | **79.63** | **79.36** | **74.80** | 67.79 | **74.83** |
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Inference / Fine-tune | Description | query instruction for retrieval [1] |
|:------|:--------:|:---------------------:|:------------|:------------------------------------|
| [BAAI/bge-en-icl](https://huggingface.co/BAAI/bge-en-icl) | English | - | An LLM-based embedding model with in-context learning capabilities, which can fully leverage the model's potential based on a few-shot examples | Provide instructions and few-shot examples freely based on the given task. |
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{li2024makingtextembeddersfewshot,
title={Making Text Embedders Few-Shot Learners},
author={Chaofan Li and MingHao Qin and Shitao Xiao and Jianlyu Chen and Kun Luo and Yingxia Shao and Defu Lian and Zheng Liu},
year={2024},
eprint={2409.15700},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2409.15700},
}
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). |
ponik7/Llama3.1-8b-reward | ponik7 | "2025-03-25T13:46:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-25T13:42:28Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: Llama3.1-8b-reward
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for Llama3.1-8b-reward
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ponik7/Llama3.1-8b-reward", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with Reward.
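Because the checkpoint carries a reward (sequence-classification) head rather than a generation head, it can also be used to score candidate responses directly. A minimal scoring sketch, assuming the usual TRL reward-model setup with a single-logit classification head (not an official snippet from the authors):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ponik7/Llama3.1-8b-reward"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "If you had a time machine, would you visit the past or the future?"
response = "I would visit the future to see how today's open problems were eventually solved."

# Score the (prompt, response) pair using the tokenizer's chat template.
messages = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
with torch.no_grad():
    reward = model(input_ids).logits[0, 0].item()
print(f"reward score: {reward:.3f}")
```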
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.1.0+cu118
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chengregy/Llama-3-Taiwan-8B-Instruct | chengregy | "2024-07-03T08:56:46Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-03T08:36:34Z" | ---
license: apache-2.0
---
|
shadowml/TurdusBeagle-7B-gen2 | shadowml | "2024-01-29T22:53:49Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-29T22:47:56Z" | ---
{}
---
---
license: cc-by-nc-4.0
base_model:
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
tags:
- merge
- mergekit
- lazymergekit
---
# shadowml/TurdusBeagle-7B-gen2
shadowml/TurdusBeagle-7B-gen2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: udkai/Turdus
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
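For readers curious what the `t` schedule above controls: each value is the spherical-interpolation weight between the two checkpoints for a given layer/filter (0 keeps NeuralBeagle14-7B, 1 keeps Turdus). A simplified sketch of slerp on a pair of weight tensors, as an illustration only and not mergekit's exact implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between the tensors
    if omega.abs() < eps:  # nearly colinear: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / sin_omega) * a_flat + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```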
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/shadowml/TurdusBeagle-7B-gen2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
vermoney/ea054be6-999d-4e85-8d72-78f77f62a5d5 | vermoney | "2025-01-23T12:59:59Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T12:23:19Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ea054be6-999d-4e85-8d72-78f77f62a5d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe9a49ea71c877be_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe9a49ea71c877be_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vermoney/ea054be6-999d-4e85-8d72-78f77f62a5d5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe9a49ea71c877be_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e1153b5-3240-496e-9c8e-8d694921beb9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e1153b5-3240-496e-9c8e-8d694921beb9
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# ea054be6-999d-4e85-8d72-78f77f62a5d5
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6342
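This repository contains only the LoRA adapter, so inference requires loading the base model and attaching the adapter. A minimal sketch, assuming the saved adapter is loadable with `peft.PeftModel` (the usual format for axolotl LoRA runs):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/codegemma-7b-it"
adapter_id = "vermoney/ea054be6-999d-4e85-8d72-78f77f62a5d5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "Find the sum of the first 100 positive integers."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```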
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.7949 |
| 1.3469 | 0.0024 | 5 | 1.3069 |
| 0.8284 | 0.0047 | 10 | 0.7411 |
| 0.6783 | 0.0071 | 15 | 0.6783 |
| 0.6415 | 0.0095 | 20 | 0.6468 |
| 0.6168 | 0.0118 | 25 | 0.6371 |
| 0.655 | 0.0142 | 30 | 0.6342 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
qualcomm/OpenPose | qualcomm | "2025-03-18T04:22:20Z" | 0 | 8 | pytorch | [
"pytorch",
"android",
"keypoint-detection",
"arxiv:1812.08008",
"license:other",
"region:us"
] | keypoint-detection | "2024-02-25T23:02:12Z" | ---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: keypoint-detection
---

# OpenPose: Optimized for Mobile Deployment
## Human pose estimation
OpenPose is a machine learning model that estimates body and hand pose in an image and returns location and confidence for each of 19 joints.
This model is an implementation of OpenPose found [here](https://github.com/CMU-Perceptual-Computing-Lab/openpose).
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/openpose).
### Model Details
- **Model Type:** Pose estimation
- **Model Stats:**
- Model checkpoint: body_pose_model.pth
- Input resolution: 240x320
- Number of parameters: 52.3M
- Model size: 200 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| OpenPose | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 11.506 ms | 0 - 911 MB | FP16 | NPU | -- |
| OpenPose | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.506 ms | 1 - 3 MB | FP16 | NPU | -- |
| OpenPose | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 11.686 ms | 1 - 299 MB | FP16 | NPU | -- |
| OpenPose | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 8.56 ms | 0 - 134 MB | FP16 | NPU | -- |
| OpenPose | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 8.602 ms | 1 - 19 MB | FP16 | NPU | -- |
| OpenPose | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 8.772 ms | 1 - 28 MB | FP16 | NPU | -- |
| OpenPose | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 8.892 ms | 0 - 25 MB | FP16 | NPU | -- |
| OpenPose | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 7.031 ms | 0 - 18 MB | FP16 | NPU | -- |
| OpenPose | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 8.645 ms | 1 - 21 MB | FP16 | NPU | -- |
| OpenPose | SA7255P ADP | SA7255P | TFLITE | 769.514 ms | 0 - 20 MB | FP16 | NPU | -- |
| OpenPose | SA7255P ADP | SA7255P | QNN | 769.547 ms | 1 - 10 MB | FP16 | NPU | -- |
| OpenPose | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 11.489 ms | 0 - 880 MB | FP16 | NPU | -- |
| OpenPose | SA8255 (Proxy) | SA8255P Proxy | QNN | 11.541 ms | 1 - 10 MB | FP16 | NPU | -- |
| OpenPose | SA8295P ADP | SA8295P | TFLITE | 26.304 ms | 0 - 22 MB | FP16 | NPU | -- |
| OpenPose | SA8295P ADP | SA8295P | QNN | 25.312 ms | 1 - 18 MB | FP16 | NPU | -- |
| OpenPose | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 11.535 ms | 0 - 853 MB | FP16 | NPU | -- |
| OpenPose | SA8650 (Proxy) | SA8650P Proxy | QNN | 11.535 ms | 1 - 3 MB | FP16 | NPU | -- |
| OpenPose | SA8775P ADP | SA8775P | TFLITE | 29.065 ms | 0 - 19 MB | FP16 | NPU | -- |
| OpenPose | SA8775P ADP | SA8775P | QNN | 28.859 ms | 1 - 10 MB | FP16 | NPU | -- |
| OpenPose | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 769.514 ms | 0 - 20 MB | FP16 | NPU | -- |
| OpenPose | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 769.547 ms | 1 - 10 MB | FP16 | NPU | -- |
| OpenPose | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 11.546 ms | 0 - 877 MB | FP16 | NPU | -- |
| OpenPose | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 11.445 ms | 1 - 4 MB | FP16 | NPU | -- |
| OpenPose | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 29.065 ms | 0 - 19 MB | FP16 | NPU | -- |
| OpenPose | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 28.859 ms | 1 - 10 MB | FP16 | NPU | -- |
| OpenPose | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 23.612 ms | 0 - 137 MB | FP16 | NPU | -- |
| OpenPose | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 22.314 ms | 0 - 23 MB | FP16 | NPU | -- |
| OpenPose | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 11.966 ms | 1 - 1 MB | FP16 | NPU | -- |
| OpenPose | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 12.689 ms | 102 - 102 MB | FP16 | NPU | -- |
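As a rough illustration, a TFLite export of this model could be driven with the TensorFlow Lite interpreter as sketched below; the file name, preprocessing, and output layout are assumptions, and the actual asset exported from Qualcomm AI Hub defines the real tensor shapes:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Assumption: a locally downloaded TFLite export of this model.
interpreter = tf.lite.Interpreter(model_path="openpose.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# The model card lists a 240x320 (height x width) input resolution.
image = Image.open("person.jpg").convert("RGB").resize((320, 240))
x = np.asarray(image, dtype=np.float32)[None] / 255.0  # NHWC batch; scaling is an assumption

interpreter.set_tensor(inp["index"], x.astype(inp["dtype"]))
interpreter.invoke()
heatmaps = interpreter.get_tensor(outs[0]["index"])  # per-joint confidence maps
print(heatmaps.shape)
```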
## License
* The license for the original implementation of OpenPose can be found
[here](https://cmu.flintbox.com/technologies/b820c21d-8443-4aa2-a49f-8919d93a8740).
* The license for the compiled assets for on-device deployment can be found [here](https://cmu.flintbox.com/technologies/b820c21d-8443-4aa2-a49f-8919d93a8740)
## References
* [OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1812.08008)
* [Source Model Implementation](https://github.com/CMU-Perceptual-Computing-Lab/openpose)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
|
dheerajnarne/textsummarizer | dheerajnarne | "2025-01-05T18:07:37Z" | 225 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-01-05T18:07:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
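Pending an official example, a minimal sketch (assuming the checkpoint is a standard T5 summarization model, as the repository name and `t5`/text2text tags suggest; the exact prompt format used during training is unknown):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dheerajnarne/textsummarizer")

article = (
    "The James Webb Space Telescope has captured detailed images of distant galaxies, "
    "giving astronomers new insight into how the earliest structures in the universe formed."
)
print(summarizer(article, max_new_tokens=60)[0]["summary_text"])
```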
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
icefog72/Ice0.29-06.11-RP | icefog72 | "2024-11-06T08:44:32Z" | 9 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-06T08:20:50Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.29-06.11-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* D:\FModels\Ice0.27-06.11-RP
* E:\FModels\Ice0.28-06.11-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: D:\FModels\Ice0.27-06.11-RP
layer_range: [0, 32]
- model: E:\FModels\Ice0.28-06.11-RP
layer_range: [0, 32]
merge_method: slerp
base_model: D:\FModels\Ice0.27-06.11-RP
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
|
BrendaScar/Reinforce-CartPole-v1 | BrendaScar | "2023-08-14T03:28:33Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-14T03:28:24Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
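For reference, the policy-gradient update behind **Reinforce** can be sketched as follows; this is a generic illustration of the algorithm covered in Unit 4, not the exact script used to train this checkpoint:

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE loss: -sum_t log pi(a_t|s_t) * G_t, with normalized discounted returns."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # accumulate discounted returns from the end of the episode
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```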
|
Almondpeanuts/xlm-roberta-base-finetuned-panx-de-fr | Almondpeanuts | "2023-04-15T04:10:49Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-04-15T03:47:26Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1645
- F1: 0.8592
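A minimal usage sketch, assuming the checkpoint was saved with its label mapping (the PAN-X tag set: PER/ORG/LOC):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Almondpeanuts/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```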
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1809 | 0.8196 |
| 0.1462 | 2.0 | 1430 | 0.1628 | 0.8484 |
| 0.0936 | 3.0 | 2145 | 0.1645 | 0.8592 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
FranEnguix/HFRLu4_CartPole | FranEnguix | "2023-04-07T13:17:52Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-07T13:17:38Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: HFRLu4_CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
John6666/chromayume-noobai-xl-nai-xl-v30-sdxl | John6666 | "2025-03-21T07:16:40Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"artist",
"styles",
"girls",
"cute",
"anatomy",
"lifelike characters",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-03-21T07:09:54Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- artist
- styles
- girls
- cute
- anatomy
- lifelike characters
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1330192?modelVersionId=1559738).
This model was created by [duongve13112002](https://civitai.com/user/duongve13112002).
|
bowilleatyou/40418638-4ad7-47b5-995e-cbbdd2d3c40e | bowilleatyou | "2025-03-01T17:14:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-01T15:50:38Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alexshengzhili/llava-fte2e-scicap-w-mentions-390K-440MB | alexshengzhili | "2023-06-28T05:44:43Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-26T07:57:45Z" | This model is formulated to 'predict' caption given image and mentioned paragraph
Trains on alexshengzhili/llava-SciCapplus-w-mentions[https://huggingface.co/datasets/alexshengzhili/llava-SciCapplus-w-mentions/tree/main] |
sail-rvc/Damianodavid | sail-rvc | "2023-07-14T07:21:23Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:20:49Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Damianodavid
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:21:23
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
ccsimon123/simon_path-to-save-model | ccsimon123 | "2023-05-16T14:34:10Z" | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-16T11:49:49Z" |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: selfie of a happy sks man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ccsimon123/simon_path-to-save-model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "selfie of a happy sks man" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
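A minimal inference sketch with 🧨 diffusers, using the instance prompt above (standard usage for a DreamBooth fine-tune of Stable Diffusion v1-4; scheduler and resolution are left at their defaults):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ccsimon123/simon_path-to-save-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("selfie of a happy sks man", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_selfie.png")
```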
|
LarryAIDraw/arknightsInes4in1_v11 | LarryAIDraw | "2023-04-11T19:19:24Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-04-11T19:12:30Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/34231/arknights-ines-4in1 |
flowers-team/TA_RIAC_SAC_chimpanzee_s10 | flowers-team | "2022-08-11T11:52:07Z" | 0 | 0 | null | [
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] | reinforcement-learning | "2022-08-11T11:51:44Z" | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_chimpanzee_s10
results:
- metrics:
- type: mean_reward
value: -59.06 +/- 4.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 millions steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC'
'environment': 'parkour'
'training_steps': 20000000
'n_evaluation_tasks': 100
'teacher': 'RIAC'
'morphology': 'climbing_profile_chimpanzee'}
```
|
yaswanthchittepu/pythia-6.9b-tldr-dpo-beta-0.025-alpha-0-step-39936 | yaswanthchittepu | "2024-04-29T02:13:21Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-29T01:58:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
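In the meantime, a minimal sketch (assuming this is a causal LM checkpoint for Reddit TL;DR summarization, as the Pythia/GPT-NeoX architecture and repository name suggest; the prompt layout below is the usual summarize-from-feedback format and is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yaswanthchittepu/pythia-6.9b-tldr-dpo-beta-0.025-alpha-0-step-39936"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "SUBREDDIT: r/AskReddit\n"
    "POST: I keep putting off small chores until they pile up and take a whole weekend to finish.\n"
    "TL;DR:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```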
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF | tensorblock | "2024-11-26T21:16:21Z" | 789 | 2 | transformers | [
"transformers",
"gguf",
"code",
"qwen-coder",
"finetune",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B",
"base_model:quantized:WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-26T20:36:51Z" | ---
license: apache-2.0
base_model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen-coder
- finetune
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B - GGUF
This repo contains GGUF format model files for [WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q2_K.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_0.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_0.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q6_K.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q8_0.gguf](https://huggingface.co/tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF/blob/main/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF --include "WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
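Once a file is downloaded, one way to run it is through the llama-cpp-python bindings; the sketch below uses the Q4_K_M file from the table above, and the context size and sampling settings are illustrative only:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-Q4_K_M.gguf",
    n_ctx=4096,
)

prompt = (
    "<|im_start|>system\nYou are a helpful coding assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
result = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(result["choices"][0]["text"])
```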
|
tilyupo/t5-small-mmlu-qa2a | tilyupo | "2023-08-08T07:06:43Z" | 67 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-07-29T19:07:39Z" | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-mmlu-qa2a
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-mmlu-qa2a
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2046
- Validation Loss: 0.2880
- Epoch: 1
<pre>{'eval_loss': 3.1777148246765137,
'eval_bleu': 8.258012778244474,
'eval_rouge1': 19.05,
'eval_rouge2': 6.45,
'eval_rougeL': 17.73,
'eval_rougeLsum': 17.73,
'eval_exact': 0.0010739490641301012,
'eval_runtime': 155.1163,
'eval_samples_per_second': 84.04,
'eval_steps_per_second': 2.63}</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4413 | 0.2854 | 0 |
| 0.2046 | 0.2880 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
lintw/HealthGPT-L14 | lintw | "2025-03-03T17:32:47Z" | 0 | 2 | null | [
"medical",
"any-to-any",
"en",
"base_model:microsoft/phi-4",
"base_model:finetune:microsoft/phi-4",
"license:mit",
"region:us"
] | any-to-any | "2025-02-17T17:38:39Z" | ---
license: mit
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: any-to-any
tags:
- medical
---
# HealthGPT
Welcome to the model card for **HealthGPT**. This model is developed for unified multi-modal medical tasks.
For detailed information, please visit our [GitHub repository](https://github.com/DCDmllm/HealthGPT). |
IRUCAAI/Opeai_LoRA_70B_AWQ | IRUCAAI | "2025-02-21T02:26:07Z" | 0 | 0 | null | [
"safetensors",
"llama",
"base_model:IRUCAAI/Opeai_LoRA_70B",
"base_model:quantized:IRUCAAI/Opeai_LoRA_70B",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | null | "2025-02-21T02:14:24Z" | ---
license: apache-2.0
base_model:
- IRUCAAI/Opeai_LoRA_70B
--- |
s3nh/WizardLM-WizardCoder-Python-13B-V1.0-GGUF | s3nh | "2023-09-02T20:28:35Z" | 11 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-02T20:10:40Z" |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- **Single-file deployment**: they can be easily distributed and loaded, and do not require any external files for additional information.
- **Extensible**: new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
- **mmap compatibility**: models can be loaded using mmap for fast loading and saving.
- **Easy to use**: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- **Full information**: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
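Until this section is filled in, a minimal sketch with the ctransformers bindings (the model file name below is an assumption — pick any of the quantized GGUF files in this repo — and the Alpaca-style prompt follows the usual WizardCoder format):

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "s3nh/WizardLM-WizardCoder-Python-13B-V1.0-GGUF",
    model_file="WizardCoder-Python-13B-V1.0.Q4_K_M.gguf",  # assumed file name
    model_type="llama",
)

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that checks whether a number is prime.\n\n### Response:"
)
print(llm(prompt, max_new_tokens=256))
```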
# Original model card
|
featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF | featherless-ai-quants | "2024-11-10T19:52:49Z" | 37 | 0 | null | [
"gguf",
"text-generation",
"base_model:FallenMerick/Smart-Lemon-Cookie-7B",
"base_model:quantized:FallenMerick/Smart-Lemon-Cookie-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-08T06:54:40Z" | ---
base_model: FallenMerick/Smart-Lemon-Cookie-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FallenMerick/Smart-Lemon-Cookie-7B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FallenMerick-Smart-Lemon-Cookie-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [FallenMerick-Smart-Lemon-Cookie-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [FallenMerick-Smart-Lemon-Cookie-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [FallenMerick-Smart-Lemon-Cookie-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [FallenMerick-Smart-Lemon-Cookie-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [FallenMerick-Smart-Lemon-Cookie-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [FallenMerick-Smart-Lemon-Cookie-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [FallenMerick-Smart-Lemon-Cookie-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FallenMerick-Smart-Lemon-Cookie-7B-GGUF/blob/main/FallenMerick-Smart-Lemon-Cookie-7B-Q8_0.gguf) | 7339.34 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
fedovtt/7ab75123-68dc-4fbd-b98a-c359e69541ba | fedovtt | "2025-01-21T08:37:03Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-21T08:16:32Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ab75123-68dc-4fbd-b98a-c359e69541ba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c02cb8bb462e8ae6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c02cb8bb462e8ae6_train_data.json
type:
field_input: ''
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/7ab75123-68dc-4fbd-b98a-c359e69541ba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c02cb8bb462e8ae6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 826ad990-41de-4b2e-9513-0c907c77ff8c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 826ad990-41de-4b2e-9513-0c907c77ff8c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7ab75123-68dc-4fbd-b98a-c359e69541ba
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
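Although details are missing, the checkpoint is a LoRA adapter for `unsloth/SmolLM2-1.7B`, so in principle it can be loaded with 🤗 Transformers and PEFT as sketched below (untested; note that the NaN validation loss reported above suggests the adapter may not produce useful outputs).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM2-1.7B"
adapter_id = "fedovtt/7ab75123-68dc-4fbd-b98a-c359e69541ba"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```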
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dt-and-vanilla-ardt/dt-arrl_train_hopper_high-2908_1211-33 | dt-and-vanilla-ardt | "2023-08-29T11:49:28Z" | 33 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-08-29T11:13:04Z" | ---
tags:
- generated_from_trainer
model-index:
- name: dt-arrl_train_hopper_high-2908_1211-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-arrl_train_hopper_high-2908_1211-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dcarpintero/fashion-mnist-base | dcarpintero | "2024-04-04T22:16:34Z" | 0 | 0 | null | [
"image-classification",
"en",
"dataset:fashion_mnist",
"license:apache-2.0",
"region:us"
] | image-classification | "2024-04-04T20:14:05Z" | ---
license: apache-2.0
datasets:
- fashion_mnist
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
---
# Fashion-MNIST Baseline Classifier
## Model Details
- **Model Name:** fashion-mnist-base
- **Framework:** Custom implementation in Python
- **Version:** 0.1
- **License:** Apache-2.0
## Model Description
This is a neural network model developed from the ground up to classify images from the Fashion-MNIST dataset.
The dataset comprises 70,000 grayscale images across 10 categories. Each example is a 28x28 grayscale image,
associated with a label from 10 classes including T-shirts/tops, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots.
## Intended Use
This model is intended for educational purposes and as a baseline for more complex implementations. It can be used by students and AI enthusiasts
to understand the workings of neural networks and their application in image classification.
## Training Data
The model was trained on the Fashion-MNIST dataset, which contains 60,000 training images and 10,000 test images.
Each image is 28x28 pixels, grayscale, associated with one of 10 classes representing different types of clothing and accessories.
### Architecture Details:
- Input layer: 784 neurons (flattened 28x28 image)
- Hidden layer 1: 256 neurons, ReLU activation, Dropout
- Hidden layer 2: 64 neurons, ReLU activation, Dropout
- Output layer: 10 neurons, logits
### Hyperparameters:
- Learning rate: 0.005
- Batch size: 32
- Epochs: 25
The model uses a self-implemented stochastic gradient descent (SGD) optimizer.
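For reference, the architecture described above corresponds roughly to the following PyTorch sketch; the actual model is a custom from-scratch implementation, so the layer objects and the dropout rate shown here are illustrative assumptions.

```python
import torch.nn as nn

class FashionMNISTBase(nn.Module):
    """Illustrative equivalent of the described architecture (784 -> 256 -> 64 -> 10)."""

    def __init__(self, p_drop: float = 0.2):  # dropout rate is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),           # 28x28 grayscale image -> 784 features
            nn.Linear(784, 256),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(64, 10),      # raw logits for the 10 classes
        )

    def forward(self, x):
        return self.net(x)
```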
## Evaluation Results
The model achieved the following performance on the test set:
- Accuracy: 86.7%
- Precision, Recall, and F1-Score:
| Label | Precision | Recall | F1-score |
|-------------|-----------|---------|----------|
| T-shirt/Top | 0.847514 | 0.767 | 0.805249 |
| Trouser | 0.982618 | 0.961 | 0.971689 |
| Pullover | 0.800000 | 0.748 | 0.773127 |
| Dress | 0.861868 | 0.886 | 0.873767 |
| Coat | 0.776278 | 0.805 | 0.790378 |
| Sandal | 0.957958 | 0.957 | 0.957479 |
| Shirt | 0.638587 | 0.705 | 0.670152 |
| Sneaker | 0.935743 | 0.932 | 0.933868 |
| Bag | 0.952381 | 0.960 | 0.956175 |
| Ankle-Boot | 0.944554 | 0.954 | 0.949254 |
## Limitations and Biases
Due to the nature of the training dataset, the model may not capture the full complexity of fashion items in diverse real-world scenarios.
In practice, we found that it is sensitive to background colors and to the proportions of the article.
## How to Use
```python
import torch
import torchvision.transforms as transforms
from PIL import Image
model = torch.load('fashion-mnist-base.pt')
# Images need to be transformed to the `fashion MNIST` dataset format
transform = transforms.Compose(
[
transforms.Resize((28, 28)),
transforms.Grayscale(),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)), # Normalization
transforms.Lambda(lambda x: 1.0 - x), # Invert colors
transforms.Lambda(lambda x: x[0]),
transforms.Lambda(lambda x: x.unsqueeze(0)),
]
)
img = Image.open('fashion/dress.png')
img = transform(img)
model.predictions(img)
```
## Sample Output
```
{'Dress': 84.437744,
'Coat': 7.631796,
'Pullover': 4.2272186,
'Shirt': 1.297625,
'T-shirt/Top': 1.2237197,
'Bag': 0.9053432,
'Trouser/Jeans': 0.27268794,
'Sneaker': 0.0031491981,
'Ankle-Boot': 0.00063403655,
'Sandal': 8.5103806e-05}
``` |
miroslawas/distilbert-base-uncased-finetuned-squad | miroslawas | "2023-04-15T14:17:45Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-03-30T12:06:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2674 | 1.0 | 5533 | 1.2233 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
OpenGVLab/internimage_xl_1k_384 | OpenGVLab | "2023-03-03T02:19:01Z" | 124 | 1 | transformers | [
"transformers",
"pytorch",
"intern_image",
"feature-extraction",
"image-classification",
"custom_code",
"region:us"
] | image-classification | "2023-03-03T02:14:01Z" | ---
pipeline_tag: image-classification
--- |
chcaa/da_dacy_small_trf | chcaa | "2023-05-16T20:40:24Z" | 34 | 2 | spacy | [
"spacy",
"dacy",
"danish",
"token-classification",
"pos tagging",
"morphological analysis",
"lemmatization",
"dependency parsing",
"named entity recognition",
"coreference resolution",
"named entity linking",
"named entity disambiguation",
"da",
"dataset:universal_dependencies",
"dataset:dane",
"dataset:alexandrainst/dacoref",
"license:apache-2.0",
"model-index",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- spacy
- dacy
- danish
- token-classification
- pos tagging
- morphological analysis
- lemmatization
- dependency parsing
- named entity recognition
- coreference resolution
- named entity linking
- named entity disambiguation
language:
- da
license: apache-2.0
model-index:
- name: da_dacy_small_trf-0.2.0
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8306010929
- name: NER Recall
type: recall
value: 0.8172043011
- name: NER F Score
type: f_score
value: 0.8238482385
dataset:
name: DaNE
split: test
type: dane
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9846798742
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9842315369
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9772942762
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9466699925
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8978522787
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8701623698
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9433304272
dataset:
name: UD Danish DDT
split: test
type: universal_dependencies
config: da_ddt
- task:
name: coreference-resolution
type: coreference-resolution
metrics:
- name: LEA
type: f_score
value: 0.4218334451
dataset:
name: DaCoref
type: alexandrainst/dacoref
split: custom
- task:
name: coreference-resolution
type: coreference-resolution
metrics:
- name: Named entity Linking Precision
type: precision
value: 0.8461538462
- name: Named entity Linking Recall
type: recall
value: 0.2222222222
- name: Named entity Linking F Score
type: f_score
value: 0.352
dataset:
name: DaNED
type: named-entity-linking
split: custom
library_name: spacy
datasets:
- universal_dependencies
- dane
- alexandrainst/dacoref
metrics:
- accuracy
---
<a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a>
# DaCy small
DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines.
DaCy's largest pipeline has achieved State-of-the-Art performance on parts-of-speech tagging and dependency
parsing for Danish on the Danish Dependency treebank as well as competitive performance on named entity recognition, named entity disambiguation and coreference resolution.
To read more check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results.
DaCy also contains guides on usage of the package as well as behavioural tests for biases and robustness of Danish NLP pipelines.
| Feature | Description |
| --- | --- |
| **Name** | `da_dacy_small_trf` |
| **Version** | `0.2.0` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` |
| **Components** | `transformer`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner`, `coref`, `span_resolver`, `span_cleaner`, `entity_linker` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Danish DDT v2.11](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://huggingface.co/datasets/dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[DaCoref](https://huggingface.co/datasets/alexandrainst/dacoref) (Buch-Kromann, Matthias)<br />[DaNED](https://danlp-alexandra.readthedocs.io/en/stable/docs/datasets.html#daned) (Barrett, M. J., Lam, H., Wu, M., Lacroix, O., Plank, B., & Søgaard, A.)<br />[jonfd/electra-small-nordic](https://huggingface.co/jonfd/electra-small-nordic) (Jón Friðrik Daðason) |
| **License** | `Apache-2.0` |
| **Author** | [Kenneth Enevoldsen](https://chcaa.io/#/) |
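### Example usage
A minimal usage sketch, assuming the pipeline package has been installed as described in the DaCy repository (it can also be loaded through the `dacy` package itself):

```python
import spacy

# Assumes the packaged pipeline is installed; otherwise see the DaCy repository
# for installation instructions (e.g. loading via `dacy.load(...)`).
nlp = spacy.load("da_dacy_small_trf")

doc = nlp("Jeg bor i København og arbejder på Aarhus Universitet.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```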
### Label Scheme
<details>
<summary>View label scheme (211 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `NumType=Ord\|POS=ADJ`, `POS=CCONJ`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Degree=Pos\|POS=ADV`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Ind`, `POS=ADP`, `POS=ADV\|PartType=Inf`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADP\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `NumType=Card\|POS=NUM`, `Degree=Pos\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PART\|PartType=Inf`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Ind`, `POS=INTJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PRON\|PronType=Dem`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=NUM`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, 
`Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PRON`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=ADV`, `POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADJ`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Imp\|POS=VERB`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `POS=X`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=VERB\|VerbForm=Ger`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `POS=DET\|PronType=Dem`, `Gender=Com\|Number=Sing\|POS=NUM`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `POS=VERB\|Tense=Pres`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NUM`, `Degree=Abs\|POS=ADV`, 
`Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|POS=NOUN`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Part` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.92 |
| `TOKEN_P` | 99.70 |
| `TOKEN_R` | 99.77 |
| `TOKEN_F` | 99.74 |
| `SENTS_P` | 92.96 |
| `SENTS_R` | 95.75 |
| `SENTS_F` | 94.33 |
| `TAG_ACC` | 98.47 |
| `POS_ACC` | 98.42 |
| `MORPH_ACC` | 97.73 |
| `MORPH_MICRO_P` | 98.94 |
| `MORPH_MICRO_R` | 98.33 |
| `MORPH_MICRO_F` | 98.64 |
| `DEP_UAS` | 89.79 |
| `DEP_LAS` | 87.02 |
| `ENTS_P` | 83.06 |
| `ENTS_R` | 81.72 |
| `ENTS_F` | 82.38 |
| `LEMMA_ACC` | 94.67 |
| `COREF_LEA_F1` | 42.18 |
| `COREF_LEA_PRECISION` | 44.79 |
| `COREF_LEA_RECALL` | 39.86 |
| `NEL_SCORE` | 35.20 |
| `NEL_MICRO_P` | 84.62 |
| `NEL_MICRO_R` | 22.22 |
| `NEL_MICRO_F` | 35.20 |
| `NEL_MACRO_P` | 87.68 |
| `NEL_MACRO_R` | 24.76 |
| `NEL_MACRO_F` | 37.52 |
### Training
This model was trained using [spaCy](https://spacy.io) and logged to [Weights & Biases](https://wandb.ai/kenevoldsen/dacy-v0.2.0). You can find all the training logs [here](https://wandb.ai/kenevoldsen/dacy-v0.2.0). |
tanoManzo/dnabert2_ft_BioS73_1kbpHG19_DHSs_H3K27AC_on_shot_one_shot | tanoManzo | "2024-10-29T14:34:25Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"custom_code",
"base_model:vivym/DNABERT-2-117M",
"base_model:finetune:vivym/DNABERT-2-117M",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-29T14:34:16Z" | ---
library_name: transformers
base_model: vivym/DNABERT-2-117M
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: dnabert2_ft_BioS73_1kbpHG19_DHSs_H3K27AC_on_shot_one_shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dnabert2_ft_BioS73_1kbpHG19_DHSs_H3K27AC_on_shot_one_shot
This model is a fine-tuned version of [vivym/DNABERT-2-117M](https://huggingface.co/vivym/DNABERT-2-117M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1495
- F1 Score: 0.6667
- Precision: 0.6667
- Recall: 0.6667
- Accuracy: 0.7037
- Auc: 0.8667
- Prc: 0.8708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:|
| 0.3645 | 18.5185 | 500 | 1.1495 | 0.6667 | 0.6667 | 0.6667 | 0.7037 | 0.8667 | 0.8708 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 2.18.0
- Tokenizers 0.20.0
|
kokovova/581e4301-3b55-4cf8-98dc-61367e311ed7 | kokovova | "2025-01-11T20:34:56Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | "2025-01-11T17:20:56Z" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 581e4301-3b55-4cf8-98dc-61367e311ed7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6df602c711f64927_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6df602c711f64927_train_data.json
type:
field_input: language
field_instruction: conllu
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kokovova/581e4301-3b55-4cf8-98dc-61367e311ed7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/6df602c711f64927_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a0bbb32c-d936-40b4-b03e-366c33ddce63
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a0bbb32c-d936-40b4-b03e-366c33ddce63
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 581e4301-3b55-4cf8-98dc-61367e311ed7
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 8 | nan |
| 0.0 | 0.0002 | 16 | nan |
| 0.0 | 0.0003 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
spow12/ChatWaifu_22B_v2.0_preview | spow12 | "2024-09-25T11:50:37Z" | 10 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:SkunkworksAI/reasoning-0.01",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:finetune:mistralai/Mistral-Small-Instruct-2409",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-23T07:51:45Z" | ---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
base_model:
- mistralai/Mistral-Small-Instruct-2409
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- SkunkworksAI/reasoning-0.01
pipeline_tag: text-generation
model-index:
- name: ChatWaifu_22B_v2.0_preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 67.45
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 45.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.31
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.72
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.2
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
---
# Model Card for Model ID

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)
This model is intended to act like a visual novel character.
## Merge Format
```yaml
models:
- model: mistralai/Mistral-Small-Instruct-2409_SFT
layer_range: [0, 56]
- model: mistralai/Mistral-Small-Instruct-2409
layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_SFT
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# WaifuModel Collections
- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_22B_v2.0)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)
# Unified demo
[WaifuAssistant](https://github.com/yw0nam/WaifuAssistant)
# Update 2.0
- 2024.09.23 Update 22B, Ver 2.0
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** Japanese, English
- **Finetuned from model :** [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409)
Currently, the chatbot has the personas listed below.
character | visual_novel |
--- | --- |
ムラサメ | Senren*Banka |
茉子 | Senren*Banka |
芳乃 | Senren*Banka |
レナ | Senren*Banka |
千咲 | Senren*Banka |
芦花 | Senren*Banka |
愛衣 | Café Stella and the Reaper's Butterflies |
栞那 | Café Stella and the Reaper's Butterflies |
ナツメ | Café Stella and the Reaper's Butterflies |
希 | Café Stella and the Reaper's Butterflies |
涼音 | Café Stella and the Reaper's Butterflies |
あやせ | Riddle Joker |
七海 | Riddle Joker |
羽月 | Riddle Joker |
茉優 | Riddle Joker |
小春 | Riddle Joker |
You can also chat with your own character by providing persona text.
Feel free to test it.
Your feedback will help improve the model.
### Dataset
Riddle Joker (Private)
Café Stella and the Reaper's Butterflies (Private)
Senren*Banka (Private)
roleplay4fun/aesir-v1.1
kalomaze/Opus_Instruct_3k
Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
Aratako/Synthetic-JP-EN-Coding-Dataset-567k (only using 50,000 samples)
Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
SkunkworksAI/reasoning-0.01
### Features
- Fluent chat performance
- Reduced repetition in long conversations (over 20-30 turns)
- Zero-shot character personas from a plain-text character description
- 128k context window
- Retains earlier conversation details even after long-context generation
## Demo
You can try the demo in Google Colab.
Check [Here](https://colab.research.google.com/drive/194_FN28reEPTwS51dwpLLBBwEfeoBjP9?usp=sharing)
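For local use, the sketch below shows one way to run the model with 🤗 Transformers. It relies on the tokenizer's built-in chat template; the persona in the system message, the sampling settings, and the dtype are assumptions, so adjust them to your setup (if the template rejects a system role, fold the persona into the first user message).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/ChatWaifu_22B_v2.0_preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are ムラサメ from Senren*Banka."},  # example persona text
    {"role": "user", "content": "こんにちは、自己紹介してくれる?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```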
## Bias, Risks, and Limitations
This model can generate NSFW content.
## Use & Credit
This model is currently available for non-commercial and research purposes only.
Also, since I am not well versed in licensing details, I hope you use this model responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc{ChatWaifu_22B_v2.0,
author = { YoungWoo Nam },
title = { ChatWaifu_22B_v2.0_preview },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview },
publisher = { Hugging Face }
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_spow12__ChatWaifu_22B_v2.0_preview)
| Metric |Value|
|-------------------|----:|
|Avg. |29.12|
|IFEval (0-Shot) |67.45|
|BBH (3-Shot) |45.49|
|MATH Lvl 5 (4-Shot)|16.31|
|GPQA (0-shot) | 8.72|
|MuSR (0-shot) | 3.53|
|MMLU-PRO (5-shot) |33.20|
|
suhara/nm5-56b-8k-base | suhara | "2025-04-01T07:08:50Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-04-01T06:38:08Z" | |
hongerzh/my_NFT_sale_classifier | hongerzh | "2023-10-11T21:17:47Z" | 200 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-10-11T20:00:33Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_NFT_sale_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_NFT_sale_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6323
- Accuracy: 0.6560
## Model description
More information needed
## Intended uses & limitations
More information needed
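Although details are missing, the model is a standard ViT image classifier, so it can be tried with the 🤗 Transformers pipeline as sketched below; the image path is a placeholder and the label names depend on the model's config.

```python
from transformers import pipeline
from PIL import Image

# Illustrative usage sketch; labels come from the fine-tuned model's config.
classifier = pipeline("image-classification", model="hongerzh/my_NFT_sale_classifier")

image = Image.open("nft_example.png")  # hypothetical local image
print(classifier(image))
```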
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6234 | 1.0 | 112 | 0.6335 | 0.6565 |
| 0.6077 | 2.0 | 225 | 0.6335 | 0.6583 |
| 0.5896 | 2.99 | 336 | 0.6323 | 0.6560 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.14.0
|
cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | "2023-02-10T09:16:28Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Freeway-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-10T09:16:23Z" | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 33.90 +/- 0.30
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
fun-research/Video-LLaVA-Seg | fun-research | "2025-03-14T16:58:25Z" | 12 | 1 | null | [
"safetensors",
"llava_llama",
"arxiv:2412.09754",
"license:apache-2.0",
"region:us"
] | null | "2025-03-04T13:52:42Z" | ---
license: apache-2.0
---
# Video-LLaVA-Seg
[Project](https://ali2500.github.io/vicas-project/) | [Arxiv](https://arxiv.org/abs/2412.09754)
This is the official baseline implementation for the ViCaS dataset.
For details about setting up the model, refer to the [Video-LLaVA-Seg GitHub repo](https://github.com/Ali2500/Video-LLaVA-Seg/tree/main)
For details about downloading and evaluating the dataset benchmark, refer to the [ViCaS GitHub repo](https://github.com/Ali2500/ViCaS/tree/main) |
marialvsantiago/baf5908d-3fbb-46bd-a6ed-7bd5a564b811 | marialvsantiago | "2025-01-15T14:13:16Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | "2025-01-15T10:03:12Z" | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: baf5908d-3fbb-46bd-a6ed-7bd5a564b811
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 467e1bce5068b28e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/467e1bce5068b28e_train_data.json
type:
field_input: action
field_instruction: situation
field_output: rot
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: marialvsantiago/baf5908d-3fbb-46bd-a6ed-7bd5a564b811
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/467e1bce5068b28e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f7ff8c43-c4de-4110-9190-d730517df126
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f7ff8c43-c4de-4110-9190-d730517df126
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# baf5908d-3fbb-46bd-a6ed-7bd5a564b811
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.1135 |
| 2.999 | 0.0001 | 5 | 3.0636 |
| 2.9483 | 0.0002 | 10 | 2.8018 |
| 2.5879 | 0.0004 | 15 | 2.5786 |
| 2.3338 | 0.0005 | 20 | 2.4500 |
| 2.3556 | 0.0006 | 25 | 2.4037 |
| 2.3574 | 0.0007 | 30 | 2.3947 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hmrvc/qingxin | hmrvc | "2023-09-19T04:23:29Z" | 0 | 0 | null | [
"retrieval-based-voice-conversion",
"rvc",
"audio-to-audio",
"license:unknown",
"region:us"
] | audio-to-audio | "2023-09-19T04:23:02Z" | ---
license: unknown
pipeline_tag: audio-to-audio
tags:
- retrieval-based-voice-conversion
- rvc
---
|
zarakiquemparte/zararp-l2-7b | zarakiquemparte | "2023-09-04T19:37:31Z" | 1,471 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-04T16:20:26Z" | ---
license: other
tags:
- llama2
---
# Model Card: ZaraRP L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base merged with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (47%); the result of that merge was then merged with [LimaRP LLama2 7B Lora version of the day 07/23/2023](https://huggingface.co/lemonilia/limarp-llama2) and [PIPPA ShareGPT Subset Variation Two Lora 7b](https://huggingface.co/zarakiquemparte/PIPPA-ShareGPT-Subset-Lora-VT-7b).
The merge of the two models (Nous Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).
The merge of each LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py).
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Stable Beluga, LimaRP, and PIPPA ShareGPT, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
Custom:
```
SYSTEM: Do thing
USER: {prompt}
CHARACTER:
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
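As an illustration, the Alpaca 2 format above can be assembled and run with 🤗 Transformers roughly as follows; the prompt contents and sampling settings are arbitrary examples.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zarakiquemparte/zararp-l2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in the Alpaca 2 style shown above.
prompt = (
    "### Instruction:\n"
    "Write a short in-character greeting for a tavern keeper.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```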
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details. |
RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf | RichardErkhov | "2024-08-12T20:39:38Z" | 27 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-12T18:48:09Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-7B-FocusMix - GGUF
- Model creator: https://huggingface.co/Nelathan/
- Original model: https://huggingface.co/Nelathan/Qwen2-7B-FocusMix/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-7B-FocusMix.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2-7B-FocusMix.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.IQ3_XS.gguf) | IQ3_XS | 3.11GB |
| [Qwen2-7B-FocusMix.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2-7B-FocusMix.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2-7B-FocusMix.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2-7B-FocusMix.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2-7B-FocusMix.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2-7B-FocusMix.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2-7B-FocusMix.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2-7B-FocusMix.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2-7B-FocusMix.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.IQ4_NL.gguf) | IQ4_NL | 4.15GB |
| [Qwen2-7B-FocusMix.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2-7B-FocusMix.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2-7B-FocusMix.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2-7B-FocusMix.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2-7B-FocusMix.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2-7B-FocusMix.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2-7B-FocusMix.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2-7B-FocusMix.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2-7B-FocusMix.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2-7B-FocusMix.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2-7B-FocusMix.Q8_0.gguf](https://huggingface.co/RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf/blob/main/Qwen2-7B-FocusMix.Q8_0.gguf) | Q8_0 | 7.54GB |
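To try one of these files locally, a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the Q4_K_M file is just one example pick from the table above; any other quant name in the table works the same way):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo (name taken from the table above).
model_path = hf_hub_download(
    repo_id="RichardErkhov/Nelathan_-_Qwen2-7B-FocusMix-gguf",
    filename="Qwen2-7B-FocusMix.Q4_K_M.gguf",
)

# Load it with llama-cpp-python and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```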
Original model description:
---
base_model:
- Replete-AI/Replete-LLM-Qwen2-7b
- arcee-ai/Arcee-Spark
- Weyaxi/Einstein-v7-Qwen2-7B
- MaziyarPanahi/calme-2.8-qwen2-7b
library_name: transformers
tags:
- mergekit
- merge
---
<img src="./focusmix.jpg" alt="FocusMix 7B" width="100%" style="display:block"/>
# FocusMix 7B
This is a model created by merging several powerful language models:
* **Base Model:** [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)
* **Merge Stock:**
* [Replete-AI/Replete-LLM-Qwen2-7b](https://huggingface.co/Replete-AI/Replete-LLM-Qwen2-7b)
* [MaziyarPanahi/calme-2.8-qwen2-7b](https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b)
* [arcee-ai/Arcee-Spark](https://huggingface.co/arcee-ai/Arcee-Spark)
* [Weyaxi/Einstein-v7-Qwen2-7B](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B)
FocusMix inherits the strengths of its component models, resulting in a model with:
* **Enhanced Focus:** FocusMix leverages the fine-tuning and instruction-following capabilities of Replete-LLM, Arcee-Spark, and Einstein-v7, leading to improved accuracy and coherence in task-specific responses.
* **Broader Knowledge Base:** The diverse training datasets of the merged models give FocusMix a wider range of knowledge and abilities, making it more versatile across prompts and tasks.
* **Improved Reasoning and Problem-Solving:** The inclusion of Calme-2.8, known for its reasoning and problem-solving abilities, enhances FocusMix's capacity for logical deduction and complex task execution.
**Purpose:** FocusMix aims to provide a powerful and versatile language model that excels at:
* **Task-Specific Instructions:** FocusMix can effectively follow specific instructions and complete tasks with high accuracy.
* **Complex Reasoning:** The model can handle intricate prompts requiring logical deduction and problem-solving.
* **Diverse Knowledge Domains:** FocusMix can engage in conversations and provide information across a wide range of topics.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: Qwen/Qwen2-7B
models:
- model: Replete-AI/Replete-LLM-Qwen2-7b
- model: arcee-ai/Arcee-Spark
- model: Weyaxi/Einstein-v7-Qwen2-7B
- model: MaziyarPanahi/calme-2.8-qwen2-7b
dtype: bfloat16
tokenizer_source: base
```
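A config like the one above is normally handed to mergekit's `mergekit-yaml` entry point; a minimal sketch, assuming mergekit is installed and the YAML above is saved as `focusmix.yaml` (the output directory name is arbitrary):

```python
import subprocess

# Run the merge described by the YAML config above; writes the merged model to ./FocusMix-7B.
subprocess.run(["mergekit-yaml", "focusmix.yaml", "./FocusMix-7B"], check=True)
```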
|
ReadyArt/Forgotten-Safeword-24B-V2.2_EXL2_5.5bpw_H8 | ReadyArt | "2025-02-25T22:13:28Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"license:apache-2.0",
"exl2",
"region:us"
] | null | "2025-02-25T18:28:59Z" | ---
language:
- en
license: apache-2.0
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
## Forgotten-Safeword-24B-V2.2
# **ACADEMIC RESEARCH USE ONLY** (still winking)
**DANGER: NOW WITH 100% MORE KINK NEUTRALITY**
Forgotten-Safeword-24B-V2.2 is the kink-agnostic chaos engine. Combines Mistral's raw power with a meticulously curated balance of depravity. Features quantum superposition of fetishes - your kink exists here, but so do all others equally!
## Quantized Formats
- **EXL2 Collection**:
[Forgotten-Safeword-24B-V2.2 - EXL2](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-exl2-67bceffcd9b58637c453fcd9)
- **GGUF Collection**:
[Forgotten-Safeword-24B-V2.2 - GGUF](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-gguf-67bcf0023537156d75093010)
## Recommended Settings
- **Mistral-V7-Tekken-Extra-Dry**:
[Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)
## Intended Use
**STRICTLY FOR:**
- Academic research into kink diversity metrics
- Generating material that violates the Geneva Conventions (figuratively)
- Generating material that would make Cthulhu file a restraining order
- Testing how many GPUs you can melt with sheer degeneracy
## Training Data
- The internet's collective id (with balanced sampling)
- Curated "Your Kink Is Not My Kink (But It's Here)" dataset
## Ethical Catastrophe
☢️ **EXTINCTION-LEVEL WARNING** ☢️
This model will:
- Generate content requiring OSHA-approved eye protection
- Combine engineering diagrams with kinks unknown to science
- Make Freud look like an amateur
- Void all warranties on your soul
**By using this model, you agree to:**
- Never show outputs to your therapist
- Pay for the exorcist of anyone who reads the training logs
- Blame the alignment tax if anything goes wrong
- Pretend this is "for science"
## Model Authors
- sleepdeprived3 (Chief Equilibrium Officer)
- The voices in your head (Now with 50% less bias) |
sb3/a2c-SeaquestNoFrameskip-v4 | sb3 | "2022-10-11T15:14:58Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SeaquestNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-02T15:46:06Z" | ---
library_name: stable-baselines3
tags:
- SeaquestNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1706.00 +/- 95.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SeaquestNoFrameskip-v4
type: SeaquestNoFrameskip-v4
---
# **A2C** Agent playing **SeaquestNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **SeaquestNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env SeaquestNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env SeaquestNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env SeaquestNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env SeaquestNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
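The checkpoint can also be loaded directly in Python, outside the RL Zoo scripts; a minimal sketch, assuming the `huggingface_sb3` helper and the conventional `a2c-SeaquestNoFrameskip-v4.zip` filename (the filename is an assumption based on RL Zoo naming):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Pull the zipped checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="sb3/a2c-SeaquestNoFrameskip-v4",
    filename="a2c-SeaquestNoFrameskip-v4.zip",  # assumed RL Zoo naming convention
)
model = A2C.load(checkpoint)
print(model.policy)
```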
|
Ericccc1235/2025-02-22-022047-rl_addlengen_con | Ericccc1235 | "2025-02-23T01:47:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-23T01:45:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Noveled/xlm-roberta-base-finetuned-panx-en | Noveled | "2024-06-07T07:59:36Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-07T07:58:37Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4044
- F1: 0.6722
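For quick experimentation, the checkpoint can be driven through the token-classification pipeline; a minimal sketch (the example sentence is arbitrary):

```python
from transformers import pipeline

# Group sub-word predictions into whole entities for readability.
ner = pipeline(
    "token-classification",
    model="Noveled/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Berlin with Angela Merkel."))
```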
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1115 | 1.0 | 50 | 0.6302 | 0.4885 |
| 0.5104 | 2.0 | 100 | 0.4175 | 0.6527 |
| 0.35 | 3.0 | 150 | 0.4044 | 0.6722 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.17.1
- Tokenizers 0.13.3
|
botenius/3369e49d-fa41-4a58-bf01-e5a6e271310b | botenius | "2025-02-02T03:07:43Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-02T02:52:20Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3369e49d-fa41-4a58-bf01-e5a6e271310b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c485b08dfb34ae17_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c485b08dfb34ae17_train_data.json
type:
field_input: authors
field_instruction: abstract
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/3369e49d-fa41-4a58-bf01-e5a6e271310b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c485b08dfb34ae17_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 33333ede-a0bf-4279-9af7-9eb33c9d47f1
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 33333ede-a0bf-4279-9af7-9eb33c9d47f1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 3369e49d-fa41-4a58-bf01-e5a6e271310b
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1845
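Since this repo holds a LoRA adapter rather than full weights, here is a minimal loading sketch, assuming standard PEFT usage on top of the base model named in the config above (the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B", device_map="auto")
model = PeftModel.from_pretrained(base, "botenius/3369e49d-fa41-4a58-bf01-e5a6e271310b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")

# The adapter was trained to produce titles from abstracts (see the dataset fields above).
inputs = tokenizer("Suggest a title for a paper about protein folding.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```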
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5798 | 0.6436 | 200 | 1.1845 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
abhijeetalande12/Fine-tuning-seq2seq-model | abhijeetalande12 | "2025-02-25T13:29:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-25T13:29:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phuoc2k9/distilbert-base-uncased-squad2 | phuoc2k9 | "2025-02-16T15:23:53Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2025-02-16T13:58:37Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
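Judging by the name, this is a SQuAD v2-style extractive QA fine-tune; a minimal usage sketch with the question-answering pipeline (the question and context are arbitrary examples):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="phuoc2k9/distilbert-base-uncased-squad2")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```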
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
MoTHer-VTHR/VTHR-FT-ModelTree_0-Depth_2-Node_MtfXeta7 | MoTHer-VTHR | "2024-05-28T14:28:40Z" | 167 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-28T14:28:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aroot/eng-fra-simcse_longestplus_usblu | aroot | "2023-07-07T03:51:24Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-07T03:32:12Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1299
- Bleu: 32.3457
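Because the base checkpoint is mBART-50 many-to-many, translation calls need explicit language codes; a minimal sketch, assuming the standard `en_XX`/`fr_XX` codes used by mBART-50 (the input sentence is arbitrary):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="aroot/eng-fra-simcse_longestplus_usblu",
    src_lang="en_XX",  # mBART-50 language codes (assumed to apply to this fine-tune)
    tgt_lang="fr_XX",
)
print(translator("The weather is beautiful today.")[0]["translation_text"])
```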
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mradermacher/Deep-Miqu-120B-i1-GGUF | mradermacher | "2024-08-02T10:45:25Z" | 52 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-06-11T03:25:56Z" | ---
base_model: jukofyork/Deep-Miqu-120B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jukofyork/Deep-Miqu-120B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Miqu-120B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
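For the larger quants below that are split into `partXofY` pieces, the parts appear to be plain byte-level splits that just need to be concatenated back into a single `.gguf`; a minimal sketch (file names taken from the table below; the assumption that simple concatenation is sufficient follows the note above):

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Deep-Miqu-120B-i1-GGUF"
parts = [
    "Deep-Miqu-120B.i1-Q4_K_S.gguf.part1of2",
    "Deep-Miqu-120B.i1-Q4_K_S.gguf.part2of2",
]

# Download each part and append it to a single output file.
with open("Deep-Miqu-120B.i1-Q4_K_S.gguf", "wb") as out:
    for name in parts:
        with open(hf_hub_download(repo_id=repo, filename=name), "rb") as piece:
            shutil.copyfileobj(piece, out)
```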
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ1_S.gguf) | i1-IQ1_S | 25.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ1_M.gguf) | i1-IQ1_M | 27.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 35.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ2_S.gguf) | i1-IQ2_S | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ2_M.gguf) | i1-IQ2_M | 40.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q2_K.gguf) | i1-Q2_K | 44.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 46.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 49.3 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 51.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 53.8 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 57.9 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 63.1 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 64.3 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 68.1 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 68.4 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 72.2 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 82.9 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 85.1 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-120B-i1-GGUF/resolve/main/Deep-Miqu-120B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 98.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
QuantFactory/Llama3-ArrowSE-8B-v0.3-GGUF | QuantFactory | "2024-07-28T16:29:51Z" | 16 | 1 | null | [
"gguf",
"ja",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-28T15:51:47Z" |
---
license: llama3
language:
- ja
---

# QuantFactory/Llama3-ArrowSE-8B-v0.3-GGUF
This is quantized version of [DataPilot/Llama3-ArrowSE-8B-v0.3](https://huggingface.co/DataPilot/Llama3-ArrowSE-8B-v0.3) created using llama.cpp
# Original Model Card
## Overview
This model was built on elyza/Llama-3-ELYZA-JP-8B, improved with a chat vector, and specialized for AItuber use. It performs reasonably well in automatic evaluation by gemini-pro (3.81 points on elyza-task100).
Another characteristic of this model is that it combines high performance with high stability, keeping the hallucination rate below 5%.
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
text = "優秀なAIとはなんですか? またあなたの考える優秀なAIに重要なポイントを5つ挙げて下さい。"
model_name = "DataPilot/Llama3-ArrowSE-8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
)
model.eval()
messages = [
{"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
{"role": "user", "content": text},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
token_ids = tokenizer.encode(
prompt, add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=1200,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
output = tokenizer.decode(
output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True
)
print(output)
```
|
colorlessideas/mms-1bl1107-toratan-120 | colorlessideas | "2025-02-10T22:18:49Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-l1107",
"base_model:finetune:facebook/mms-1b-l1107",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-06T05:25:08Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-l1107
tags:
- generated_from_trainer
model-index:
- name: mms-1bl1107-toratan-120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1bl1107-toratan-120
This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7574
- Cer: 0.2224
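For a quick transcription check, the checkpoint can be used through the ASR pipeline; a minimal sketch (the audio path is a placeholder, and MMS models expect 16 kHz mono input):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="colorlessideas/mms-1bl1107-toratan-120",
)
# Replace with a real 16 kHz mono recording in the target language.
print(asr("sample.wav")["text"])
```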
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.3985 | 4.1667 | 400 | 1.4424 | 0.3495 |
| 1.4732 | 8.3333 | 800 | 1.2237 | 0.3151 |
| 1.3002 | 12.5 | 1200 | 1.0744 | 0.3119 |
| 1.1844 | 16.6667 | 1600 | 1.0006 | 0.2750 |
| 1.0983 | 20.8333 | 2000 | 0.8570 | 0.2556 |
| 1.0238 | 25.0 | 2400 | 0.8053 | 0.2391 |
| 0.9573 | 29.1667 | 2800 | 0.7574 | 0.2224 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
vishruthnath/deepseek_ecco_edit_exec_ft_new | vishruthnath | "2024-06-09T08:43:34Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T13:43:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llavallava/qwen2-7b-instruct-trl-sft-lora-gazefollow_xml_out_1280 | llavallava | "2025-03-05T13:11:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-04T23:58:08Z" | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-lora-gazefollow_xml_out_1280
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-lora-gazefollow_xml_out_1280
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="llavallava/qwen2-7b-instruct-trl-sft-lora-gazefollow_xml_out_1280", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/LorebotChat-V1-8B-GGUF | mradermacher | "2024-12-31T09:00:06Z" | 15 | 1 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-31T07:20:40Z" | ---
base_model: LoreHealth/LorebotChat-V1-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LoreHealth/LorebotChat-V1-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LorebotChat-V1-8B-GGUF/resolve/main/LorebotChat-V1-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zoyd/prince-canuma_Llama-3-6B-v0.1-3_5bpw_exl2 | Zoyd | "2024-05-27T04:51:46Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3-6B",
"6B",
"en",
"dataset:prince-canuma/fineweb-CC-MAIN-2024-10-1B-en",
"dataset:HuggingFaceFW/fineweb",
"arxiv:2404.08634",
"arxiv:2212.05055",
"base_model:prince-canuma/Llama-3-6B-v0",
"base_model:quantized:prince-canuma/Llama-3-6B-v0",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-05-27T04:05:55Z" | ---
language:
- en
license: llama3
library_name: transformers
datasets:
- prince-canuma/fineweb-CC-MAIN-2024-10-1B-en
- HuggingFaceFW/fineweb
tags:
- Llama-3-6B
- 6B
base_model:
- prince-canuma/Llama-3-6B-v0
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-2_2bpw_exl2)**</center> | <center>2787 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-2_5bpw_exl2)**</center> | <center>2959 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-3_0bpw_exl2)**</center> | <center>3259 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-3_5bpw_exl2)**</center> | <center>3583 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-3_75bpw_exl2)**</center> | <center>3739 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-4_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-4_25bpw_exl2)**</center> | <center>4051 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-5_0bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-6_0bpw_exl2)**</center> | <center>5247 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-6_5bpw_exl2)**</center> | <center>5548 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-8_0bpw_exl2)**</center> | <center>6436 MB</center> | <center>8</center> |
# Model Summary
<img src="images/llama-3-6B icon.jpeg" width="500" alt="Llama-3-6B"/>
Introducing the world's first Llama-3 base model with 6B parameters. This model is a pretrained version of [prince-canuma/Llama-3-6B-v0](https://huggingface.co/prince-canuma/Llama-3-6B-v0), which was created from Meta-Llama-3-8B using a technique called [downcycling](https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=9hcOol4KHIgWThgt).
The model was continually pretrained on 1 billion tokens of English-only text from fineweb, achieving impressive results on the evaluation set:
- Loss: 2.4942
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Sponsored by:** General Catalyst
- **Model type:** Llama
- **License:** [Llama-3](https://llama.meta.com/llama3/license)
- **Pretrained from model:** prince-canuma/Llama-3-6B-v0
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Blaizzy/Coding-LLMs-from-scratch/tree/main/Llama-3
- **Video:** https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=5Y4cm-6wrMOD1Abr
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use this model to create instruct and chat versions for various use cases, such as a coding assistant, RAG, function calling, and more.
### Limitations
This model inherits some of the base model's limitations and some additional ones from its creation process, such as:
- Limited scope for coding and math: According to benchmarks, this model needs more pretraining/finetuning on code and math data to excel at reasoning tasks.
- Language Limitations: This model was continually pretrained on English-only data. If you are planning to use it for multilingual use cases, I recommend fine-tuning or continued pretraining.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model and tokenizer
model_name = "prince-canuma/Llama-3-6B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize the prompt
inputs = tokenizer(["Who created Python?"], return_tensors="pt")

# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=200)
```
Output:
```shell
<|begin_of_text|>Who created Python? What is Python used for? What is the difference between Python 2 and Python 3? What is the difference between Python and Python 3?
Python is a programming language that was created by Guido van Rossum in 1991. It is a widely used language for web development, data science, and machine learning. Python is also used for creating software applications and games.
Python is a powerful language that is easy to learn and use. It has a large library of built-in functions and packages that make it easy to write code. Python is also a very popular language for web development, with many popular web frameworks such as Django and Flask being written in Python.
Python is also used for data science and machine learning. It has a large library of packages for data analysis, machine learning, and artificial intelligence. Python is also used for creating software applications and games.
Python 2 and Python 3 are two different versions of the Python language. Python 2 was the original version of the
```
## Training Details
### Downcycling
<img src="images/downcycling.jpeg" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 1. Downcycling workflow as also described in [arxiv.org/abs/2404.08634](https://arxiv.org/abs/2404.08634).
A technique that allows you to create new LLMs of diverse sizes from checkpoints of large pretrained models.
You take a reference model (e.g., Llama-3-8B) and copy the weights of 24 of its 32 layers, alongside the embedding and prediction heads.
Then you initialize a smaller target model with 24 layers and load those pretrained weights.
This new model will most likely still produce legible outputs, but for it to perform well you need to continue the pretraining.
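The sketch below shows one way to do this with 🤗 transformers. It is an illustration only: the choice to keep the first 24 decoder layers and the reference model id are assumptions, not necessarily the exact recipe used for Llama-3-6B-v0.
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Assumption: keep the first 24 of the 32 decoder layers; which layers to keep is a design choice.
ref_id = "meta-llama/Meta-Llama-3-8B"
ref_model = AutoModelForCausalLM.from_pretrained(ref_id, torch_dtype=torch.bfloat16)

# Initialize a smaller target model with 24 layers.
small_config = AutoConfig.from_pretrained(ref_id)
small_config.num_hidden_layers = 24
small_model = AutoModelForCausalLM.from_config(small_config)

# Copy every tensor whose name and shape match: the embeddings, decoder layers 0-23,
# the final norm and the LM head transfer; layers 24-31 are simply dropped.
ref_state = ref_model.state_dict()
small_state = small_model.state_dict()
for name, tensor in small_state.items():
    if name in ref_state and ref_state[name].shape == tensor.shape:
        small_state[name] = ref_state[name].clone()
small_model.load_state_dict(small_state)
small_model.save_pretrained("Llama-3-6B-v0-init")
```
The saved checkpoint is then the starting point for the continued pretraining described below.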
<img src="images/Llama-3-8B-vs-6B-v0.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 2. Downcycled model vs Reference model, without continued pretraining.
### Training Data
For continued pretraining, I extracted 1B tokens from [Huggingface's FineWeb CC-Main-2024-10](https://huggingface.co/datasets/HuggingFaceFW/fineweb#breakdown-by-dumpcrawl) slice.
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: prince-canuma/Llama-3-6B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: prince-canuma/fineweb-CC-MAIN-2024-10-1B-en
type: completion
split: train
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./llama-3-6b
save_safetensors: true
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: false
lora_r: 128
lora_alpha: 128
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3-6b
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
save_steps: 4000
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|reserved_special_token_0|>"
```
</details><br>
### Training results
There were 3 distinct experiments. In these experiments, QLoRA was used instead of Full Fine-tuning due to budget constraints.
- v0: This was a test run for 1K steps to check if the model would improve with QLoRA params.
- v1: Here the QLoRA parameters were tweaked (rank and alpha).
- v2: This was the main experiment, run for 2 epochs on 1B tokens from FineWeb.
All details can be found on my Wandb dashboard: https://wandb.ai/prince-canuma/llama-3-6b?nw=nwuserprincecanuma
<img src="images/Training Loss.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 3. Experiment training loss charts on wandb.
Overall metrics:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.1562 | 0.0 | 1 | 7.1806 |
| 2.7339 | 0.25 | 5867 | 2.6266 |
| 2.6905 | 0.5 | 11734 | 2.5872 |
| 2.6134 | 0.75 | 17601 | 2.5549 |
| 2.532 | 1.0 | 23468 | 2.5235 |
| 2.5319 | 1.25 | 29335 | 2.5067 |
| 2.3336 | 1.5 | 35202 | 2.4968 |
| 2.3486 | 1.75 | 41069 | 2.4942 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
### Hardware:
- 4xRTX6000 using JarvisLabs (Sponsored by [General Catalyst](https://www.generalcatalyst.com/) thanks to Viet)
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
#### Benchmarks
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- **Hellaswag**: a dataset for studying grounded commonsense inference.
- **ARC**: a multiple-choice question-answering dataset from science exams from grade 3 to grade 9.
- **MMLU**: a test with 57 tasks to measure a text model's multitask accuracy.
- **TruthfulQA**: a test to measure a model's propensity to reproduce falsehoods commonly found online.
- **Winogrande**: for commonsense reasoning.
- **GSM8k**: diverse grade school math word problems to measure a model's
ability to solve multi-step mathematical reasoning problems.
### Results
<img src="images/comparison_model_scores_histogram.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 4. Performance comparison of Llama-3-8B, Llama-3-6B and Llama-3-6B (w/ continued pretraining).
Pretraining for 2 epochs on 1B tokens had a positive effect across the board. The new base model now performs competitively with its reference model (Llama-3-8B) whilst being 1.3x smaller.
<img src="images/Comparision_of_Model_Scores.png" width="500" alt="All-vs-Llama-3-6B-v0"/>
Fig 5. Performance comparison of Llama-3-8B, Llama-2-13B, Yi-1.5-6B and Llama-3-6B.
Llama-3-6B is competitive with models in its category, as well as models up to 2x larger than itself, across 6 diverse benchmarks.
#### Summary and future directions:
This experiment was a success! Using this technique, I'll be able to build many variants. This is the first of many new base models I intend to create.
Next, I plan to explore different data mixtures and perform full fine-tuning, all of which will contribute to developing other small models as well as larger and more robust models.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
### **BibTeX:**
```bibtex
@misc{prince2024downcycling,
title={Efficient LLM Downcycling: Generating Diverse Model Sizes from Pretrained Giants},
author={Prince Canuma},
year={2024},
}
```
# **Thank You!**
I want to extend my heartfelt thanks to the community for the invaluable expertise and unwavering support.
Additionally, I would like to thank Viet from General Catalyst (GC) for providing me with the much needed compute.
This is my most ambitious project yet, and it wouldn't have been possible without the incredible open-source ML community!
Developers, I am eager to see and hear about the innovative fine-tunes and applications you create.
Users, I am excited to learn about your experiences and use cases.
Thank you for your interest and support!
## References:
```bibtex
@misc{komatsuzaki2023sparse,
title={Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints},
author={Aran Komatsuzaki and Joan Puigcerver and James Lee-Thorp and Carlos Riquelme Ruiz and Basil Mustafa and Joshua Ainslie and Yi Tay and Mostafa Dehghani and Neil Houlsby},
year={2023},
eprint={2212.05055},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
```bibtex
@misc{sanyal2024pretraining,
title={Pre-training Small Base LMs with Fewer Tokens},
author={Sunny Sanyal and Sujay Sanghavi and Alexandros G. Dimakis},
year={2024},
eprint={2404.08634},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Realgon/N_roberta_twitterfin_padding0model | Realgon | "2023-12-25T08:57:09Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-25T08:46:47Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: N_roberta_twitterfin_padding0model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_roberta_twitterfin_padding0model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8899
- Accuracy: 0.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5034 | 1.0 | 597 | 0.3089 | 0.8861 |
| 0.2809 | 2.0 | 1194 | 0.2518 | 0.9062 |
| 0.2265 | 3.0 | 1791 | 0.3194 | 0.8995 |
| 0.1758 | 4.0 | 2388 | 0.5205 | 0.8987 |
| 0.1577 | 5.0 | 2985 | 0.5400 | 0.8932 |
| 0.0711 | 6.0 | 3582 | 0.6033 | 0.8991 |
| 0.0708 | 7.0 | 4179 | 0.6740 | 0.8970 |
| 0.0566 | 8.0 | 4776 | 0.6367 | 0.8987 |
| 0.0541 | 9.0 | 5373 | 0.8236 | 0.8857 |
| 0.0393 | 10.0 | 5970 | 0.7245 | 0.8974 |
| 0.028 | 11.0 | 6567 | 0.7695 | 0.8999 |
| 0.032 | 12.0 | 7164 | 0.7127 | 0.9037 |
| 0.0234 | 13.0 | 7761 | 0.8016 | 0.8961 |
| 0.016 | 14.0 | 8358 | 0.8060 | 0.9083 |
| 0.0109 | 15.0 | 8955 | 0.8555 | 0.8987 |
| 0.0119 | 16.0 | 9552 | 0.8472 | 0.9049 |
| 0.0075 | 17.0 | 10149 | 0.8454 | 0.9062 |
| 0.0038 | 18.0 | 10746 | 0.8613 | 0.9079 |
| 0.0069 | 19.0 | 11343 | 0.8795 | 0.9075 |
| 0.0072 | 20.0 | 11940 | 0.8899 | 0.9054 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ILKT/2024-06-24_22-31-18_epoch_37 | ILKT | "2024-06-28T15:53:14Z" | 144 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-25T08:11:45Z" | ---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_37
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.89264413518887
- type: f1
value: 20.96039443544946
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.53
- type: ap
value: 15.217047299800646
- type: f1
value: 46.63412950332977
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 10.646814161636469
- type: v_measure_std
value: 1.3112465818619319
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.863483523873576
- type: f1
value: 22.009246356278016
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.689129365469746
- type: f1
value: 21.16631611460399
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.94687289845326
- type: f1
value: 30.252448612167136
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.1101819970487
- type: f1
value: 30.944787534058626
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.59860990443094
- type: ap
value: 72.6840569181102
- type: f1
value: 58.87506758735118
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.08926347528554
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.42594241081661
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.67313019390582
- type: f1
value: 48.139866666035374
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 17.044534412955468
- type: f1
value: 14.292410916466405
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
patrickvonplaten/wav2vec2-common_voice-tamil | patrickvonplaten | "2022-02-01T14:17:40Z" | 14 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tamil
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1172
- Wer: 1.0070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.84 | 100 | 4.0148 | 1.0 |
| No log | 1.69 | 200 | 3.1738 | 1.0 |
| No log | 2.54 | 300 | 2.5980 | 1.0236 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
RichardErkhov/ryusangwon_-_qsaf_best-gguf | RichardErkhov | "2025-02-23T06:30:49Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-23T06:07:29Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qsaf_best - GGUF
- Model creator: https://huggingface.co/ryusangwon/
- Original model: https://huggingface.co/ryusangwon/qsaf_best/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qsaf_best.Q2_K.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q2_K.gguf) | Q2_K | 0.54GB |
| [qsaf_best.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [qsaf_best.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [qsaf_best.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [qsaf_best.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [qsaf_best.Q3_K.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q3_K.gguf) | Q3_K | 0.64GB |
| [qsaf_best.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [qsaf_best.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [qsaf_best.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [qsaf_best.Q4_0.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q4_0.gguf) | Q4_0 | 0.72GB |
| [qsaf_best.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [qsaf_best.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [qsaf_best.Q4_K.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q4_K.gguf) | Q4_K | 0.75GB |
| [qsaf_best.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [qsaf_best.Q4_1.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q4_1.gguf) | Q4_1 | 0.77GB |
| [qsaf_best.Q5_0.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q5_0.gguf) | Q5_0 | 0.83GB |
| [qsaf_best.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [qsaf_best.Q5_K.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q5_K.gguf) | Q5_K | 0.85GB |
| [qsaf_best.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [qsaf_best.Q5_1.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q5_1.gguf) | Q5_1 | 0.89GB |
| [qsaf_best.Q6_K.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q6_K.gguf) | Q6_K | 0.95GB |
| [qsaf_best.Q8_0.gguf](https://huggingface.co/RichardErkhov/ryusangwon_-_qsaf_best-gguf/blob/main/qsaf_best.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: qsaf_best
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qsaf_best
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryusangwon/qsaf_best", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
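As a reference, the minimal TRL SFT sketch below shows what such a run looks like in code. The dataset and settings here are placeholders, since the actual training data for qsaf_best is not documented in this card.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset and hyperparameters, for illustration only.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",  # the base model named in this card
    args=SFTConfig(output_dir="qsaf_best-sft"),
    train_dataset=dataset,
)
trainer.train()
```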
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mirlab/AkaLlama-llama3-70b-v0.1-GGUF | mirlab | "2024-05-08T14:37:39Z" | 235 | 14 | transformers | [
"transformers",
"gguf",
"meta",
"llama",
"llama-3",
"akallama",
"text-generation",
"ko",
"en",
"arxiv:2403.07691",
"license:other",
"region:us",
"conversational"
] | text-generation | "2024-05-04T21:41:39Z" | ---
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- ko
- en
tags:
- meta
- llama
- llama-3
- akallama
library_name: transformers
inference: false
---
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/>
</a>
# AKALLAMA
AkaLlama is a series of Korean language models designed for practical usability across a wide range of tasks.
The initial model, AkaLlama-v0.1, is a fine-tuned version of Meta-Llama-3-70b-Instruct. It has been trained on a custom mix of publicly available datasets curated by the MIR Lab.
Our goal is to explore cost-effective ways to adapt high-performing LLMs for specific use cases, such as different languages (e.g., Korean) or domains (e.g., organization-specific chatbots).
For details, check out [our project page](https://yonsei-mir.github.io/AkaLLaMA-page).
### Model Description
This is the model card of a GGUF model that has been pushed on the Hub.
- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
- **Language(s) (NLP):** Korean, English
- **License:** llama3
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Quantized from model:** [mirlab/AkaLlama-llama3-70b-v0.1](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1)
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## How to use
This repo provides gguf weight files for AkaLlama-70B-v0.1.
### Use with llama-cpp-python
See the snippet below for usage with llama-cpp-python:
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=81 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다.
다음 지침을 따르세요:
1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요.
2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다.
3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요.
4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
""", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["<|eot_id|>", "<|end_of_text|>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf", chat_format="llama-3") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": """당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다.
다음 지침을 따르세요:
1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요.
2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다.
3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요.
4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요."""},
{
"role": "user",
"content": "네 이름은 뭐야?."
}
]
)
# 내 이름은 AkaLlama입니다! 나는 언어 모델로, 사용자와 대화하는 데 도움을 주기 위해 만들어졌습니다. 나는 다양한 주제에 대한 질문에 답하고, 새로운 아이디어를 제공하며, 문제를 해결하는 데 도움이 될 수 있습니다. 사용자가 원하는 정보나 도움을 받도록 최선을 다할 것입니다!
```
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [AkaLlama-llama3-70b-v0.1.Q2_K.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q2_K.gguf) | Q2_K | 2 | 26.4 GB| 28.9 GB | smallest, significant quality loss - not recommended for most purposes |
| [AkaLlama-llama3-70b-v0.1.Q3_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 30.9 GB| 33.4 GB | very small, high quality loss |
| [AkaLlama-llama3-70b-v0.1.Q3_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 34.3 GB| 36.8 GB | very small, high quality loss |
| [AkaLlama-llama3-70b-v0.1.Q3_K_L.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 37.1 GB| 39.6 GB | small, substantial quality loss |
| [AkaLlama-llama3-70b-v0.1.Q4_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 40.3 GB| 42.8 GB | small, greater quality loss |
| [AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 42.5 GB| 45.0 GB | medium, balanced quality - recommended |
| [AkaLlama-llama3-70b-v0.1.Q5_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 48.7 GB| 50.2 GB | large, low quality loss - recommended |
| [AkaLlama-llama3-70b-v0.1.Q5_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 50.0 GB| 52.5 GB | large, very low quality loss - recommended |
| AkaLlama-llama3-70b-v0.1.Q6_K.gguf | Q6_K | 6 | 54.4 GB| 59.9 GB | very large, extremely low quality loss |
| AkaLlama-llama3-70b-v0.1.Q8_0.gguf | Q8_0 | 8 | 70.0 GB| 72.5 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
### q6_K
Please download:
* `AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf`
* `AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf`
### q8_0
Please download:
* `AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf`
* `AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf`
To join the files, do the following:
Linux and macOS:
```
cat AkaLlama-llama3-70b-v0.1.Q6_K.*-of-00002.gguf > AkaLlama-llama3-70b-v0.1.Q6_K.gguf && rm AkaLlama-llama3-70b-v0.1.Q6_K.*-of-00002.gguf
cat AkaLlama-llama3-70b-v0.1.Q8_0.*-of-00002.gguf > AkaLlama-llama3-70b-v0.1.Q8_0.gguf && rm AkaLlama-llama3-70b-v0.1.Q8_0.*-of-00002.gguf
```
Windows command line:
```
COPY /B AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf + AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q6_K.gguf
del AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf
COPY /B AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf + AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q8_0.gguf
del AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf
```
## Evaluation
| Model | #Parameter | Qunatized? | LogicKor* |
|:--------------------------------:|:----------:|------------|:---------:|
| AkaLlama-llama3-70b-v0.1-GGUF.Q4 | 70B | 4bit | 6.56 |
| AkaLlama-llama3-70b-v0.1-GGUF.Q8 | 70B | 8bit | 6.34 |
*mean over 3 random seeds
## Training Details
### Training Procedure
We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691).
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes.
Please check out Huggingface's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
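For orientation, the sketch below shows roughly what ORPO training looks like with TRL. The dataset, model loading and hyperparameters are placeholders, not AkaLlama's actual recipe; refer to the alignment handbook for the real pipeline.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Placeholders: distributed/ZeRO setup, dataset and hyperparameters are illustrative only.
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ORPO trains on preference pairs (chosen vs. rejected); this dataset is a stand-in.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = ORPOConfig(output_dir="akallama-orpo", beta=0.1, per_device_train_batch_size=1)
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```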
### Training Data
Detailed descriptions regarding training data will be announced later.
### Examples
<!--<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (8).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
너는 누구이고 어떤 일을 할 수 있어?
Model:
저는 인공지능 언어 모델인 AkaLlama(아카라마)입니다. 저는 다양한 주제와 형식의 텍스트를 생성할 수 있으며, 질문에 답하고, 대화를 진행하며, 이야기를 만들 수 있습니다. 또한 사용자가 지정한 주제나 장르에 따라 창의적인 글쓰기나 편집 작업을 수행할 수도 있습니다.
</pre>
<details>
<summary><b>Math Solving[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (9).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문제를 깊게 생각하면서 풀어봐
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
Model:
아래 문제를 깊게 생각하면서 풀어봅시다:
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
풀이:
1. 연필 5 타를 1 개씩 나눠서 몇 명에게 나누어 줄 수 있는지 먼저 구합니다.
- 각 연필에는 12개의 연필이 포함되어 있습니다. 따라서 총 12 * 5 = 60개입니다.
- 60개를 1명당 1개씩 나누면 60명의 학생들에게 나눠줄 수 있습니다.
2. 지우개 45 개를 1 개씩 나누어 몇 명에게 나누어 줄 수 있는지 구합니다.
- 45 개를 1 명당 1 개씩 나누면 45 명의 학생들에게 나눠줄 수 있습니다.
3. 연필과 지우개의 두 가지 물품을 함께 고려하여 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법을 찾습니다.
- 연필로만 60명, 지우개로만 45명의 학생에게 나눠줄 수 있으므로, 두 종류의 문구를 함께 고려할 때는 이보다 적은 수의 학생에게 나눠주어야 합니다.
- 이 중에서 공통인수(common divisor)를 찾아서 이를 최소 단위로 설정하면 됩니다.
- 60과 45의 최대공약수(GCD)는 15입니다.
4. 연필과 지우개 각각을 GCD(15)로 나누어 몇 개씩 나눠줄 수 있는지 구합니다.
- 연필은 60 / 15 = 4개씩, 총 4 * 12 = 48개의 연필이 필요합니다.
- 지우개는 45 / 15 = 3개씩, 총 3 * 1 = 3개의 지우개가 필요합니다.
5. 최종적으로, 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법은 연필과 지우개를 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 것입니다.
- 이 경우 총 48 + 3 = 51개의 물품을 사용하여 15명의 학생들에게 똑같이 나눠줄 수 있습니다.
결론: 연필과 지우개 각각을 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 방법으로 최대 15명의 학생들에게 똑같이 나누어 줄 수 있습니다.
</pre>
</details>
<details>
<summary><b>Writting[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (13).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문장을 보고 문법적으로 틀린부분을 지적해줘
그는 서슴치 않고 비평한다.
Model:
서슴치"라는 단어는 "서슴"이라는 동사와 "치"라는 접미사를 사용하여 만든 단어입니다. 이 단어는 "서슴"의 의미를 반복적으로 표현하는 데 사용되는 조동사적 용법으로, "서슴"이란 동사의 반복적인 행동을 강조합니다. 따라서 문장에서 "서슴치"라는 단어는 중복적으로 사용되어 문법적으로 틀린 부분입니다.
정답은 다음과 같습니다:
그는 서슴없이 비평한다.
</pre>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (7).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
윤동주 시인의 "별 헤는 밤"을 보고 당시 시대상을 고려해서 해석해봐. 그리고 이를 작가의 독백형식으로 다시 작성해봐.
Model:
윤동주는 1941년 일본 제국주의 치하에서 이 작품을 쓰면서, 강압적인 상황 속에서도 자신의 정체성과 문화에 대한 애착을 표현하고자 했습니다. 그는 하늘에 가득 찬 별들을 보며 그 속에 하나 둘 새겨지는 별들이 있다고 표현했습니다. 이러한 별들은 각각 추억, 사랑, 쓸쓸함, 동경, 시 등 다양한 감정을 담고 있습니다.
윤동주는 이 별들 중 하나에 어머니를 부르며 멀리 떨어져 있는 것을 그리워하며, 자신의 이름자를 언덕 위에 쓰고 흙으로 덮어 버린다고 말합니다. 이러한 행동은 자신의 정체성을 확인하고, 잃어버린 것을 되찾으려는 노력으로 볼 수 있습니다.
윤동주는 또한 자신이 죽고 나면 무덤 위에 파란 잔디가 피어나듯이 자신의 이름자 묻힌 언덕 위에도 풀이 무성할 것이라고 말합니다. 이는 당시의 사회상과 작가의 내면을 반영하고 있으며, 한국인의 정체성을 확인하고 잃어버린 것을 되찾으려는 노력, 그리고 죽음 이후에 자신의 존재를 남겨두고 싶다는 바람을 담고 있습니다.
이 시를 작가의 독백형식으로 다시 작성하면 다음과 같습니다:
"저는 이 별들이 너무나 많은 것 같아요. 하나 둘 새겨지는 별들 중에는 추억도 있고, 사랑도 있고, 쓸쓸함과 동경도 있어요. 그리고 어머니, 당신은 멀리 북간도에 계신 건가요? 저는 당신을 부르고 싶어요.
저의 이름자는 언덕 위에 쓰고 흙으로 덮어 버렸어요. 딴은 밤을 새워 우는 벌레처럼 부끄러운 이름을 슬퍼하는 이유가 있어요. 하지만 겨울이 지나고 나의 별에도 봄이 오면, 무덤 위에 파란 잔디가 피어나듯이 저의 이름자 묻힌 언덕 위에서도 풀이 무성할 거예요.
저는 이 별들을 보고, 자신의 정체성을 확인하고 싶어요. 잃어버린 것을 되찾으려는 노력을 하고 싶어요. 죽음 이후에도 자신의 존재를 남겨두고 싶어요."
</pre>
</details>
<details>
<summary><b>Logical Reasoning[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (15).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
세종대왕의 맥북프로 던짐 사건에 대해 설명해줘
Model:
세종대왕이 맥북프로를 던진 사건은 사실무근한 이야기입니다. 세종대왕은 조선 제4대 국왕으로 1418년부터 1450년까지 재위했습니다. 그는 과학, 문학, 예술 등 다양한 분야에 관심을 갖고 연구하였으며, 특히 한글 창제와 천문학에 큰 공헌을 했습니다.
한편 맥북프로는 애플이 생산하는 노트북 컴퓨터로, 세종대왕 시대의 기술로는 존재할 수 없는 제품입니다. 따라서 세종대왕이 맥북프로를 던진 사건은 사실과 맞지 않는 이야기가 될 것입니다.
</pre>
</details>
<details>
<summary><b>Coding [CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (11).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
PyTorch를 사용해서 간단한 CNN을 구현해봐.
Model:
네, 물론이에요! PyTorch를 사용하여 간단한 CNN을 구현하는 예제 코드를 보여드릴게요.
우선, 필요한 라이브러리를 가져와야 해요. PyTorch의 `torch`와 `nn` 모듈이 필요해요.
```python
import torch
import torch.nn as nn
```
그 다음, 간단한 CNN 모델을 정의할 거예요. 이 경우에는 2개의 컨볼루션 레이어와 3개의 풀링 레이어로 구성된 모델이에요.
```python
class SimpleCNN(nn.Module):
def __init__(self):
super(SimpleCNN, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
def forward(self, x):
x = self.pool(torch.relu(self.conv1(x)))
x = self.pool(torch.relu(self.conv2(x)))
return x
```
이제 모델을 초기화하고, 입력 데이터를 준비해야 해요. 여기서는 임의의 28x28 크기의 이미지 데이터를 생성했어요.
```python
model = SimpleCNN()
input_data = torch.randn(1, 1, 28, 28)
```
마지막으로, 모델에 입력 데이터를 넣어 출력을 확인할 수 있어요.
```python
output = model(input_data)
print(output.shape)
```
이렇게 하면 간단한 CNN 모델을 PyTorch를 사용해 구현하고, 입력 데이터를 통해 출력을 확인할 수 있어요. 물론 실제로는 더 많은 레이어나 파라미터 튜닝 등이 필요하지만, 이 코드는 기본적인 CNN 구조를 보여주는 예제일 뿐이에요. 더 궁금한 점이 있으면 언제든지 물어봐 주세요!
</pre>
</details>
You can find more examples at [our project page](https://yonsei-mir.github.io/AkaLLaMA-page)
## Warning
Although AkaLlama-70B has significant potential, its responses can sometimes be inaccurate, biased, or misaligned, presenting risks if used without additional testing and refinement. Furthermore, the quality of the model's output is greatly influenced by the system prompt and decoding strategy. Changes in these areas could result in less precise outputs. Therefore, we strongly recommend handling our model with considerable caution.
## Citation
If you find our model useful in your work, please cite the model as follows:
@misc{akallama,
author = {Chung, Jiwan and Jeon, Jaehyun and Kim, Saejin and Lim, Seungwon and Oh, Giyeong and Son, Yejin and Yu, Youngjae},
title = {AkaLlama: Yonsei University Large Language Model Project},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1}},
}
## Contact
We look forward for your feedback and welcome collaboration on this exciting project!
### Contributors
- [YoungJaeYu](https://yj-yu.github.io/home/)
- [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
## Special Thanks
- Data Center of the Department of Artificial Intelligence at Yonsei University for the computation resources
## Acknowledgement
- Title image generated by DALL·E 3 |
arham061/auto_complete_distilgpt2_financeAlpacca | arham061 | "2023-07-07T06:58:54Z" | 130 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-06T11:07:26Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: auto_complete_distilgpt2_financeAlpacca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# auto_complete_distilgpt2_financeAlpacca
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ashwin-lrk/abhirup | ashwin-lrk | "2023-09-01T10:16:25Z" | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-09-01T08:19:24Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of abhirup
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
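The card does not include a usage snippet; the hedged 🤗 diffusers sketch below assumes the repo holds LoRA weights in the usual AutoTrain DreamBooth layout (adjust the weight file name if your layout differs).
```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumption: AutoTrain saved the DreamBooth LoRA as pytorch_lora_weights.safetensors in this repo.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("ashwin-lrk/abhirup", weight_name="pytorch_lora_weights.safetensors")

# Use the instance prompt from this card's metadata.
image = pipeline("photo of abhirup").images[0]
image.save("abhirup.png")
```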
|
QuantFactory/Lama-DPOlphin-8B-GGUF | QuantFactory | "2024-09-08T09:12:47Z" | 700 | 2 | transformers | [
"transformers",
"gguf",
"dpo",
"axolotl",
"text-generation",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k-flat",
"base_model:cognitivecomputations/dolphin-2.9.4-llama3.1-8b",
"base_model:quantized:cognitivecomputations/dolphin-2.9.4-llama3.1-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-08T08:35:28Z" |
---
license: apache-2.0
datasets:
- mlabonne/orpo-dpo-mix-40k-flat
language:
- en
base_model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
pipeline_tag: text-generation
tags:
- dpo
- axolotl
library_name: transformers
---

# QuantFactory/Lama-DPOlphin-8B-GGUF
This is quantized version of [CultriX/Lama-DPOlphin-8B](https://huggingface.co/CultriX/Lama-DPOlphin-8B) created using llama.cpp
# Original Model Card
## Axolotl configuration:
```yaml
base_model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
tokenizer:
name_or_path: "https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b/resolve/main/tokenizer.json"
load_in_8bit: false
load_in_4bit: true
strict: false
save_safetensors: true
bnb_4bit_quant_type: "nf4"
bnb_4bit_compute_dtype: "bf16"
bnb_4bit_use_double_quant: true
rl: dpo
chat_template: chatml
datasets:
- path: mlabonne/orpo-dpo-mix-40k-flat
split: train
type: chatml.intel
dataset_prepared_path: /workspace/axolotl/dataset-prepared
val_set_size: 0.0
output_dir: ./out
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4 # Reduced from 8 to 4 due to large VRAM
micro_batch_size: 2 # Increased micro-batch size to 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-6
train_on_inputs: false
group_by_length: false
bf16: true # Use bf16 as it is optimal for A40 GPUs
fp16: false
tf32: true # TF32 is supported by A40 and improves performance
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 0
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero2.json # Enable DeepSpeed with ZeRO Stage 2
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
|
mgoNeo4j/reversed_sorted_by_termcount_sample_complex_train_df_finetuned_Meta-Llama-3.1-8B-Instr | mgoNeo4j | "2025-03-15T03:00:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-15T03:00:21Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mgoNeo4j
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/PDS-1.7B-GGUF | mradermacher | "2025-03-15T10:22:17Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"base_model:Data-Selection/PDS-1.7B",
"base_model:quantized:Data-Selection/PDS-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-15T09:26:28Z" | ---
base_model: Data-Selection/PDS-1.7B
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Data-Selection/PDS-1.7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
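For a quick start, a minimal llama-cpp-python sketch is shown below. It assumes you have downloaded the Q4_K_M file from the table that follows; the prompt and settings are illustrative only.
```python
from llama_cpp import Llama

# Assumes PDS-1.7B.Q4_K_M.gguf has been downloaded from this repo.
llm = Llama(model_path="./PDS-1.7B.Q4_K_M.gguf", n_ctx=2048)

# PDS-1.7B is a base model, so use plain completion prompts rather than a chat template.
out = llm("The redwood forests of California are", max_tokens=64)
print(out["choices"][0]["text"])
```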
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PDS-1.7B-GGUF/resolve/main/PDS-1.7B.f16.gguf) | f16 | 3.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
solidrust/NarumashiRTS-7B-V2-1-AWQ | solidrust | "2024-09-03T08:09:32Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"Roleplay",
"roleplay",
"en",
"license:cc-by-nc-4.0",
"awq",
"region:us"
] | text-generation | "2024-04-18T08:11:20Z" | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- Roleplay
- roleplay
base_model: Alsebay/NarumashiRTS-7B-V2-1
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# Alsebay/NarumashiRTS-7B-V2-1 AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NarumashiRTS-7B-V2-1](https://huggingface.co/Alsebay/NarumashiRTS-7B-V2-1)
## Model Summary
> [!Important]
> Still experimental.
A remake of [version 2](https://huggingface.co/Alsebay/NarumashiRTS-V2) in safetensors format using a safer, more stable saving method; nothing has changed much (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give it terrible performance. (Thanks to the Unsloth community for telling me about this :D )
- **Finetuned with a roughly translated dataset to increase accuracy on the TSF theme, which is not very popular. (lewd dataset)**
- **Finetuned from model:** SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot, SanjiWatsuki :)
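No usage snippet is provided in the original card; the sketch below loads the AWQ checkpoint with 🤗 transformers (requires the autoawq package). The prompt format and generation settings are assumptions, not a documented template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires `pip install autoawq`; the prompt format below is an assumption.
model_id = "solidrust/NarumashiRTS-7B-V2-1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nWrite a short roleplay scene set in a quiet library.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```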
|
ashutoshml/alpaca-bitcoin-tweets-sentiment | ashutoshml | "2023-09-27T05:10:59Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-21T11:34:36Z" | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
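Expressed in code, the settings above correspond roughly to the `BitsAndBytesConfig` below; loading the adapter on top of the base model listed in this card's metadata is sketched as well (an illustration, not the exact training script).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model from this card's metadata
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ashutoshml/alpaca-bitcoin-tweets-sentiment")
```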
### Framework versions
- PEFT 0.5.0.dev0
|
cutelemonlili/Qwen2.5-0.5B-Instruct_MATH_training_response_Qwen2.5-32B-Instruct_common_correct_level | cutelemonlili | "2024-12-29T14:00:49Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-29T14:00:27Z" | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: MATH_training_response_Qwen2.5-32B-Instruct_common_correct_level
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MATH_training_response_Qwen2.5-32B-Instruct_common_correct_level
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the MATH_training_response_Qwen2.5-32B-Instruct_common_correct_level dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1629 | 0.6849 | 200 | 0.2232 |
| 0.0961 | 1.3699 | 400 | 0.2253 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
OpenLLM-France/Claire-7B-0.1 | OpenLLM-France | "2024-12-21T05:17:52Z" | 159 | 47 | transformers | [
"transformers",
"pytorch",
"safetensors",
"falcon",
"text-generation",
"pretrained",
"conversational",
"fr",
"dataset:OpenLLM-France/Claire-Dialogue-French-0.1",
"arxiv:2311.16840",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-09T17:43:49Z" | ---
language:
- fr
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
tags:
- pretrained
- conversational
widget:
- text: |-
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,
example_title: Request for a recipe
group: Dash
- text: >-
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui
?
[Intervenant 2:] Bonjour Camille,
example_title: Request for a recipe
group: Intervenant
- text: |-
[Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Dominique:] Bonjour Camille,
example_title: Request for a recipe
group: FirstName
- text: >-
[Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui
?
[Dominique Petit:] Bonjour Camille,
example_title: Request for a recipe
group: Named
inference:
parameters:
temperature: 1
max_new_tokens: 200
top_k: 10
datasets:
- OpenLLM-France/Claire-Dialogue-French-0.1
---
# Claire-7B-0.1
**Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) with the support of [OpenLLM-France](https://github.com/OpenLLM-France)**
**adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational data.**
Quantized versions in GGUF format can be found in [TheBloke/Claire-7B-0.1-GGUF](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF).
Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.
* [Typical usage](#typical-usage)
* [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
* [Training Data](#training-data)
* [Training Procedure](#training-procedure)
* [Evaluation](#evaluation)
* [License](#license)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)
## Typical usage
```python
import transformers
import torch
model_name = "OpenLLM-France/Claire-7B-0.1"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
load_in_4bit=True # For efficient inference, if supported by the GPU card
)
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
num_return_sequences=1, # Number of variants to generate.
return_full_text= False, # Do not include the prompt in the generated text.
max_new_tokens=200, # Maximum length for the output text.
do_sample=True, top_k=10, temperature=1.0, # Sampling parameters.
pad_token_id=tokenizer.eos_token_id, # Just to avoid a harmless warning.
)
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
print(prompt + " […]" + completion['generated_text'])
```
This will print something like:
```
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale.
- Ah je ne connais pas cette recette.
- C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également.
- Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile.
- Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients.
- Très bien.
```
You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).
If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
### Typical prompts
Claire-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:
A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
```python
prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement"
```
A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
```
A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number:
```python
prompt = """\
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,\
"""
```
A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`
where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
```python
prompt = """\
[Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Mr. Dominique Petit:] Bonjour Camille,\
"""
```
## Training Details
### Training Data
The training dataset is available at [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1)
and described in ["The Claire French Dialogue Dataset" (2023)](https://arxiv.org/abs/2311.16840).
Claire-7B-0.1 was tuned from Falcon-7b on the following data distribution:
| **Data type** | **Words** | **Training Sampling Weight** | **Sources** |
|-------------------------------|------------|------------------------------|-----------------------------------------------------|
| Parliamentary Proceedings | 135M | 35% | Assemblée Nationale |
| Theatre | 16M | 18% | Théâtre Classique, Théâtre Gratuit |
| Interviews | 6.4M | 29% | TCOF, CFPP, CFPB (ORFEO), ACSYNT, PFC, Valibel (ORFEO), ESLO|
| Free Conversations | 2.2M | 10% | CRFP (ORFEO), OFROM (ORFEO), CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO |
| Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) |
| Debates | 402k | <2% | FREDSum, ESLO |
| Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO |
| Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO |
Training data was augmented with the following techniques:
* varying the format used to indicate speech turns (dashes or [XXX:])
* substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.
While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.
### Training Procedure
The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).
Claire-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.
Claire-7B-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.
Hyperparameters were the following:
| **Hyperparameter** | **Value** |
|--------------------|------------|
| Precision | `bfloat16` |
| Optimizer | AdamW |
| Learning rate | 1e-4 |
| Weight decay | 1e-2 |
| Batch size | 132 |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Dropout | 0.05 |
| gradient clipping | 1 |
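For readers reproducing this setup with PEFT, a minimal `LoraConfig` sketch matching the table above could look as follows; the target modules are not listed in the card, so they are left to PEFT's defaults here (an assumption):
```python
from peft import LoraConfig

# Values taken from the hyperparameter table above; anything not listed there is an assumption.
lora_config = LoraConfig(
    r=16,            # LoRA rank
    lora_alpha=32,   # LoRA alpha
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```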
## Evaluation
To evaluate Claire-7B-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models:
* [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
* [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1)
We tested an even mixture of monologue and dialogue-style prompts.
Each of the four generated responses was evaluated along three dimensions:
Interaction, Fluency and Relevance.
Evaluators were also asked to rank the four responses by preference.
Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1 leads to improvement (relative to the base models) along all three evaluation dimensions and that Claire-7B-0.1 outperforms the adapted Mistral counterpart in the Fluency and Relevance categories
(and in the Interaction category if we focus on dialogue-style prompts).
Ranking results also reveal a clear subjective preference for Claire-7B-0.1,
as shown in the following table:
<!--| | **Claire-Falcon** | **Claire-Mistral** | **Falcon** | **Mistral** | -->
| | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** |
|--------------------------------------|----------------------|-----------------------|---------------|---------------------|
| prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** |
| prefer<br /> **Claire-Mistral** ... | _34.8%_ | | **56.2%** | **75.3%** |
| prefer<br /> **Falcon** ... | _36.1%_ | _43.8%_ | | **81.4%** |
| prefer<br /> **Mistral** ... | _16.2%_ | _24.7%_ | _18.6%_ | |
(In this table,
"Claire-Falcon" stands for Claire-7B-0.1,
"Falcon", for [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
"Mistral", for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).)
Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text.
More evaluation details will be provided in a separate publication.
## Variants
Claire-7B-0.1 is finetuned only on French dialogue data, but the following variants are available to evaluate the impact of language mixture on dialogue understanding.
* [Claire-7B-FR-EN-25-75](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-25-75-0.1), with 25/75 French-English data split.
* [Claire-7B-FR-EN-50-50](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-50-50-0.1), with 50/50 French-English data split.
* [Claire-7B-FR-EN-75-25](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-75-25-0.1), with 75/25 French-English data split.
* [Claire-7B-EN-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-EN-0.1), with only English data.
## License
Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
Claire-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1).
## Citation
When using the Claire family of models, please cite the following paper:
Jérôme Louradour, Julie Hunter, Ismaïl Harrando, Guokan Shang, Virgile Rennard & Jean-Pierre Lorré (2024). [Claire: Large Language Models for Spontaneous French Dialogue](https://aclanthology.org/2024.jeptalnrecital-taln.36.pdf). In _Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position_ (pp. 530-548).
```bibtex
@inproceedings{louradour2024claire,
title={Claire: Large Language Models for Spontaneous French Dialogue},
author={Louradour, J{\'e}r{\^o}me and Hunter, Julie and Harrando, Isma{\"\i}l and Shang, Guokan and Rennard, Virgile and Lorr{\'e}, Jean-Pierre},
booktitle={Actes de la 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position},
pages={530--548},
year={2024}
}
```
## Acknowledgements
This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).
Claire-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/).
Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.
## Contact
[email protected] |
Artples/L-MChat-Small | Artples | "2024-07-28T15:50:49Z" | 2,951 | 1 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Weyaxi/Einstein-v4-phi2",
"base_model:merge:Weyaxi/Einstein-v4-phi2",
"base_model:rhysjones/phi-2-orange-v2",
"base_model:merge:rhysjones/phi-2-orange-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-11T14:39:17Z" | ---
license: mit
library_name: transformers
tags:
- mergekit
- merge
base_model:
- rhysjones/phi-2-orange-v2
- Weyaxi/Einstein-v4-phi2
model-index:
- name: L-MChat-Small
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 75.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.94
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-Small
name: Open LLM Leaderboard
---
## L-MChat-Small
<div style="text-align:center;width:250px;height:250px;">
<img src="https://priority.cdn.leunos.com/logo-l-mchat-rs.png" alt="L-MChat-Series-Logo">
</div>
This was a test of mine to see how small merges perform, because there are a lot of merges at 7B and above but not many at around 2B.
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2)
* [Weyaxi/Einstein-v4-phi2](https://huggingface.co/Weyaxi/Einstein-v4-phi2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Weyaxi/Einstein-v4-phi2
layer_range:
- 0
- 32
- model: rhysjones/phi-2-orange-v2
layer_range:
- 0
- 32
merge_method: slerp
base_model: rhysjones/phi-2-orange-v2
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
## Usage
Use it with the ChatML format; you can also use the Inference API for this model.
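A minimal sketch of ChatML-style usage via the tokenizer's chat template, assuming the repo ships one (which `apply_chat_template` picks up automatically); the question is purely illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Artples/L-MChat-Small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a SLERP merge is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```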
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Artples__L-MChat-Small)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.14|
|AI2 Reasoning Challenge (25-Shot)|61.60|
|HellaSwag (10-Shot) |75.90|
|MMLU (5-Shot) |57.41|
|TruthfulQA (0-shot) |49.94|
|Winogrande (5-shot) |74.98|
|GSM8k (5-shot) |58.98|
|
Federic/test-fn | Federic | "2024-03-12T09:12:17Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | "2024-03-12T08:53:46Z" | ---
license: cc-by-sa-4.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: defog/sqlcoder-7b-2
model-index:
- name: test-fn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-fn
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
falan42/tiny-1.1b-chat-psikoloji | falan42 | "2024-03-03T09:53:59Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-03-03T09:53:13Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kostiantynk/9444ebed-d3ee-4916-b19b-b636747bbe80 | kostiantynk | "2025-01-21T20:36:00Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-21T20:18:23Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9444ebed-d3ee-4916-b19b-b636747bbe80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e2ca8fa901ae9dd7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e2ca8fa901ae9dd7_train_data.json
type:
field_instruction: prompt
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/9444ebed-d3ee-4916-b19b-b636747bbe80
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e2ca8fa901ae9dd7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2815d2a-97ba-4bff-aa3c-18e11ee955e6
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2815d2a-97ba-4bff-aa3c-18e11ee955e6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9444ebed-d3ee-4916-b19b-b636747bbe80
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0003 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NLUHOPOE/test-case-1 | NLUHOPOE | "2024-02-23T02:01:13Z" | 50 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-23T00:30:16Z" | ---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Mistral-7B-v0.1. We fine-tuned this model for the data ordering task.
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the SlimOrca dataset.
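A sketch of what that sampling could look like with the `datasets` library; the subset size and seed are hypothetical, since the card does not state them:
```python
from datasets import load_dataset

# Draw a random subset of SlimOrca; 10,000 examples and seed 42 are placeholder values.
slim_orca = load_dataset("Open-Orca/SlimOrca", split="train")
subset = slim_orca.shuffle(seed=42).select(range(10_000))
print(subset)
```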
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd | annabellehuether | "2023-12-04T01:21:11Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-04T00:43:32Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5645
- Accuracy: 0.6474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6659 | 1.0 | 660 | 0.5830 | 0.6370 |
| 0.6391 | 2.0 | 1320 | 0.5665 | 0.6467 |
| 0.6152 | 3.0 | 1980 | 0.5645 | 0.6474 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
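A hedged inference sketch (not part of the generated card); the card does not document the label names, so expect generic `LABEL_x` outputs unless the config maps them:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd",
)
# Example sentence is illustrative only.
print(classifier("The statute does not preempt the state-law claims at issue."))
```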
|
prateeky2806/bert-base-uncased-qqp-lora-epochs-2-lr-0.0005 | prateeky2806 | "2023-09-26T04:50:50Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2023-09-26T03:20:04Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-qqp-lora-epochs-2-lr-0.0005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-qqp-lora-epochs-2-lr-0.0005
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- Accuracy: 0.95
- F1: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2986 | 1.0 | 11368 | 0.1556 | 0.94 | 0.9189 |
| 0.238 | 2.0 | 22736 | 0.1329 | 0.95 | 0.9333 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
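A hedged usage sketch (not part of the generated card): load the GLUE QQP LoRA adapter on top of the `bert-base-uncased` base; the two-label head matches QQP's duplicate/not-duplicate setup, though the label order is an assumption.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "bert-base-uncased"
adapter_id = "prateeky2806/bert-base-uncased-qqp-lora-epochs-2-lr-0.0005"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("How do I learn Python?", "What is the best way to learn Python?", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # second column ≈ probability the question pair is a duplicate (label order assumed)
```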
|
ai-sexting/ai-sexting-apps | ai-sexting | "2025-03-14T20:02:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-14T20:01:59Z" |
# Best AI Sexting Apps
AI sexting, also known as AI sex chat bots, is the use of AI technology in sexual conversations and interactions.
I recommend Candy AI as the best overall sexting app. It is cheap yet brings the most realistic sexting experience with your desired girlfriend.
## 1. Candy.ai
Candy.ai is a remarkable platform making waves in the realm of AI-powered companionship, focusing on delivering tailored interactions through sophisticated AI technology. As we delve into the best AI sexting apps, Candy.ai stands out by offering an engaging and customizable virtual companion experience that sparks connections like never before.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)

**Key Features of Candy.ai**
Customizable AI Companions: Users can create their ideal virtual friend, tailoring appearance, personality, and preferences.
Advanced Natural Language Processing: The platform utilizes NLP to facilitate fluid conversation, making interactions feel natural and engaging.
Image Exchange Capabilities: Users can send and receive AI-generated images, enhancing the interactive experience.
Diverse Relationship Options: Candy.ai supports a variety of characters catering to different orientations, including gay, straight, lesbian, and bisexual characters.
Voice Call Feature: Engage in real-time voice conversations with AI companions, adding depth to the interaction.
Roleplay and Sexting Simulations: The app allows users to engage in playful or explicit interactions, perfect for different moods.
Emotional Intelligence: The emotional responsiveness of AI companions has been improved, making them more relatable and personable.
Freemium Model: Users can enjoy a range of features free of charge while having the option to subscribe for enhanced content and connectivity.
**Ease of Use**
Candy.ai prides itself on user-friendliness. The interface is sleek, vibrant, and intuitive, allowing even those who are not tech-savvy to navigate it effortlessly. Upon signing up, users are greeted with straightforward prompts that guide them through the companion creation and interaction processes. The availability of tutorials and FAQs further aids in familiarizing users with all features, ensuring a smooth experience from the get-go.
**What I Like**
The level of customization available when creating AI companions is impressive.
Conversations feel natural due to the advanced AI technology used for dialogue.
The inclusion of voice calls complements the textual interaction nicely, adding a layer of realism.
Regular updates improving emotional intelligence keep the interactions fresh.
**What I Dislike**
While free features are commendable, some advanced features are locked behind a paywall.
The AI, although advanced, can sometimes misunderstand context in more nuanced conversations.
The dependency on subscriptions might deter users looking for long-term interaction without additional costs.
⏩⏩⏩[**Try Candy AI For Free**](https://candyai.gg/home2?via=matts)
## 2. DreamGf
DreamGF.ai is an innovative platform designed for users looking to explore virtual connections through AI-driven entertainment. It specializes in sexting and virtual companionship, offering a unique experience tailored to individual desires. As technology continues to evolve, DreamGF.ai stands out as one of the best AI sexting apps on the market, allowing users to engage with lifelike virtual partners.
⏩⏩⏩[**Try DreamGF AI For Free**](https://dreamgf.ai/?a=ftoA7ejdv5slJ5)
**Key Features**
Custom Personalities: Users can create AI girlfriends with distinct personalities that cater to different fantasies and emotional needs.
Interactive Chat: Engage in text-based conversations that simulate a real romantic connection, enhancing the overall experience.
Free Trial: A trial option allows potential users to explore the features without upfront costs, making it accessible for everyone.
Customization Options: Tailor your AI girlfriend’s appearance, including facial features, body type, and clothing styles, providing a truly personalized experience.
Daily Claim Bonus Program: Users can earn messages every day, which encourages regular interaction and keeps the engagement exciting.
Referral Program: Invite friends and both users can benefit from exclusive rewards, enhancing the social aspect of the platform.
Secure Experience: All communications on DreamGF.ai are designed to be private and secure, ensuring a safe environment for users.
Multiple Pricing Plans: Various subscription tiers (Bronze, Silver, Gold, Diamond) allow users to choose a plan that best suits their needs and budgets.
**Ease of Use**
Setting up an account on DreamGF.ai is incredibly simple. Users can sign up quickly, and the intuitive interface allows seamless navigation through the app. The customization builder is user-friendly, enabling even those with minimal tech knowledge to create their ideal AI companion effortlessly. The chat feature is designed for easy interaction, ensuring a natural flow of conversation.
**What I Like**
The free trial option is a great way to test the app without financial commitment.
Extensive customization options allow for a unique experience closely aligned with user preferences.
The daily bonus program incentivizes regular use, making conversations more engaging over time.
The emphasis on privacy and security adds a layer of trust, which is crucial for a platform dealing with intimate topics.
**What I Dislike**
While the free trial is beneficial, the limitations on messages can be frustrating for new users.
The pricing may be steep for some users wanting full access to features, making it less accessible.
Customization might feel overwhelming to some users due to the myriad of options available.
As with any AI-driven interaction, there may be limitations in the depth of emotional connection compared to real relationships.
## Frequently Asked Questions (FAQs)
**1. What is AI Sexting?**
AI Sexting refers to the use of artificial intelligence chatbots to engage in sexually suggestive or explicit conversations. These platforms typically allow users to customize their AI companions and partake in personalized, intimate chats that cater to their desires and fantasies.
**2. How Does AI Sexting Work?**
Here's a breakdown of how it works:
AI Technology: AI sexting utilizes complex language understanding and deep learning algorithms to create realistic and responsive interactions. The technology allows the chatbot to interpret user inputs and respond in a natural and engaging manner.
Customization: Users can customize their AI companions based on their preferences, interests, and sexual fantasies, making interactions feel more personal and satisfying.
Real-Time Interaction: AI chatbots provide immediate responses to user prompts, creating a fluid conversation that simulates human interaction. This enhances the excitement of the experience.
Learning and Adaptability: Through machine learning, these AI systems continually improve by understanding user patterns and feedback, allowing for increasingly tailored conversations over time.
Safety and Privacy: Many AI sexting platforms are designed to maintain user confidentiality and safety, providing a secure space for private conversations and minimizing the risks associated with inappropriate exchanges.
**3. What Are The Applications Of AI Sexting?**
AI Sexting has various applications that cater to users’ desires and enhance their experiences. Here are some notable applications:
Personalized Intimacy: Users can engage in tailored conversations with AI chatbots that adapt to their sexual preferences, fantasies, and moods, allowing for a deeply personalized experience.
Improved Emotional Connection: AI sexting can foster a sense of emotional intimacy for individuals who may feel lonely or isolated. Chatbots offer companionship and an outlet to explore desires safely and without judgment.
Educational Tool: AI systems can serve as a platform to learn about sexual techniques and pleasure in a safe environment. Users can explore new experiences without pressure.
Enhancing Communication Skills: Interacting with AI for intimate conversations can help individuals enhance their communication skills, as they practice expressing preferences and desires within a conversational framework.
Creative Exploration: Many platforms allow users to role-play and create fantasy scenarios with AI characters, providing an outlet for creativity and exploration of different sexual identities and experiences.
Privacy and Safety: Users can explore sexual conversations in a discreet and secure environment, making it less intimidating to discuss taboo topics compared to real-life interactions.
|
arashghsz/ipxact-generator | arashghsz | "2025-03-24T15:55:13Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2025-03-24T15:50:26Z" | |
Solshine/Qwen2.5-142B-Doubled72B-Math-Instruct-Mergekit-Merge | Solshine | "2024-09-24T19:29:17Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:merge:Qwen/Qwen2.5-72B-Instruct",
"base_model:Qwen/Qwen2.5-Math-72B",
"base_model:merge:Qwen/Qwen2.5-Math-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-24T00:52:59Z" | ---
base_model:
- Qwen/Qwen2.5-Math-72B
- Qwen/Qwen2.5-72B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: other
---
## Qwen2.5-142B-Doubled72B-Math-Instruct (Mergekit-Merge) by Solshine (Caleb DeLeeuw)

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# License
Hippocratic License 3.0 + Ecocide module, + Extractive Industries module, + Copyleft
[](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html)
https://firstdonoharm.dev/version/3/0/cl-eco-extr.txt
## Merge Details
### Merge Method
This model was merged using the passthrough merge method. Every layer is duplicated in order, alternating between Qwen/Qwen2.5-Math-72B and Qwen/Qwen2.5-72B-Instruct contributing each layer, with the MLP layers and the final two output layers taken only from the Instruct model, creating 142B parameters. No additional fine-tuning has been done on this merged model.
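The slice list below is long but fully regular; as an illustration (not part of the original card), it can be regenerated programmatically like this:
```python
import yaml

# Rebuild the alternating passthrough slices shown in the YAML configuration below.
slices = []
for i in range(78):  # layers 0..77 are taken from both models, Math first
    for model in ("Qwen/Qwen2.5-Math-72B", "Qwen/Qwen2.5-72B-Instruct"):
        slices.append({"sources": [{"model": model, "layer_range": [i, i + 1]}]})
# The final layers come from the Instruct model only.
slices.append({"sources": [{"model": "Qwen/Qwen2.5-72B-Instruct", "layer_range": [77, 80]}]})

config = {"slices": slices, "merge_method": "passthrough", "dtype": "float16"}
print(yaml.safe_dump(config, sort_keys=False))
```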
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B)
* [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [0, 1]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [0, 1]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [1, 2]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [1, 2]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [2, 3]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [2, 3]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [3, 4]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [3, 4]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [4, 5]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [4, 5]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [5, 6]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [5, 6]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [6, 7]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [6, 7]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [7, 8]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [7, 8]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [8, 9]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [8, 9]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [9, 10]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [9, 10]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [10, 11]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [10, 11]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [11, 12]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [11, 12]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [12, 13]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [12, 13]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [13, 14]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [13, 14]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [14, 15]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [14, 15]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [15, 16]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [15, 16]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [16, 17]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [16, 17]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [17, 18]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [17, 18]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [18, 19]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [18, 19]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [19, 20]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [19, 20]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [20, 21]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [20, 21]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [21, 22]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [21, 22]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [22, 23]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [22, 23]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [23, 24]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [23, 24]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [24, 25]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [24, 25]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [25, 26]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [25, 26]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [26, 27]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [26, 27]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [27, 28]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [27, 28]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [28, 29]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [28, 29]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [29, 30]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [29, 30]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [30, 31]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [30, 31]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [31, 32]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [31, 32]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [32, 33]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [32, 33]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [33, 34]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [33, 34]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [34, 35]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [34, 35]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [35, 36]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [35, 36]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [36, 37]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [36, 37]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [37, 38]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [37, 38]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [38, 39]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [38, 39]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [39, 40]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [39, 40]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [40, 41]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [40, 41]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [41, 42]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [41, 42]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [42, 43]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [42, 43]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [43, 44]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [43, 44]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [44, 45]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [44, 45]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [45, 46]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [45, 46]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [46, 47]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [46, 47]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [47, 48]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [47, 48]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [48, 49]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [48, 49]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [49, 50]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [49, 50]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [50, 51]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [50, 51]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [51, 52]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [51, 52]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [52, 53]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [52, 53]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [53, 54]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [53, 54]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [54, 55]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [54, 55]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [55, 56]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [55, 56]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [56, 57]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [56, 57]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [57, 58]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [57, 58]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [58, 59]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [58, 59]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [59, 60]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [59, 60]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [60, 61]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [60, 61]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [61, 62]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [61, 62]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [62, 63]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [62, 63]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [63, 64]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [63, 64]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [64, 65]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [64, 65]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [65, 66]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [65, 66]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [66, 67]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [66, 67]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [67, 68]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [67, 68]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [68, 69]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [68, 69]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [69, 70]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [69, 70]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [70, 71]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [70, 71]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [71, 72]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [71, 72]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [72, 73]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [72, 73]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [73, 74]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [73, 74]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [74, 75]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [74, 75]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [75, 76]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [75, 76]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [76, 77]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [76, 77]
- sources:
- model: Qwen/Qwen2.5-Math-72B
layer_range: [77, 78]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [77, 78]
- sources:
- model: Qwen/Qwen2.5-72B-Instruct
layer_range: [77, 80]
merge_method: passthrough
dtype: float16
``` |
Niggendar/chacolomegamixxl_v11a | Niggendar | "2024-06-12T19:53:56Z" | 183 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-12T19:44:24Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso13/3b558ad3-fa2e-4574-a683-17fa037fe0c5 | lesso13 | "2025-02-12T07:27:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/83847950-33bc-4506-ba82-48653a06540a",
"base_model:adapter:rayonlabs/83847950-33bc-4506-ba82-48653a06540a",
"region:us"
] | null | "2025-02-12T04:46:40Z" | ---
library_name: peft
base_model: rayonlabs/83847950-33bc-4506-ba82-48653a06540a
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b558ad3-fa2e-4574-a683-17fa037fe0c5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 3b558ad3-fa2e-4574-a683-17fa037fe0c5
This model is a fine-tuned version of [rayonlabs/83847950-33bc-4506-ba82-48653a06540a](https://huggingface.co/rayonlabs/83847950-33bc-4506-ba82-48653a06540a) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000213
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.0928 |
| 0.1875 | 0.0020 | 50 | 0.3449 |
| 0.2516 | 0.0040 | 100 | 0.2249 |
| 0.194 | 0.0060 | 150 | 0.2654 |
| 0.2032 | 0.0080 | 200 | 0.1374 |
| 0.1648 | 0.0100 | 250 | 0.1442 |
| 0.2809 | 0.0120 | 300 | 0.1345 |
| 0.0839 | 0.0140 | 350 | 0.1431 |
| 0.1673 | 0.0160 | 400 | 0.1189 |
| 0.1259 | 0.0180 | 450 | 0.1171 |
| 0.2018 | 0.0199 | 500 | 0.1156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mikeyandfriends/PixelWave_FLUX.1-schnell_03 | mikeyandfriends | "2024-11-02T22:43:52Z" | 386 | 7 | null | [
"gguf",
"license:other",
"region:us"
] | null | "2024-10-28T08:46:02Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# PixelWave Flux.1-schnell 03
Surprise! A schnell version of the PixelWave Flux 03 model.
You can expect some differences between the dev and schnell versions.
A general-purpose FLUX model, great for art and photo styles.
**Standard Use:**
- Euler, Simple
- DPM++ 2M, SGM Uniform
- At least 4 steps to remove noise, but will probably need 6 to 8 steps to reduce errors with limbs, etc.
As training differences were extracted from dev and applied to schnell, this could be interpreted as a derivative of the dev model and, as such, inherits the license from dev.
 |
aadishhug/distilbert-tweet-analysis | aadishhug | "2023-03-21T22:26:00Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-15T05:32:10Z" | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-tweet-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-tweet-analysis
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3458
- F1: 0.9133
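A minimal inference sketch using the 🤗 `pipeline` API (the label names returned depend on how the classifier head was configured at training time, which is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aadishhug/distilbert-tweet-analysis")

print(classifier("I love the new update, it works great!"))
# Returns a list of {"label": ..., "score": ...} dicts; the label ids map to
# classes defined at training time and are not documented here.
```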
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 44 | 0.3858 | 0.9067 |
| No log | 2.0 | 88 | 0.3458 | 0.9133 |
| No log | 3.0 | 132 | 0.2841 | 0.9133 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
chrysoliteop/NoName | chrysoliteop | "2024-11-15T04:07:13Z" | 10 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"lora",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] | text-to-image | "2024-11-11T07:25:09Z" | ---
language: en
pipeline_tag: text-to-image
tags:
- stable-diffusion
- diffusers
- lora
inference: true
base_model:
- stabilityai/stable-diffusion-3.5-large
trigger: Sims
instance_prompt: Sims
---
## Trigger words
You should use `Sims` to trigger the image generation.
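A minimal loading sketch with 🧨 diffusers, assuming the LoRA weights in this repo follow the standard file naming (pass `weight_name=...` to `load_lora_weights` if they do not); the prompt, dtype, and step count are illustrative:
```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("chrysoliteop/NoName")  # LoRA trained on top of SD 3.5 Large

# "Sims" is the trigger word for this LoRA
image = pipe("Sims, a cozy suburban house at golden hour", num_inference_steps=28).images[0]
image.save("sims.png")
```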
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md). |
NoteDance/Whisper-Keras | NoteDance | "2024-06-13T07:31:09Z" | 0 | 0 | keras | [
"keras",
"whisper",
"audio",
"hf-asr-leaderboard",
"automatic-speech-recognition",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | "2024-03-06T13:21:43Z" | ---
license: apache-2.0
library_name: keras
pipeline_tag: automatic-speech-recognition
tags:
- whisper
- audio
- hf-asr-leaderboard
---
The tutorial can be found at https://github.com/NoteDance/models. |
animaRegem/llama-3-lora-01-malayalam-tokenizer | animaRegem | "2024-05-03T15:07:48Z" | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-03T15:07:47Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cleanrl/BattleZone-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3 | cleanrl | "2023-03-09T22:33:31Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BattleZone-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T22:33:30Z" | ---
tags:
- BattleZone-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BattleZone-v5
type: BattleZone-v5
metrics:
- type: mean_reward
value: 34300.00 +/- 8450.44
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BattleZone-v5**
This is a trained model of a PPO agent playing BattleZone-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id BattleZone-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BattleZone-v5',
'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
wuqiong1/PA-RAG_Llama-2-13b-chat-hf | wuqiong1 | "2025-03-29T10:35:00Z" | 0 | 0 | null | [
"safetensors",
"llama",
"dataset:wuqiong1/PA-RAG_training_data",
"arxiv:2412.14510",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | "2025-03-29T10:07:33Z" | ---
datasets:
- wuqiong1/PA-RAG_training_data
base_model:
- meta-llama/Llama-2-13b-chat-hf
---
# PA-RAG: RAG Alignment via Multi-Perspective Preference Optimization
🎉🎉🎉 PA-RAG is accepted by NAACL 2025!
Paper Link: https://arxiv.org/pdf/2412.14510
Github Link: https://github.com/wujwyi/PA-RAG
This is a model fine-tuned on [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) using PA-RAG.
The training data for PA-RAG is available at [Huggingface link](https://huggingface.co/datasets/wuqiong1/PA-RAG_training_data) or [Google Drive link](https://drive.google.com/file/d/1agP7fi1iX-3qFK7XFBvRu6rC5X_-M8Iy/view?usp=drive_link).
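A minimal generation sketch with 🤗 Transformers; the retrieved-document prompt format below is illustrative only — see the paper and GitHub repo for the exact instruction format used during training:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wuqiong1/PA-RAG_Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative RAG-style input: retrieved passage(s) followed by the question.
prompt = (
    "Document [1]: The Eiffel Tower was completed in 1889 for the World's Fair.\n\n"
    "Question: When was the Eiffel Tower completed?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```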
|
John6666/knk-cieloblend-ponyv6-v1-sdxl | John6666 | "2024-08-04T06:01:58Z" | 650 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cute",
"nai",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-04T05:53:39Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cute
- nai
- pony
---
Original model is [here](https://civitai.com/models/621361/knk-cieloblend-ponyv6?modelVersionId=694629).
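A minimal text-to-image sketch with 🧨 diffusers (the Pony-style prompt tags and sampler settings are illustrative, not taken from the original model page):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/knk-cieloblend-ponyv6-v1-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "score_9, score_8_up, 1girl, smile, outdoors, looking at viewer",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```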
|
s3nh/Llama-2-7b-german-assistant-v2-GGML | s3nh | "2023-07-25T08:12:58Z" | 0 | 1 | null | [
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | "2023-07-24T20:00:48Z" | ---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/flozi00/Llama-2-7b-german-assistant-v2).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir / ggml_file: local path to this repo and the GGML file name inside it
llm = AutoModelForCausalLM.from_pretrained(
    output_dir, model_file=ggml_file, gpu_layers=32, model_type="llama"
)

manual_input: str = "Tell me about your last dream, please."

llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7)
```
# Original model card
This model is a finetuned version for German instructions and conversations in the style of Open Assistant tokens: "<|prompter|>", "<|endoftext|>", "<|assistant|>".
The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks.
The model architecture is based on Llama-v2 with 7B parameters, trained on 100% renewable-energy-powered hardware.
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00) |
Shero448/maniac-ilu | Shero448 | "2025-03-31T00:07:13Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/wai-nsfw-illustrious-v110-sdxl",
"base_model:adapter:John6666/wai-nsfw-illustrious-v110-sdxl",
"region:us"
] | text-to-image | "2025-03-31T00:06:30Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0{\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0C\0h\0e\0c\0k\0p\0o\0i\0n\0t\0L\0o\0a\0d\0e\0r\0S\0i\0m\0p\0l\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0c\0k\0p\0t\0_\0n\0a\0m\0e\0\"\0:\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\0x\0l\0:\0c\0h\0e\0c\0k\0p\0o\0i\0n\0t\0:\0c\0i\0v\0i\0t\0a\0i\0:\08\02\07\01\08\04\0@\01\01\08\03\07\06\05\0\"\0}\0}\0,\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0-\01\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0L\0o\0r\0a\0L\0o\0a\0d\0e\0r\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0l\0o\0r\0a\0_\0n\0a\0m\0e\0\"\0:\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\0x\0l\0:\0l\0o\0r\0a\0:\0c\0i\0v\0i\0t\0a\0i\0:\09\09\00\03\01\01\0@\01\02\04\08\02\08\08\0\"\0,\0\"\0s\0t\0r\0e\0n\0g\0t\0h\0_\0m\0o\0d\0e\0l\0\"\0:\00\0.\07\05\0,\0\"\0s\0t\0r\0e\0n\0g\0t\0h\0_\0c\0l\0i\0p\0\"\0:\01\0,\0\"\0m\0o\0d\0e\0l\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0,\00\0]\0,\0\"\0c\0l\0i\0p\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0,\01\0]\0}\0}\0,\0\"\06\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0s\0m\0Z\0 \0C\0L\0I\0P\0T\0e\0x\0t\0E\0n\0c\0o\0d\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0t\0e\0x\0t\0\"\0:\0\"\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\02\02\02\02\05\06\0@\02\05\00\07\00\08\0,\0 \0M\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0m\0i\0l\0f\0,\0 \0h\0e\0x\0m\0a\0n\0i\0a\0c\0(\0s\0p\0a\0c\0e\0z\0i\0n\0 \0s\0t\0y\0l\0e\0-\0i\0x\0l\0)\0,\0 \0d\0a\0r\0k\0 \0b\0l\0u\0e\0 \0h\0a\0i\0r\0,\0 \0p\0u\0r\0p\0l\0e\0 \0e\0y\0e\0s\0,\0 \0s\0p\0i\0r\0a\0l\0 \0e\0y\0e\0s\0,\0 \0@\0_\0@\0,\0 \0w\0h\0i\0t\0e\0 \0p\0a\0l\0e\0 \0s\0k\0i\0n\0,\0 \0v\0e\0r\0y\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0a\0h\0o\0g\0e\0,\0 \0h\0a\0i\0r\0 \0b\0e\0t\0w\0e\0e\0n\0 \0e\0y\0e\0s\0,\0 \0b\0a\0n\0g\0s\0,\0 \0p\0u\0r\0p\0l\0e\0 \0h\0a\0i\0r\0b\0a\0n\0d\0,\0 \0m\0e\0s\0s\0y\0 \0h\0a\0i\0r\0,\0 \0w\0a\0v\0y\0 \0h\0a\0i\0r\0,\0 \0(\0g\0i\0g\0a\0n\0t\0i\0c\0 \0b\0r\0e\0a\0s\0t\0s\0:\01\0.\02\0)\0,\0 \0(\0w\0i\0d\0e\0 \0h\0i\0p\0s\0)\0,\0 \0c\0o\0v\0e\0r\0e\0d\0 \0n\0i\0p\0p\0l\0e\0s\0,\0 \0t\0h\0i\0c\0k\0 \0t\0h\0i\0g\0h\0s\0,\0 \0p\0l\0u\0m\0p\0,\0 \0p\0u\0r\0p\0l\0e\0 \0d\0r\0e\0s\0s\0 \0w\0i\0t\0h\0 \0s\0w\0e\0a\0t\0e\0r\0 \0i\0n\0s\0i\0d\0e\0,\0 \0t\0u\0r\0t\0l\0e\0n\0e\0c\0k\0 \0b\0l\0a\0c\0k\0 \0s\0w\0e\0a\0t\0e\0r\0,\0 \0l\0o\0n\0g\0 \0d\0r\0e\0s\0s\0,\0 \0r\0i\0b\0b\0e\0d\0 \0s\0w\0e\0a\0t\0e\0r\0,\0 \0w\0e\0b\0 \0p\0r\0i\0n\0t\0,\0 \0l\0o\0n\0g\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0f\0o\0r\0e\0s\0t\0,\0 \0n\0i\0g\0h\0t\0 \0t\0i\0m\0e\0,\0 \0s\0m\0i\0l\0e\0,\0 \0o\0p\0e\0n\0 \0m\0o\0u\0t\0h\0,\0 \0c\0o\0w\0b\0o\0y\0 \0s\0h\0o\0t\0,\0 \0w\0a\0v\0y\0 \0m\0o\0u\0t\0h\0,\0 \0b\0l\0u\0s\0h\0,\0 \0l\0o\0o\0k\0i\0n\0g\0 \0a\0t\0 
\0v\0i\0e\0w\0e\0r\0\"\0,\0\"\0p\0a\0r\0s\0e\0r\0\"\0:\0\"\0A\01\01\01\01\0\"\0,\0\"\0t\0e\0x\0t\0_\0g\0\"\0:\0\"\0\"\0,\0\"\0t\0e\0x\0t\0_\0l\0\"\0:\0\"\0\"\0,\0\"\0a\0s\0c\0o\0r\0e\0\"\0:\02\0.\05\0,\0\"\0w\0i\0d\0t\0h\0\"\0:\00\0,\0\"\0h\0e\0i\0g\0h\0t\0\"\0:\00\0,\0\"\0c\0r\0o\0p\0_\0w\0\"\0:\00\0,\0\"\0c\0r\0o\0p\0_\0h\0\"\0:\00\0,\0\"\0t\0a\0r\0g\0e\0t\0_\0w\0i\0d\0t\0h\0\"\0:\00\0,\0\"\0t\0a\0r\0g\0e\0t\0_\0h\0e\0i\0g\0h\0t\0\"\0:\00\0,\0\"\0s\0m\0Z\0_\0s\0t\0e\0p\0s\0\"\0:\01\0,\0\"\0m\0e\0a\0n\0_\0n\0o\0r\0m\0a\0l\0i\0z\0a\0t\0i\0o\0n\0\"\0:\0t\0r\0u\0e\0,\0\"\0m\0u\0l\0t\0i\0_\0c\0o\0n\0d\0i\0t\0i\0o\0n\0i\0n\0g\0\"\0:\0t\0r\0u\0e\0,\0\"\0u\0s\0e\0_\0o\0l\0d\0_\0e\0m\0p\0h\0a\0s\0i\0s\0_\0i\0m\0p\0l\0e\0m\0e\0n\0t\0a\0t\0i\0o\0n\0\"\0:\0f\0a\0l\0s\0e\0,\0\"\0w\0i\0t\0h\0_\0S\0D\0X\0L\0\"\0:\0f\0a\0l\0s\0e\0,\0\"\0c\0l\0i\0p\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0-\01\0\"\0,\01\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0P\0o\0s\0i\0t\0i\0v\0e\0\"\0}\0}\0,\0\"\07\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0s\0m\0Z\0 \0C\0L\0I\0P\0T\0e\0x\0t\0E\0n\0c\0o\0d\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0t\0e\0x\0t\0\"\0:\0\"\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\09\09\08\09\00\0@\01\00\06\09\01\06\0,\0 \0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\02\02\02\02\05\06\0@\02\05\00\07\01\02\0,\0 \0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\09\09\08\09\00\0@\01\00\06\09\01\06\0,\0 \0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\02\02\02\02\05\06\0@\02\05\00\07\01\02\0,\0 \0b\0a\0d\0 \0q\0u\0a\0l\0i\0t\0y\0,\0w\0o\0r\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0w\0o\0r\0s\0t\0 \0d\0e\0t\0a\0i\0l\0,\0s\0k\0e\0t\0c\0h\0,\0c\0e\0n\0s\0o\0r\0,\0l\0o\0w\0r\0e\0s\0,\0 \0b\0a\0d\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0w\0o\0r\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0l\0o\0w\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0b\0a\0d\0 \0a\0n\0a\0t\0o\0m\0y\0,\0 \0j\0p\0e\0g\0 \0a\0r\0t\0i\0f\0a\0c\0t\0s\0,\0 \0s\0i\0g\0n\0a\0t\0u\0r\0e\0,\0 \0w\0a\0t\0e\0r\0m\0a\0r\0k\0,\0 \0b\0a\0d\0 \0h\0a\0n\0d\0s\0,\0 \0m\0u\0t\0a\0t\0e\0d\0 \0h\0a\0n\0d\0s\0,\0 \0s\0i\0x\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0e\0x\0t\0r\0a\0 \0f\0i\0n\0g\0e\0r\0s\0,\0s\0i\0x\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0e\0x\0t\0r\0a\0 \0f\0i\0n\0g\0e\0r\0s\0,\0s\0h\0i\0n\0y\0,\0o\0v\0e\0r\0s\0a\0t\0u\0r\0a\0t\0e\0d\0,\0 
\0\"\0,\0\"\0p\0a\0r\0s\0e\0r\0\"\0:\0\"\0A\01\01\01\01\0\"\0,\0\"\0t\0e\0x\0t\0_\0g\0\"\0:\0\"\0\"\0,\0\"\0t\0e\0x\0t\0_\0l\0\"\0:\0\"\0\"\0,\0\"\0a\0s\0c\0o\0r\0e\0\"\0:\02\0.\05\0,\0\"\0w\0i\0d\0t\0h\0\"\0:\00\0,\0\"\0h\0e\0i\0g\0h\0t\0\"\0:\00\0,\0\"\0c\0r\0o\0p\0_\0w\0\"\0:\00\0,\0\"\0c\0r\0o\0p\0_\0h\0\"\0:\00\0,\0\"\0t\0a\0r\0g\0e\0t\0_\0w\0i\0d\0t\0h\0\"\0:\00\0,\0\"\0t\0a\0r\0g\0e\0t\0_\0h\0e\0i\0g\0h\0t\0\"\0:\00\0,\0\"\0s\0m\0Z\0_\0s\0t\0e\0p\0s\0\"\0:\01\0,\0\"\0m\0e\0a\0n\0_\0n\0o\0r\0m\0a\0l\0i\0z\0a\0t\0i\0o\0n\0\"\0:\0t\0r\0u\0e\0,\0\"\0m\0u\0l\0t\0i\0_\0c\0o\0n\0d\0i\0t\0i\0o\0n\0i\0n\0g\0\"\0:\0t\0r\0u\0e\0,\0\"\0u\0s\0e\0_\0o\0l\0d\0_\0e\0m\0p\0h\0a\0s\0i\0s\0_\0i\0m\0p\0l\0e\0m\0e\0n\0t\0a\0t\0i\0o\0n\0\"\0:\0f\0a\0l\0s\0e\0,\0\"\0w\0i\0t\0h\0_\0S\0D\0X\0L\0\"\0:\0f\0a\0l\0s\0e\0,\0\"\0c\0l\0i\0p\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0-\01\0\"\0,\01\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0N\0e\0g\0a\0t\0i\0v\0e\0\"\0}\0}\0,\0\"\02\00\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0M\0o\0d\0e\0l\0L\0o\0a\0d\0e\0r\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0m\0o\0d\0e\0l\0_\0n\0a\0m\0e\0\"\0:\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0o\0t\0h\0e\0r\0:\0u\0p\0s\0c\0a\0l\0e\0r\0:\0c\0i\0v\0i\0t\0a\0i\0:\01\04\07\07\05\09\0@\01\06\04\08\02\01\0\"\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0L\0o\0a\0d\0 \0U\0p\0s\0c\0a\0l\0e\0 \0M\0o\0d\0e\0l\0\"\0}\0}\0,\0\"\02\06\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0E\0m\0p\0t\0y\0L\0a\0t\0e\0n\0t\0I\0m\0a\0g\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0w\0i\0d\0t\0h\0\"\0:\08\03\02\0,\0\"\0h\0e\0i\0g\0h\0t\0\"\0:\01\02\01\06\0,\0\"\0b\0a\0t\0c\0h\0_\0s\0i\0z\0e\0\"\0:\01\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0E\0m\0p\0t\0y\0 \0L\0a\0t\0e\0n\0t\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\01\01\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0K\0S\0a\0m\0p\0l\0e\0r\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0s\0a\0m\0p\0l\0e\0r\0_\0n\0a\0m\0e\0\"\0:\0\"\0e\0u\0l\0e\0r\0_\0a\0n\0c\0e\0s\0t\0r\0a\0l\0\"\0,\0\"\0s\0c\0h\0e\0d\0u\0l\0e\0r\0\"\0:\0\"\0n\0o\0r\0m\0a\0l\0\"\0,\0\"\0s\0e\0e\0d\0\"\0:\02\03\00\09\07\00\09\08\06\0,\0\"\0s\0t\0e\0p\0s\0\"\0:\02\06\0,\0\"\0c\0f\0g\0\"\0:\03\0.\05\0,\0\"\0d\0e\0n\0o\0i\0s\0e\0\"\0:\01\0,\0\"\0m\0o\0d\0e\0l\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0-\01\0\"\0,\00\0]\0,\0\"\0p\0o\0s\0i\0t\0i\0v\0e\0\"\0:\0[\0\"\06\0\"\0,\00\0]\0,\0\"\0n\0e\0g\0a\0t\0i\0v\0e\0\"\0:\0[\0\"\07\0\"\0,\00\0]\0,\0\"\0l\0a\0t\0e\0n\0t\0_\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\02\06\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0K\0S\0a\0m\0p\0l\0e\0r\0\"\0}\0}\0,\0\"\02\07\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0V\0A\0E\0D\0e\0c\0o\0d\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0s\0a\0m\0p\0l\0e\0s\0\"\0:\0[\0\"\01\01\0\"\0,\00\0]\0,\0\"\0v\0a\0e\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0,\02\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0V\0A\0E\0 \0D\0e\0c\0o\0d\0e\0\"\0}\0}\0,\0\"\01\09\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0I\0m\0a\0g\0e\0U\0p\0s\0c\0a\0l\0e\0W\0i\0t\0h\0M\0o\0d\0e\0l\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0u\0p\0s\0c\0a\0l\0e\0_\0m\0o\0d\0e\0l\0\"\0:\0[\0\"\02\00\0\"\0,\00\0]\0,\0\"\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\02\07\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0 \0I\0m\0a\0g\0e\0 \0(\0u\0s\0i\0n\0g\0 
\0M\0o\0d\0e\0l\0)\0\"\0}\0}\0,\0\"\02\03\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0I\0m\0a\0g\0e\0S\0c\0a\0l\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0u\0p\0s\0c\0a\0l\0e\0_\0m\0e\0t\0h\0o\0d\0\"\0:\0\"\0n\0e\0a\0r\0e\0s\0t\0-\0e\0x\0a\0c\0t\0\"\0,\0\"\0c\0r\0o\0p\0\"\0:\0\"\0d\0i\0s\0a\0b\0l\0e\0d\0\"\0,\0\"\0w\0i\0d\0t\0h\0\"\0:\01\02\08\00\0,\0\"\0h\0e\0i\0g\0h\0t\0\"\0:\01\08\05\06\0,\0\"\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\01\09\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\02\01\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0V\0A\0E\0E\0n\0c\0o\0d\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0p\0i\0x\0e\0l\0s\0\"\0:\0[\0\"\02\03\0\"\0,\00\0]\0,\0\"\0v\0a\0e\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0,\02\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0V\0A\0E\0 \0E\0n\0c\0o\0d\0e\0\"\0}\0}\0,\0\"\02\04\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0K\0S\0a\0m\0p\0l\0e\0r\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0s\0a\0m\0p\0l\0e\0r\0_\0n\0a\0m\0e\0\"\0:\0\"\0e\0u\0l\0e\0r\0_\0a\0n\0c\0e\0s\0t\0r\0a\0l\0\"\0,\0\"\0s\0c\0h\0e\0d\0u\0l\0e\0r\0\"\0:\0\"\0n\0o\0r\0m\0a\0l\0\"\0,\0\"\0s\0e\0e\0d\0\"\0:\02\03\00\09\07\00\09\08\06\0,\0\"\0s\0t\0e\0p\0s\0\"\0:\02\06\0,\0\"\0c\0f\0g\0\"\0:\03\0.\05\0,\0\"\0d\0e\0n\0o\0i\0s\0e\0\"\0:\00\0.\03\08\0,\0\"\0m\0o\0d\0e\0l\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0-\01\0\"\0,\00\0]\0,\0\"\0p\0o\0s\0i\0t\0i\0v\0e\0\"\0:\0[\0\"\06\0\"\0,\00\0]\0,\0\"\0n\0e\0g\0a\0t\0i\0v\0e\0\"\0:\0[\0\"\07\0\"\0,\00\0]\0,\0\"\0l\0a\0t\0e\0n\0t\0_\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\02\01\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0K\0S\0a\0m\0p\0l\0e\0r\0\"\0}\0}\0,\0\"\02\05\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0V\0A\0E\0D\0e\0c\0o\0d\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0s\0a\0m\0p\0l\0e\0s\0\"\0:\0[\0\"\02\04\0\"\0,\00\0]\0,\0\"\0v\0a\0e\0\"\0:\0[\0\"\0r\0e\0s\0o\0u\0r\0c\0e\0-\0s\0t\0a\0c\0k\0\"\0,\02\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0V\0A\0E\0 \0D\0e\0c\0o\0d\0e\0\"\0}\0}\0,\0\"\01\02\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0S\0a\0v\0e\0I\0m\0a\0g\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0f\0i\0l\0e\0n\0a\0m\0e\0_\0p\0r\0e\0f\0i\0x\0\"\0:\0\"\0C\0o\0m\0f\0y\0U\0I\0\"\0,\0\"\0i\0m\0a\0g\0e\0s\0\"\0:\0[\0\"\02\05\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0S\0a\0v\0e\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\02\08\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0S\0a\0v\0e\0I\0m\0a\0g\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0f\0i\0l\0e\0n\0a\0m\0e\0_\0p\0r\0e\0f\0i\0x\0\"\0:\0\"\0C\0o\0m\0f\0y\0U\0I\0\"\0,\0\"\0i\0m\0a\0g\0e\0s\0\"\0:\0[\0\"\02\07\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0S\0a\0v\0e\0 
\0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\0e\0x\0t\0r\0a\0\"\0:\0{\0\"\0a\0i\0r\0s\0\"\0:\0[\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\02\02\02\02\05\06\0@\02\05\00\07\00\08\0\"\0,\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\09\09\08\09\00\0@\01\00\06\09\01\06\0\"\0,\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\01\0:\0e\0m\0b\0e\0d\0d\0i\0n\0g\0:\0c\0i\0v\0i\0t\0a\0i\0:\02\02\02\02\05\06\0@\02\05\00\07\01\02\0\"\0,\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0o\0t\0h\0e\0r\0:\0u\0p\0s\0c\0a\0l\0e\0r\0:\0c\0i\0v\0i\0t\0a\0i\0:\01\04\07\07\05\09\0@\01\06\04\08\02\01\0\"\0,\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\0x\0l\0:\0c\0h\0e\0c\0k\0p\0o\0i\0n\0t\0:\0c\0i\0v\0i\0t\0a\0i\0:\08\02\07\01\08\04\0@\01\01\08\03\07\06\05\0\"\0,\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0s\0d\0x\0l\0:\0l\0o\0r\0a\0:\0c\0i\0v\0i\0t\0a\0i\0:\09\09\00\03\01\01\0@\01\02\04\08\02\08\08\0\"\0]\0}\0,\0\"\0e\0x\0t\0r\0a\0M\0e\0t\0a\0d\0a\0t\0a\0\"\0:\0\"\0{\0\\\0u\00\00\02\02\0p\0r\0o\0m\0p\0t\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0s\0a\0f\0e\0_\0p\0o\0s\0,\0 \0M\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0m\0i\0l\0f\0,\0 \0h\0e\0x\0m\0a\0n\0i\0a\0c\0(\0s\0p\0a\0c\0e\0z\0i\0n\0 \0s\0t\0y\0l\0e\0-\0i\0x\0l\0)\0,\0 \0d\0a\0r\0k\0 \0b\0l\0u\0e\0 \0h\0a\0i\0r\0,\0 \0p\0u\0r\0p\0l\0e\0 \0e\0y\0e\0s\0,\0 \0s\0p\0i\0r\0a\0l\0 \0e\0y\0e\0s\0,\0 \0@\0_\0@\0,\0 \0w\0h\0i\0t\0e\0 \0p\0a\0l\0e\0 \0s\0k\0i\0n\0,\0 \0v\0e\0r\0y\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0a\0h\0o\0g\0e\0,\0 \0h\0a\0i\0r\0 \0b\0e\0t\0w\0e\0e\0n\0 \0e\0y\0e\0s\0,\0 \0b\0a\0n\0g\0s\0,\0 \0p\0u\0r\0p\0l\0e\0 \0h\0a\0i\0r\0b\0a\0n\0d\0,\0 \0m\0e\0s\0s\0y\0 \0h\0a\0i\0r\0,\0 \0w\0a\0v\0y\0 \0h\0a\0i\0r\0,\0 \0(\0g\0i\0g\0a\0n\0t\0i\0c\0 \0b\0r\0e\0a\0s\0t\0s\0:\01\0.\02\0)\0,\0 \0(\0w\0i\0d\0e\0 \0h\0i\0p\0s\0)\0,\0 \0c\0o\0v\0e\0r\0e\0d\0 \0n\0i\0p\0p\0l\0e\0s\0,\0 \0t\0h\0i\0c\0k\0 \0t\0h\0i\0g\0h\0s\0,\0 \0p\0l\0u\0m\0p\0,\0 \0p\0u\0r\0p\0l\0e\0 \0d\0r\0e\0s\0s\0 \0w\0i\0t\0h\0 \0s\0w\0e\0a\0t\0e\0r\0 \0i\0n\0s\0i\0d\0e\0,\0 \0t\0u\0r\0t\0l\0e\0n\0e\0c\0k\0 \0b\0l\0a\0c\0k\0 \0s\0w\0e\0a\0t\0e\0r\0,\0 \0l\0o\0n\0g\0 \0d\0r\0e\0s\0s\0,\0 \0r\0i\0b\0b\0e\0d\0 \0s\0w\0e\0a\0t\0e\0r\0,\0 \0w\0e\0b\0 \0p\0r\0i\0n\0t\0,\0 \0l\0o\0n\0g\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0f\0o\0r\0e\0s\0t\0,\0 \0n\0i\0g\0h\0t\0 \0t\0i\0m\0e\0,\0 \0s\0m\0i\0l\0e\0,\0 \0o\0p\0e\0n\0 \0m\0o\0u\0t\0h\0,\0 \0c\0o\0w\0b\0o\0y\0 \0s\0h\0o\0t\0,\0 \0w\0a\0v\0y\0 \0m\0o\0u\0t\0h\0,\0 \0b\0l\0u\0s\0h\0,\0 \0l\0o\0o\0k\0i\0n\0g\0 \0a\0t\0 \0v\0i\0e\0w\0e\0r\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0n\0e\0g\0a\0t\0i\0v\0e\0P\0r\0o\0m\0p\0t\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0c\0i\0v\0i\0t\0_\0n\0s\0f\0w\0,\0 \0s\0a\0f\0e\0_\0n\0e\0g\0,\0 \0c\0i\0v\0i\0t\0_\0n\0s\0f\0w\0,\0 \0s\0a\0f\0e\0_\0n\0e\0g\0,\0 \0b\0a\0d\0 \0q\0u\0a\0l\0i\0t\0y\0,\0w\0o\0r\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0w\0o\0r\0s\0t\0 \0d\0e\0t\0a\0i\0l\0,\0s\0k\0e\0t\0c\0h\0,\0c\0e\0n\0s\0o\0r\0,\0l\0o\0w\0r\0e\0s\0,\0 \0b\0a\0d\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0w\0o\0r\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0l\0o\0w\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0b\0a\0d\0 \0a\0n\0a\0t\0o\0m\0y\0,\0 \0j\0p\0e\0g\0 \0a\0r\0t\0i\0f\0a\0c\0t\0s\0,\0 \0s\0i\0g\0n\0a\0t\0u\0r\0e\0,\0 \0w\0a\0t\0e\0r\0m\0a\0r\0k\0,\0 \0b\0a\0d\0 \0h\0a\0n\0d\0s\0,\0 \0m\0u\0t\0a\0t\0e\0d\0 \0h\0a\0n\0d\0s\0,\0 \0s\0i\0x\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0e\0x\0t\0r\0a\0 \0f\0i\0n\0g\0e\0r\0s\0,\0s\0i\0x\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0e\0x\0t\0r\0a\0 
\0f\0i\0n\0g\0e\0r\0s\0,\0s\0h\0i\0n\0y\0,\0o\0v\0e\0r\0s\0a\0t\0u\0r\0a\0t\0e\0d\0,\0 \0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0s\0t\0e\0p\0s\0\\\0u\00\00\02\02\0:\02\06\0,\0\\\0u\00\00\02\02\0c\0f\0g\0S\0c\0a\0l\0e\0\\\0u\00\00\02\02\0:\03\0.\05\0,\0\\\0u\00\00\02\02\0s\0a\0m\0p\0l\0e\0r\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0e\0u\0l\0e\0r\0_\0a\0n\0c\0e\0s\0t\0r\0a\0l\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0s\0e\0e\0d\0\\\0u\00\00\02\02\0:\02\03\00\09\07\00\09\08\06\0,\0\\\0u\00\00\02\02\0w\0o\0r\0k\0f\0l\0o\0w\0I\0d\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0t\0x\0t\02\0i\0m\0g\0-\0h\0i\0r\0e\0s\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0r\0e\0s\0o\0u\0r\0c\0e\0s\0\\\0u\00\00\02\02\0:\0[\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\01\08\03\07\06\05\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\01\08\03\07\06\05\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\02\04\08\02\08\08\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\00\0.\07\05\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\02\05\00\07\00\08\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\02\05\00\07\01\02\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\00\06\09\01\06\0}\0]\0,\0\\\0u\00\00\02\02\0r\0e\0m\0i\0x\0O\0f\0I\0d\0\\\0u\00\00\02\02\0:\04\09\06\00\07\04\06\08\0}\0\"\0}"
output:
url: images/1000015486.jpeg
base_model: John6666/wai-nsfw-illustrious-v110-sdxl
instance_prompt: >-
hexmaniac(spacezin style-ixl), dark blue hair, purple eyes, spiral eyes, @_@,
white pale skin, very long hair, ahoge, hair between eyes, bangs, purple
hairband, messy hair, wavy hair, gigantic breasts
---
# maniac-ilu
<Gallery />
## Trigger words
You should use the following tags to trigger the image generation: `hexmaniac(spacezin style-ixl)`, `dark blue hair`, `purple eyes`, `spiral eyes`, `@_@`, `white pale skin`, `very long hair`, `ahoge`, `hair between eyes`, `bangs`, `purple hairband`, `messy hair`, `wavy hair`, `gigantic breasts`.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/maniac-ilu/tree/main) them in the Files & versions tab.
|
kostiantynk1205/5c253d18-704f-4b30-9906-293fe7257ea2 | kostiantynk1205 | "2025-02-04T19:10:47Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-falcon-40b",
"base_model:adapter:katuni4ka/tiny-random-falcon-40b",
"region:us"
] | null | "2025-02-04T19:10:04Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-falcon-40b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c253d18-704f-4b30-9906-293fe7257ea2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a2063aee6e61475b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2063aee6e61475b_train_data.json
type:
field_instruction: ENName
field_output: English
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/5c253d18-704f-4b30-9906-293fe7257ea2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a2063aee6e61475b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee236be3-90ec-455c-841e-dda12c91106d
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee236be3-90ec-455c-841e-dda12c91106d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5c253d18-704f-4b30-9906-293fe7257ea2
This model is a fine-tuned version of [katuni4ka/tiny-random-falcon-40b](https://huggingface.co/katuni4ka/tiny-random-falcon-40b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.7943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 11.1118 |
| 44.0201 | 0.0124 | 50 | 10.9694 |
| 43.4688 | 0.0249 | 100 | 10.8366 |
| 43.255 | 0.0373 | 150 | 10.7985 |
| 43.2312 | 0.0498 | 200 | 10.7943 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/smangrul_-_starcoder-3b-hugcoder-loftq-merged-8bits | RichardErkhov | "2025-02-28T06:11:53Z" | 0 | 0 | null | [
"safetensors",
"starcoder2",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-28T06:09:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starcoder-3b-hugcoder-loftq-merged - bnb 8bits
- Model creator: https://huggingface.co/smangrul/
- Original model: https://huggingface.co/smangrul/starcoder-3b-hugcoder-loftq-merged/
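A minimal loading sketch with 🤗 Transformers (`bitsandbytes` must be installed because the checkpoint is serialized in 8-bit; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/smangrul_-_starcoder-3b-hugcoder-loftq-merged-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are already quantized to 8-bit (bitsandbytes), so no extra
# quantization config is needed when loading.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```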
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ceshine/t5-paraphrase-paws-msrp-opinosis | ceshine | "2023-03-19T12:15:06Z" | 24 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"paraphrasing",
"paraphrase",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- t5
- paraphrasing
- paraphrase
license: apache-2.0
---
# T5-base Paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis
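A minimal paraphrasing sketch with 🤗 Transformers — note that the `paraphrase:` task prefix and the generation settings below are assumptions; check the linked repository for the exact input format used during fine-tuning:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ceshine/t5-paraphrase-paws-msrp-opinosis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "paraphrase:" is an assumed task prefix, not confirmed by this card.
inputs = tokenizer("paraphrase: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, num_beams=4, num_return_sequences=2)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```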
More details in [ceshine/finetuning-t5 Github repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase) |
team-sanai/bpe_4cat_32000 | team-sanai | "2024-04-23T13:26:34Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-23T13:26:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timm/convnext_tiny.fb_in22k | timm | "2025-01-21T21:14:50Z" | 6,651 | 1 | timm | ["timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-22k", "arxiv:2201.03545", "license:apache-2.0", "region:us"] | image-classification | "2022-12-13T07:14:47Z" |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
- transformers
datasets:
- imagenet-22k
---
# Model card for convnext_tiny.fb_in22k
A ConvNeXt image classification model. Pretrained on ImageNet-22k by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.6
- GMACs: 4.5
- Activations (M): 13.5
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_tiny.fb_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
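Because this checkpoint is pretrained on ImageNet-22k and not fine-tuned on ImageNet-1k, the classifier head covers 21841 classes rather than the familiar 1000. Below is a short follow-up sketch (not part of the original card) that continues from the variables in the example above and inspects the top-5 predictions; recent timm releases also provide dataset-info helpers such as `timm.data.ImageNetInfo` for mapping indices to WordNet synset names, if your installed version includes them.
```python
import torch

# Continues from `model` and `output` in the classification example above.
print(model.num_classes)  # 21841 output classes for this ImageNet-22k checkpoint

probabilities = output.softmax(dim=1)[0]
top5_prob, top5_idx = torch.topk(probabilities, k=5)
for p, i in zip(top5_prob.tolist(), top5_idx.tolist()):
    print(f"class index {i}: probability {p:.4f}")
```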
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
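The four maps above correspond to the four ConvNeXt stages, at strides 4, 8, 16, and 32 relative to the input. When only a subset of stages is needed (for example, to feed an FPN-style decoder), `out_indices` restricts the outputs. A minimal sketch, assuming timm's standard feature-extraction API:
```python
import timm
import torch

# Keep only the last three stages (strides 8, 16, 32).
model = timm.create_model(
    'convnext_tiny.fb_in22k',
    pretrained=True,
    features_only=True,
    out_indices=(1, 2, 3),
)
model = model.eval()

# feature_info describes the selected outputs (channels and reduction per stage).
print(model.feature_info.channels())   # e.g. [192, 384, 768]
print(model.feature_info.reduction())  # e.g. [8, 16, 32]

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))
for f in features:
    print(f.shape)  # torch.Size([1, 192, 28, 28]), etc.
```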
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_tiny.fb_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
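The pooled embedding above can be used directly for retrieval or image-to-image similarity. A minimal sketch, continuing from the embedding example; the second image here is a hypothetical placeholder that reuses the same documentation image (so the similarity comes out as 1.0), and in practice you would point it at a different picture.
```python
import torch
import torch.nn.functional as F

# Hypothetical second image; substitute a different URL or local file in practice.
img2 = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

with torch.no_grad():
    emb1 = model.forward_head(model.forward_features(transforms(img).unsqueeze(0)), pre_logits=True)
    emb2 = model.forward_head(model.forward_features(transforms(img2).unsqueeze(0)), pre_logits=True)

# Cosine similarity between the two (1, num_features) embeddings.
print(F.cosine_similarity(emb1, emb2).item())
```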
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers are from eager-mode PyTorch 1.13 on an RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
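To enumerate the ConvNeXt family from code rather than reading the table, timm can list the available pretrained variants. A small sketch, assuming a recent timm release:
```python
import timm

# All pretrained ConvNeXt / ConvNeXt-V2 variants known to the installed timm version.
convnext_variants = timm.list_models('convnext*', pretrained=True)
print(len(convnext_variants))
print(convnext_variants[:5])
```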
## Citation
```bibtex
@inproceedings{liu2022convnet,
  author    = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title     = {A ConvNet for the 2020s},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|