Dataset columns (type and observed range): modelId (string, 5–137 chars), author (string, 2–42 chars), last_modified (date, 2020-02-15 11:33:14 to 2025-03-29 12:26:52), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 401 classes), tags (sequence, 1 to 4.05k items), pipeline_tag (string, 54 classes), createdAt (date, 2022-03-02 23:29:04 to 2025-03-29 12:26:36), card (string, 11 to 1.01M chars).
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
popV/tabula_muris_All | popV | "2025-03-08T12:41:08Z" | 0 | 0 | popV | [
"popV",
"joblib",
"biology",
"genomics",
"single-cell",
"anndata_version:0.11.3",
"python_version:3.11.11",
"tissue: diverse",
"license:cc-by-4.0",
"region:us"
] | null | "2025-03-08T12:40:35Z" | ---
library_name: popV
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- anndata_version:0.11.3
- python_version:3.11.11
- popV
- 'tissue: diverse'
---
Popular Vote (popV) model for automated cell type annotation of single-cell RNA-seq data. Here we provide pretrained models
for plug-in use in your own analysis.
Follow our [tutorial](https://github.com/YosefLab/popV/blob/main/tabula_sapiens_tutorial.ipynb) to learn how to use the model for cell type annotation.
# Model description
Ageing is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. Despite rapid advances over recent years, many of the molecular and cellular processes that underlie the progressive loss of healthy physiology are poorly understood. To gain a better insight into these processes, here we generate a single-cell transcriptomic atlas across the lifespan of Mus musculus that includes data from 23 tissues and organs. We found cell-specific changes occurring across multiple cell types and organs, as well as age-related changes in the cellular composition of different organs. Using single-cell transcriptomic data, we assessed cell-type-specific manifestations of different hallmarks of ageing—such as senescence, genomic instability and changes in the immune system. This transcriptomic atlas—which we denote Tabula Muris Senis, or ‘Mouse Ageing Cell Atlas’—provides molecular information about how the most important hallmarks of ageing are reflected in a broad range of tissues and cell types.
**Link to CELLxGENE**:
Link to the [data](https://cellxgene.cziscience.com/e/f16a8f4d-bc97-43c5-a2f6-bbda952e4c5c.cxg/) in the CELLxGENE browser for interactive exploration of the data and download of the source data.
**Training Code URL**:
Not provided by uploader.
# Metrics
We report here the accuracy of each expert classifier and of the ensemble (consensus) model. The validation set accuracies are
computed on a 10% random subset of the data that was held out from training.
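As an illustration of how such a table can be produced, the sketch below computes per-cell-type accuracies for a set of expert predictions together with a simple majority-vote consensus. This is not the popV API: the DataFrame layout, column names, and the plain majority vote (popV's actual consensus is ontology-aware) are assumptions made for the example.
```python
# Illustrative sketch (not the popV API): per-cell-type accuracy for each expert
# classifier plus a simple majority-vote "consensus" column.
from collections import Counter

import pandas as pd


def per_celltype_accuracy(preds: pd.DataFrame, experts: list) -> pd.DataFrame:
    """`preds` holds one column per expert and a `ground_truth` column (assumed layout)."""
    preds = preds.copy()
    # A plain majority vote stands in for popV's ontology-aware consensus here.
    preds["consensus"] = preds[experts].apply(
        lambda row: Counter(row).most_common(1)[0][0], axis=1
    )
    rows = []
    for cell_type, group in preds.groupby("ground_truth"):
        row = {"Cell Type": cell_type, "N cells": len(group)}
        for col in experts + ["consensus"]:
            # Fraction of cells of this type that the classifier labelled correctly.
            row[col] = round((group[col] == group["ground_truth"]).mean(), 2)
        rows.append(row)
    return pd.DataFrame(rows).sort_values("N cells", ascending=False)
```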
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B cell | 3687 | 0.86 | 0.90 | 0.86 | 0.86 | 0.00 | 0.89 | 0.75 | 0.78 | 0.91 |
| mesenchymal stem cell | 1678 | 0.87 | 0.88 | 0.89 | 0.68 | 0.00 | 0.69 | 0.91 | 0.87 | 0.93 |
| basal cell of epidermis | 1499 | 0.85 | 0.88 | 0.81 | 0.84 | 0.00 | 0.84 | 0.86 | 0.82 | 0.90 |
| granulocyte | 1556 | 0.81 | 0.86 | 0.79 | 0.74 | 0.00 | 0.78 | 0.74 | 0.80 | 0.88 |
| endothelial cell | 1478 | 0.88 | 0.88 | 0.82 | 0.74 | 0.00 | 0.79 | 0.90 | 0.85 | 0.93 |
| microglial cell | 1293 | 0.99 | 0.99 | 0.96 | 0.99 | 0.00 | 0.98 | 0.99 | 0.99 | 1.00 |
| T cell | 1244 | 0.75 | 0.80 | 0.63 | 0.58 | 0.00 | 0.67 | 0.62 | 0.73 | 0.82 |
| keratinocyte | 1254 | 0.85 | 0.89 | 0.81 | 0.80 | 0.00 | 0.86 | 0.88 | 0.85 | 0.91 |
| macrophage | 971 | 0.47 | 0.68 | 0.59 | 0.51 | 0.00 | 0.48 | 0.53 | 0.56 | 0.71 |
| classical monocyte | 765 | 0.75 | 0.60 | 0.61 | 0.56 | 0.00 | 0.78 | 0.79 | 0.83 | 0.88 |
| granulocytopoietic cell | 807 | 0.67 | 0.77 | 0.62 | 0.51 | 0.00 | 0.46 | 0.68 | 0.75 | 0.80 |
| bladder urothelial cell | 680 | 0.98 | 0.99 | 0.96 | 0.85 | 0.00 | 0.96 | 1.00 | 0.98 | 0.99 |
| mesenchymal stem cell of adipose tissue | 650 | 0.74 | 0.84 | 0.72 | 0.51 | 0.00 | 0.54 | 0.90 | 0.80 | 0.88 |
| fibroblast of cardiac tissue | 613 | 0.96 | 0.94 | 0.80 | 0.46 | 0.00 | 0.66 | 0.95 | 0.91 | 0.96 |
| endothelial cell of coronary artery | 624 | 0.81 | 0.86 | 0.63 | 0.58 | 0.00 | 0.77 | 0.89 | 0.84 | 0.91 |
| skeletal muscle satellite cell | 465 | 0.98 | 0.98 | 0.95 | 0.97 | 0.00 | 0.96 | 0.97 | 0.97 | 0.98 |
| proerythroblast | 435 | 0.52 | 0.56 | 0.51 | 0.22 | 0.00 | 0.26 | 0.51 | 0.60 | 0.64 |
| kidney proximal convoluted tubule epithelial cell | 445 | 0.84 | 0.87 | 0.34 | 0.82 | 0.00 | 0.75 | 0.87 | 0.88 | 0.89 |
| bronchial smooth muscle cell | 454 | 0.96 | 0.96 | 0.84 | 0.91 | 0.00 | 0.92 | 0.97 | 0.94 | 0.96 |
| myeloid cell | 413 | 0.44 | 0.60 | 0.41 | 0.42 | 0.00 | 0.25 | 0.70 | 0.68 | 0.69 |
| hepatocyte | 377 | 0.98 | 0.98 | 0.98 | 0.97 | 0.00 | 0.98 | 0.99 | 0.98 | 0.99 |
| type B pancreatic cell | 428 | 0.97 | 0.98 | 0.96 | 0.98 | 0.00 | 0.98 | 0.99 | 0.98 | 0.99 |
| bladder cell | 411 | 0.97 | 0.95 | 0.97 | 0.58 | 0.00 | 0.84 | 0.98 | 0.96 | 0.98 |
| promonocyte | 365 | 0.65 | 0.72 | 0.66 | 0.61 | 0.00 | 0.58 | 0.74 | 0.78 | 0.82 |
| DN4 thymocyte | 425 | 0.39 | 0.53 | 0.38 | 0.31 | 0.00 | 0.23 | 0.58 | 0.63 | 0.62 |
| basal cell | 384 | 0.98 | 0.97 | 0.97 | 0.92 | 0.00 | 0.96 | 0.98 | 0.96 | 0.98 |
| monocyte | 350 | 0.60 | 0.57 | 0.48 | 0.31 | 0.00 | 0.37 | 0.73 | 0.76 | 0.84 |
| enterocyte of epithelium of large intestine | 374 | 0.92 | 0.93 | 0.84 | 0.92 | 0.00 | 0.92 | 0.94 | 0.92 | 0.95 |
| thymocyte | 370 | 0.82 | 0.74 | 0.59 | 0.82 | 0.00 | 0.87 | 0.89 | 0.92 | 0.88 |
| epithelial cell of proximal tubule | 342 | 0.84 | 0.82 | 0.78 | 0.76 | 0.00 | 0.82 | 0.85 | 0.85 | 0.85 |
| natural killer cell | 352 | 0.84 | 0.82 | 0.68 | 0.68 | 0.00 | 0.74 | 0.74 | 0.80 | 0.85 |
| naive B cell | 341 | 0.46 | 0.22 | 0.40 | 0.36 | 0.00 | 0.48 | 0.49 | 0.56 | 0.58 |
| hematopoietic stem cell | 337 | 0.92 | 0.93 | 0.88 | 0.88 | 0.00 | 0.88 | 0.94 | 0.93 | 0.94 |
| mature NK T cell | 321 | 0.57 | 0.64 | 0.46 | 0.32 | 0.00 | 0.59 | 0.59 | 0.70 | 0.71 |
| fibroblast | 309 | 0.20 | 0.25 | 0.53 | 0.31 | 0.00 | 0.32 | 0.56 | 0.54 | 0.52 |
| stromal cell | 331 | 0.61 | 0.49 | 0.86 | 0.33 | 0.00 | 0.37 | 0.85 | 0.71 | 0.86 |
| keratinocyte stem cell | 321 | 0.95 | 0.95 | 0.84 | 0.87 | 0.00 | 0.93 | 0.96 | 0.91 | 0.96 |
| erythroblast | 291 | 0.60 | 0.69 | 0.66 | 0.56 | 0.00 | 0.54 | 0.66 | 0.71 | 0.76 |
| Kupffer cell | 280 | 0.98 | 0.95 | 0.71 | 0.42 | 0.00 | 0.83 | 0.98 | 0.94 | 0.97 |
| megakaryocyte-erythroid progenitor cell | 244 | 0.53 | 0.62 | 0.45 | 0.41 | 0.00 | 0.57 | 0.55 | 0.66 | 0.69 |
| pancreatic acinar cell | 235 | 0.95 | 0.95 | 0.93 | 0.97 | 0.00 | 0.98 | 0.99 | 0.99 | 0.98 |
| luminal epithelial cell of mammary gland | 300 | 0.94 | 0.94 | 0.98 | 0.87 | 0.00 | 0.92 | 0.97 | 0.93 | 0.97 |
| professional antigen presenting cell | 257 | 0.05 | 0.02 | 0.09 | 0.01 | 0.00 | 0.01 | 0.41 | 0.36 | 0.06 |
| epidermal cell | 219 | 0.84 | 0.85 | 0.75 | 0.84 | 0.00 | 0.84 | 0.86 | 0.86 | 0.90 |
| hematopoietic precursor cell | 216 | 0.63 | 0.74 | 0.39 | 0.24 | 0.00 | 0.55 | 0.71 | 0.78 | 0.80 |
| oligodendrocyte | 224 | 0.96 | 0.98 | 0.97 | 0.99 | 0.00 | 0.99 | 0.99 | 0.99 | 0.99 |
| fibroblast of lung | 216 | 0.98 | 0.96 | 0.94 | 0.90 | 0.00 | 0.95 | 0.98 | 0.94 | 0.99 |
| CD8-positive, alpha-beta T cell | 183 | 0.35 | 0.52 | 0.28 | 0.32 | 0.00 | 0.44 | 0.48 | 0.57 | 0.64 |
| epithelial cell of large intestine | 174 | 0.74 | 0.79 | 0.57 | 0.68 | 0.00 | 0.73 | 0.85 | 0.82 | 0.83 |
| intestinal crypt stem cell | 199 | 0.84 | 0.86 | 0.69 | 0.80 | 0.00 | 0.82 | 0.87 | 0.82 | 0.87 |
| chondrocyte | 179 | 0.60 | 0.61 | 0.78 | 0.68 | 0.00 | 0.64 | 0.85 | 0.85 | 0.69 |
| immature T cell | 181 | 0.72 | 0.03 | 0.12 | 0.61 | 0.00 | 0.79 | 0.82 | 0.88 | 0.77 |
| precursor B cell | 192 | 0.66 | 0.88 | 0.55 | 0.62 | 0.00 | 0.66 | 0.78 | 0.84 | 0.84 |
| intermediate monocyte | 167 | 0.34 | 0.12 | 0.36 | 0.24 | 0.00 | 0.63 | 0.53 | 0.58 | 0.64 |
| large intestine goblet cell | 163 | 0.79 | 0.85 | 0.79 | 0.76 | 0.00 | 0.81 | 0.91 | 0.86 | 0.88 |
| CD4-positive, alpha-beta T cell | 164 | 0.25 | 0.38 | 0.22 | 0.26 | 0.00 | 0.38 | 0.33 | 0.45 | 0.55 |
| smooth muscle cell | 167 | 0.59 | 0.79 | 0.66 | 0.61 | 0.00 | 0.73 | 0.84 | 0.81 | 0.77 |
| leukocyte | 156 | 0.52 | 0.59 | 0.35 | 0.18 | 0.00 | 0.32 | 0.62 | 0.68 | 0.77 |
| plasma cell | 143 | 0.48 | 0.66 | 0.47 | 0.41 | 0.00 | 0.60 | 0.52 | 0.60 | 0.71 |
| lymphocyte | 173 | 0.18 | 0.20 | 0.31 | 0.13 | 0.00 | 0.00 | 0.23 | 0.38 | 0.35 |
| naive T cell | 146 | 0.79 | 0.85 | 0.85 | 0.82 | 0.00 | 0.84 | 0.86 | 0.86 | 0.87 |
| kidney loop of Henle thick ascending limb epithelial cell | 141 | 0.92 | 0.91 | 0.66 | 0.79 | 0.00 | 0.42 | 0.89 | 0.92 | 0.94 |
| immature B cell | 147 | 0.71 | 0.76 | 0.57 | 0.69 | 0.00 | 0.58 | 0.52 | 0.71 | 0.82 |
| alveolar macrophage | 134 | 0.93 | 0.93 | 0.90 | 0.89 | 0.00 | 0.93 | 0.95 | 0.90 | 0.94 |
| endothelial cell of hepatic sinusoid | 122 | 0.95 | 0.93 | 0.89 | 0.81 | 0.00 | 0.86 | 0.98 | 0.93 | 0.96 |
| lung macrophage | 106 | 0.87 | 0.09 | 0.30 | 0.00 | 0.00 | 0.81 | 0.87 | 0.81 | 0.94 |
| non-classical monocyte | 114 | 0.60 | 0.73 | 0.41 | 0.56 | 0.00 | 0.69 | 0.78 | 0.70 | 0.75 |
| endocardial cell | 113 | 0.92 | 0.92 | 0.88 | 0.85 | 0.00 | 0.79 | 0.92 | 0.85 | 0.93 |
| double negative thymocyte | 105 | 0.55 | 0.00 | 0.31 | 0.20 | 0.00 | 0.64 | 0.72 | 0.81 | 0.83 |
| late pro-B cell | 92 | 0.80 | 0.82 | 0.37 | 0.54 | 0.00 | 0.59 | 0.68 | 0.80 | 0.90 |
| kidney collecting duct principal cell | 105 | 0.64 | 0.86 | 0.78 | 0.86 | 0.00 | 0.84 | 0.91 | 0.85 | 0.89 |
| basal epithelial cell of tracheobronchial tree | 87 | 0.89 | 0.77 | 0.83 | 0.49 | 0.00 | 0.57 | 0.93 | 0.86 | 0.93 |
| mesenchymal cell | 91 | 0.20 | 0.00 | 0.14 | 0.11 | 0.00 | 0.12 | 0.45 | 0.49 | 0.42 |
| pancreatic A cell | 81 | 0.61 | 0.77 | 0.44 | 0.83 | 0.00 | 0.80 | 0.91 | 0.89 | 0.86 |
| adventitial cell | 63 | 0.69 | 0.62 | 0.45 | 0.07 | 0.00 | 0.23 | 0.76 | 0.54 | 0.81 |
| epithelial cell | 52 | 0.38 | 0.07 | 0.46 | 0.38 | 0.00 | 0.27 | 0.51 | 0.49 | 0.62 |
| kidney distal convoluted tubule epithelial cell | 80 | 0.76 | 0.86 | 0.62 | 0.63 | 0.00 | 0.86 | 0.87 | 0.86 | 0.87 |
| neutrophil | 73 | 0.54 | 0.68 | 0.46 | 0.18 | 0.00 | 0.28 | 0.66 | 0.58 | 0.68 |
| pancreatic ductal cell | 55 | 0.98 | 0.81 | 0.78 | 0.93 | 0.00 | 0.96 | 0.96 | 0.96 | 0.98 |
| myeloid dendritic cell | 54 | 0.36 | 0.58 | 0.40 | 0.33 | 0.00 | 0.42 | 0.69 | 0.62 | 0.68 |
| astrocyte | 70 | 0.93 | 0.95 | 0.98 | 0.98 | 0.00 | 0.99 | 0.99 | 0.96 | 0.99 |
| secretory cell | 52 | 0.21 | 0.49 | 0.37 | 0.45 | 0.00 | 0.65 | 0.87 | 0.77 | 0.70 |
| myeloid leukocyte | 56 | 0.69 | 0.41 | 0.42 | 0.27 | 0.00 | 0.31 | 0.44 | 0.57 | 0.80 |
| plasmatocyte | 48 | 0.55 | 0.78 | 0.63 | 0.76 | 0.00 | 0.61 | 0.88 | 0.83 | 0.87 |
| brain pericyte | 56 | 0.00 | 0.77 | 0.56 | 0.76 | 0.00 | 0.87 | 0.97 | 0.84 | 0.95 |
| neuron | 48 | 0.59 | 0.62 | 0.66 | 0.65 | 0.00 | 0.72 | 0.90 | 0.81 | 0.87 |
| vein endothelial cell | 49 | 0.67 | 0.81 | 0.64 | 0.26 | 0.00 | 0.74 | 0.86 | 0.72 | 0.85 |
| kidney collecting duct epithelial cell | 49 | 0.00 | 0.63 | 0.20 | 0.56 | 0.00 | 0.39 | 0.78 | 0.85 | 0.91 |
| pancreatic D cell | 46 | 0.00 | 0.76 | 0.52 | 0.89 | 0.00 | 0.95 | 0.96 | 0.96 | 0.97 |
| aortic endothelial cell | 12 | 0.08 | 0.12 | 0.18 | 0.21 | 0.00 | 0.14 | 0.13 | 0.08 | 0.37 |
| brush cell | 37 | 0.78 | 0.59 | 0.35 | 0.68 | 0.00 | 0.80 | 0.72 | 0.73 | 0.81 |
| erythroid progenitor cell, mammalian | 48 | 0.57 | 0.00 | 0.18 | 0.05 | 0.00 | 0.26 | 0.79 | 0.65 | 0.49 |
| regular atrial cardiac myocyte | 35 | 0.00 | 0.58 | 0.44 | 0.44 | 0.00 | 0.37 | 0.52 | 0.66 | 0.77 |
| kidney cortex artery cell | 40 | 0.00 | 0.83 | 0.58 | 0.24 | 0.00 | 0.62 | 0.83 | 0.62 | 0.85 |
| erythrocyte | 44 | 0.63 | 0.63 | 0.64 | 0.63 | 0.00 | 0.67 | 0.74 | 0.85 | 0.80 |
| mesangial cell | 32 | 0.00 | 0.58 | 0.22 | 0.28 | 0.00 | 0.50 | 0.66 | 0.59 | 0.81 |
| kidney loop of Henle ascending limb epithelial cell | 32 | 0.56 | 0.59 | 0.43 | 0.33 | 0.00 | 0.35 | 0.72 | 0.67 | 0.74 |
| cardiac muscle cell | 32 | 0.63 | 0.81 | 0.62 | 0.71 | 0.00 | 0.94 | 1.00 | 0.97 | 0.97 |
| mature alpha-beta T cell | 37 | 0.00 | 0.27 | 0.28 | 0.38 | 0.00 | 0.52 | 0.39 | 0.44 | 0.56 |
| basophil | 33 | 0.64 | 0.61 | 0.59 | 0.47 | 0.00 | 0.38 | 0.54 | 0.62 | 0.73 |
| cardiac valve cell | 22 | 0.45 | 0.71 | 0.51 | 0.56 | 0.00 | 0.58 | 0.83 | 0.78 | 0.72 |
| oligodendrocyte precursor cell | 29 | 0.87 | 0.95 | 0.93 | 0.97 | 0.00 | 0.98 | 0.98 | 1.00 | 0.98 |
| Schwann cell | 22 | 0.63 | 0.67 | 0.81 | 0.65 | 0.00 | 0.81 | 0.77 | 0.67 | 0.74 |
| pulmonary interstitial fibroblast | 24 | 0.87 | 0.86 | 0.70 | 0.75 | 0.00 | 0.92 | 0.96 | 0.90 | 0.92 |
| smooth muscle cell of trachea | 25 | 0.00 | 0.00 | 0.11 | 0.07 | 0.00 | 0.23 | 0.35 | 0.50 | 0.36 |
| granulocyte monocyte progenitor cell | 28 | 0.60 | 0.71 | 0.56 | 0.69 | 0.00 | 0.47 | 0.85 | 0.77 | 0.85 |
| interneuron | 25 | 0.00 | 0.53 | 0.28 | 0.65 | 0.00 | 0.86 | 0.94 | 0.78 | 0.94 |
| pancreatic PP cell | 23 | 0.00 | 0.23 | 0.08 | 0.56 | 0.00 | 0.68 | 0.78 | 0.76 | 0.61 |
| smooth muscle cell of the pulmonary artery | 26 | 0.00 | 0.68 | 0.56 | 0.21 | 0.00 | 0.33 | 0.88 | 0.70 | 0.73 |
| cell of skeletal muscle | 16 | 0.94 | 0.94 | 0.78 | 0.97 | 0.00 | 0.94 | 0.91 | 0.94 | 0.94 |
| Langerhans cell | 18 | 0.88 | 0.85 | 0.36 | 0.38 | 0.00 | 0.41 | 0.90 | 0.81 | 0.91 |
| brush cell of epithelium proper of large intestine | 10 | 0.00 | 0.78 | 0.00 | 0.67 | 0.00 | 0.62 | 0.90 | 0.82 | 0.80 |
| stem cell of epidermis | 11 | 0.00 | 0.00 | 0.50 | 0.12 | 0.00 | 0.36 | 0.56 | 0.53 | 0.62 |
| regulatory T cell | 13 | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 0.14 | 0.12 | 0.21 | 0.25 |
| epithelial cell of thymus | 16 | 0.00 | 0.00 | 0.17 | 0.36 | 0.00 | 0.06 | 0.52 | 0.25 | 0.30 |
| enteroendocrine cell | 8 | 0.67 | 0.60 | 0.33 | 0.59 | 0.00 | 0.67 | 0.89 | 0.78 | 0.74 |
| kidney capillary endothelial cell | 19 | 0.17 | 0.54 | 0.30 | 0.18 | 0.00 | 0.30 | 0.26 | 0.27 | 0.27 |
| neuroepithelial stem cell | 16 | 0.18 | 0.21 | 0.36 | 0.27 | 0.00 | 0.23 | 0.62 | 0.29 | 0.52 |
| macrophage dendritic cell progenitor | 9 | 0.00 | 0.14 | 0.52 | 0.14 | 0.00 | 0.52 | 0.44 | 0.62 | 0.70 |
| DN3 thymocyte | 16 | 0.00 | 0.00 | 0.00 | 0.43 | 0.00 | 0.79 | 0.82 | 0.73 | 0.52 |
| regular ventricular cardiac myocyte | 9 | 0.00 | 0.00 | 0.78 | 0.48 | 0.00 | 0.53 | 0.84 | 1.00 | 0.89 |
| endothelial cell of lymphatic vessel | 5 | 0.00 | 0.43 | 0.80 | 0.10 | 0.00 | 0.09 | 0.34 | 0.21 | 0.57 |
| ciliated columnar cell of tracheobronchial tree | 16 | 0.72 | 0.80 | 0.85 | 0.80 | 0.00 | 0.59 | 0.88 | 0.75 | 0.88 |
| plasmacytoid dendritic cell | 13 | 0.00 | 0.56 | 0.32 | 0.24 | 0.00 | 0.13 | 0.44 | 0.29 | 0.52 |
| dendritic cell | 10 | 0.00 | 0.25 | 0.22 | 0.32 | 0.00 | 0.16 | 0.54 | 0.30 | 0.48 |
| medium spiny neuron | 14 | 0.00 | 0.78 | 0.58 | 0.48 | 0.00 | 0.93 | 0.97 | 0.96 | 0.88 |
| pericyte | 16 | 0.00 | 0.94 | 0.87 | 0.58 | 0.00 | 0.81 | 1.00 | 0.94 | 1.00 |
| neural stem cell | 10 | 0.00 | 0.82 | 0.57 | 0.76 | 0.00 | 0.86 | 0.78 | 0.73 | 0.86 |
| kidney proximal straight tubule epithelial cell | 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.19 | 0.00 |
| blood cell | 9 | 0.20 | 0.00 | 0.36 | 0.00 | 0.00 | 0.00 | 0.21 | 0.18 | 0.32 |
| immature NK T cell | 8 | 0.00 | 0.22 | 0.36 | 0.00 | 0.00 | 0.24 | 0.41 | 0.35 | 0.50 |
| cardiac neuron | 10 | 0.00 | 0.31 | 0.29 | 0.12 | 0.00 | 0.32 | 0.74 | 0.77 | 0.62 |
| mast cell | 12 | 0.00 | 0.00 | 0.15 | 0.29 | 0.00 | 0.11 | 0.54 | 0.56 | 0.56 |
| mucus secreting cell | 8 | 0.00 | 0.62 | 0.88 | 0.50 | 0.00 | 0.48 | 0.78 | 0.89 | 0.94 |
| early pro-B cell | 6 | 0.00 | 0.60 | 0.00 | 0.46 | 0.00 | 0.19 | 0.62 | 0.38 | 0.83 |
| pancreatic stellate cell | 9 | 0.00 | 0.75 | 0.36 | 0.57 | 0.00 | 0.16 | 0.82 | 0.84 | 0.84 |
| ependymal cell | 5 | 0.00 | 0.75 | 1.00 | 0.67 | 0.00 | 0.50 | 0.77 | 0.77 | 0.77 |
| Bergmann glial cell | 4 | 0.00 | 0.40 | 0.57 | 0.67 | 0.00 | 0.67 | 1.00 | 0.89 | 1.00 |
| neuroendocrine cell | 7 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.20 | 0.14 | 0.00 |
| fibrocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| hepatic stellate cell | 5 | 0.00 | 0.00 | 0.60 | 0.77 | 0.00 | 0.56 | 0.91 | 0.83 | 1.00 |
| club cell | 7 | 0.14 | 0.25 | 0.10 | 0.15 | 0.00 | 0.13 | 0.14 | 0.13 | 0.14 |
| respiratory basal cell | 2 | 0.00 | 0.00 | 0.00 | 0.50 | 0.00 | 0.11 | 0.67 | 0.67 | 0.67 |
| kidney interstitial fibroblast | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.67 | 1.00 | 0.00 |
| lymphoid lineage restricted progenitor cell | 1 | 0.00 | 0.00 | 1.00 | 0.67 | 0.00 | 0.40 | 0.03 | 1.00 | 1.00 |
| lung neuroendocrine cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| kidney cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| duct epithelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
The training accuracies below are computed on the data used for training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B cell | 32653 | 0.86 | 0.89 | 0.88 | 0.93 | 0.00 | 0.95 | 0.75 | 0.79 | 0.94 |
| mesenchymal stem cell | 14461 | 0.86 | 0.86 | 0.96 | 0.87 | 0.00 | 0.86 | 0.92 | 0.87 | 0.95 |
| basal cell of epidermis | 14140 | 0.85 | 0.87 | 0.91 | 0.91 | 0.00 | 0.85 | 0.87 | 0.83 | 0.92 |
| granulocyte | 14006 | 0.80 | 0.84 | 0.90 | 0.91 | 0.00 | 0.87 | 0.74 | 0.81 | 0.91 |
| endothelial cell | 13329 | 0.87 | 0.86 | 0.87 | 0.86 | 0.00 | 0.88 | 0.91 | 0.85 | 0.95 |
| microglial cell | 11975 | 0.99 | 0.99 | 0.94 | 0.99 | 0.00 | 0.97 | 0.99 | 0.99 | 0.99 |
| T cell | 11497 | 0.75 | 0.79 | 0.77 | 0.85 | 0.00 | 0.89 | 0.65 | 0.74 | 0.88 |
| keratinocyte | 11101 | 0.85 | 0.88 | 0.91 | 0.91 | 0.00 | 0.88 | 0.89 | 0.86 | 0.92 |
| macrophage | 8653 | 0.48 | 0.64 | 0.80 | 0.76 | 0.00 | 0.76 | 0.57 | 0.61 | 0.80 |
| classical monocyte | 7359 | 0.76 | 0.55 | 0.80 | 0.88 | 0.00 | 0.90 | 0.81 | 0.84 | 0.92 |
| granulocytopoietic cell | 7275 | 0.66 | 0.78 | 0.84 | 0.85 | 0.00 | 0.82 | 0.69 | 0.80 | 0.89 |
| bladder urothelial cell | 5958 | 0.98 | 0.98 | 0.99 | 0.95 | 0.00 | 0.99 | 0.99 | 0.98 | 0.99 |
| mesenchymal stem cell of adipose tissue | 5973 | 0.76 | 0.82 | 0.89 | 0.71 | 0.00 | 0.91 | 0.92 | 0.82 | 0.94 |
| fibroblast of cardiac tissue | 5710 | 0.94 | 0.92 | 0.88 | 0.73 | 0.00 | 0.87 | 0.96 | 0.91 | 0.97 |
| endothelial cell of coronary artery | 5363 | 0.80 | 0.83 | 0.79 | 0.78 | 0.00 | 0.91 | 0.90 | 0.84 | 0.96 |
| skeletal muscle satellite cell | 4308 | 0.98 | 0.98 | 0.97 | 0.97 | 0.00 | 0.97 | 0.97 | 0.97 | 0.98 |
| proerythroblast | 4242 | 0.51 | 0.52 | 0.72 | 0.75 | 0.00 | 0.77 | 0.57 | 0.66 | 0.74 |
| kidney proximal convoluted tubule epithelial cell | 4015 | 0.83 | 0.83 | 0.91 | 0.89 | 0.00 | 0.90 | 0.90 | 0.89 | 0.92 |
| bronchial smooth muscle cell | 4002 | 0.95 | 0.96 | 0.81 | 0.93 | 0.00 | 0.93 | 0.97 | 0.95 | 0.97 |
| myeloid cell | 4007 | 0.44 | 0.57 | 0.73 | 0.68 | 0.00 | 0.84 | 0.74 | 0.72 | 0.85 |
| hepatocyte | 3698 | 0.99 | 0.98 | 0.99 | 0.99 | 0.00 | 0.99 | 0.99 | 0.99 | 0.99 |
| type B pancreatic cell | 3587 | 0.97 | 0.96 | 0.96 | 0.98 | 0.00 | 0.98 | 0.99 | 0.98 | 0.99 |
| bladder cell | 3600 | 0.97 | 0.95 | 0.99 | 0.75 | 0.00 | 0.97 | 0.98 | 0.96 | 0.99 |
| promonocyte | 3477 | 0.65 | 0.69 | 0.85 | 0.83 | 0.00 | 0.85 | 0.78 | 0.83 | 0.88 |
| DN4 thymocyte | 3331 | 0.34 | 0.45 | 0.60 | 0.69 | 0.00 | 0.88 | 0.61 | 0.66 | 0.80 |
| basal cell | 3342 | 0.98 | 0.97 | 0.98 | 0.94 | 0.00 | 0.96 | 0.98 | 0.97 | 0.99 |
| monocyte | 3324 | 0.61 | 0.53 | 0.74 | 0.76 | 0.00 | 0.88 | 0.77 | 0.81 | 0.90 |
| enterocyte of epithelium of large intestine | 3293 | 0.91 | 0.92 | 0.94 | 0.93 | 0.00 | 0.95 | 0.95 | 0.93 | 0.96 |
| thymocyte | 3147 | 0.80 | 0.75 | 0.86 | 0.92 | 0.00 | 0.91 | 0.91 | 0.93 | 0.96 |
| epithelial cell of proximal tubule | 3101 | 0.82 | 0.78 | 0.87 | 0.85 | 0.00 | 0.87 | 0.87 | 0.86 | 0.88 |
| natural killer cell | 3009 | 0.82 | 0.81 | 0.74 | 0.83 | 0.00 | 0.86 | 0.80 | 0.84 | 0.88 |
| naive B cell | 3016 | 0.44 | 0.19 | 0.81 | 0.66 | 0.00 | 0.79 | 0.52 | 0.56 | 0.84 |
| hematopoietic stem cell | 3005 | 0.92 | 0.93 | 0.93 | 0.94 | 0.00 | 0.93 | 0.95 | 0.94 | 0.95 |
| mature NK T cell | 2840 | 0.54 | 0.62 | 0.61 | 0.79 | 0.00 | 0.81 | 0.63 | 0.72 | 0.78 |
| fibroblast | 2834 | 0.18 | 0.24 | 0.75 | 0.63 | 0.00 | 0.74 | 0.64 | 0.65 | 0.77 |
| stromal cell | 2792 | 0.61 | 0.39 | 0.97 | 0.64 | 0.00 | 0.93 | 0.87 | 0.75 | 0.93 |
| keratinocyte stem cell | 2736 | 0.92 | 0.92 | 0.93 | 0.92 | 0.00 | 0.94 | 0.95 | 0.90 | 0.96 |
| erythroblast | 2627 | 0.59 | 0.68 | 0.82 | 0.85 | 0.00 | 0.86 | 0.69 | 0.75 | 0.85 |
| Kupffer cell | 2526 | 0.97 | 0.94 | 0.87 | 0.91 | 0.00 | 0.96 | 0.98 | 0.97 | 0.99 |
| megakaryocyte-erythroid progenitor cell | 2548 | 0.59 | 0.63 | 0.77 | 0.76 | 0.00 | 0.73 | 0.64 | 0.73 | 0.80 |
| pancreatic acinar cell | 2472 | 0.96 | 0.95 | 0.94 | 0.99 | 0.00 | 0.98 | 0.99 | 0.99 | 0.99 |
| luminal epithelial cell of mammary gland | 2371 | 0.92 | 0.91 | 0.98 | 0.93 | 0.00 | 0.97 | 0.97 | 0.95 | 0.98 |
| professional antigen presenting cell | 2347 | 0.03 | 0.01 | 0.27 | 0.45 | 0.00 | 0.93 | 0.44 | 0.40 | 0.49 |
| epidermal cell | 2008 | 0.87 | 0.87 | 0.90 | 0.90 | 0.00 | 0.88 | 0.88 | 0.87 | 0.92 |
| hematopoietic precursor cell | 1928 | 0.62 | 0.71 | 0.62 | 0.79 | 0.00 | 0.91 | 0.74 | 0.81 | 0.90 |
| oligodendrocyte | 1870 | 0.96 | 0.99 | 0.97 | 1.00 | 0.00 | 0.99 | 1.00 | 0.99 | 0.99 |
| fibroblast of lung | 1790 | 0.96 | 0.95 | 0.93 | 0.93 | 0.00 | 0.97 | 0.98 | 0.97 | 0.98 |
| CD8-positive, alpha-beta T cell | 1757 | 0.34 | 0.48 | 0.45 | 0.63 | 0.00 | 0.69 | 0.60 | 0.63 | 0.74 |
| epithelial cell of large intestine | 1759 | 0.74 | 0.79 | 0.86 | 0.79 | 0.00 | 0.82 | 0.88 | 0.86 | 0.89 |
| intestinal crypt stem cell | 1719 | 0.82 | 0.86 | 0.90 | 0.88 | 0.00 | 0.87 | 0.90 | 0.87 | 0.92 |
| chondrocyte | 1726 | 0.61 | 0.62 | 0.85 | 0.83 | 0.00 | 0.82 | 0.88 | 0.88 | 0.82 |
| immature T cell | 1603 | 0.66 | 0.00 | 0.82 | 0.91 | 0.00 | 0.92 | 0.89 | 0.89 | 0.96 |
| precursor B cell | 1579 | 0.67 | 0.84 | 0.91 | 0.87 | 0.00 | 0.87 | 0.85 | 0.88 | 0.92 |
| intermediate monocyte | 1594 | 0.35 | 0.07 | 0.52 | 0.67 | 0.00 | 0.68 | 0.57 | 0.62 | 0.74 |
| large intestine goblet cell | 1582 | 0.81 | 0.84 | 0.92 | 0.85 | 0.00 | 0.91 | 0.95 | 0.92 | 0.95 |
| CD4-positive, alpha-beta T cell | 1557 | 0.26 | 0.37 | 0.47 | 0.61 | 0.00 | 0.59 | 0.48 | 0.57 | 0.73 |
| smooth muscle cell | 1536 | 0.61 | 0.76 | 0.83 | 0.86 | 0.00 | 0.89 | 0.90 | 0.83 | 0.84 |
| leukocyte | 1502 | 0.52 | 0.48 | 0.69 | 0.62 | 0.00 | 0.83 | 0.68 | 0.73 | 0.91 |
| plasma cell | 1503 | 0.52 | 0.62 | 0.80 | 0.81 | 0.00 | 0.76 | 0.65 | 0.67 | 0.79 |
| lymphocyte | 1428 | 0.23 | 0.23 | 0.69 | 0.61 | 0.00 | 0.86 | 0.32 | 0.47 | 0.74 |
| naive T cell | 1422 | 0.82 | 0.86 | 0.92 | 0.92 | 0.00 | 0.90 | 0.89 | 0.91 | 0.92 |
| kidney loop of Henle thick ascending limb epithelial cell | 1413 | 0.92 | 0.92 | 0.94 | 0.88 | 0.00 | 0.90 | 0.95 | 0.96 | 0.96 |
| immature B cell | 1268 | 0.70 | 0.70 | 0.77 | 0.78 | 0.00 | 0.70 | 0.62 | 0.75 | 0.89 |
| alveolar macrophage | 1232 | 0.95 | 0.95 | 0.95 | 0.93 | 0.00 | 0.94 | 0.97 | 0.96 | 0.98 |
| endothelial cell of hepatic sinusoid | 1168 | 0.96 | 0.95 | 0.88 | 0.88 | 0.00 | 0.94 | 0.97 | 0.97 | 0.98 |
| lung macrophage | 1060 | 0.83 | 0.04 | 0.68 | 0.92 | 0.00 | 0.93 | 0.90 | 0.86 | 0.96 |
| non-classical monocyte | 1045 | 0.56 | 0.68 | 0.85 | 0.82 | 0.00 | 0.91 | 0.84 | 0.78 | 0.86 |
| endocardial cell | 940 | 0.92 | 0.91 | 0.93 | 0.88 | 0.00 | 0.79 | 0.96 | 0.87 | 0.96 |
| double negative thymocyte | 866 | 0.57 | 0.00 | 0.63 | 0.76 | 0.00 | 0.84 | 0.79 | 0.82 | 0.91 |
| late pro-B cell | 851 | 0.78 | 0.78 | 0.86 | 0.85 | 0.00 | 0.88 | 0.78 | 0.87 | 0.95 |
| kidney collecting duct principal cell | 837 | 0.69 | 0.88 | 0.93 | 0.89 | 0.00 | 0.92 | 0.97 | 0.96 | 0.96 |
| basal epithelial cell of tracheobronchial tree | 838 | 0.86 | 0.73 | 0.94 | 0.81 | 0.00 | 0.75 | 0.94 | 0.87 | 0.96 |
| mesenchymal cell | 788 | 0.19 | 0.00 | 0.43 | 0.29 | 0.00 | 0.40 | 0.53 | 0.57 | 0.53 |
| pancreatic A cell | 750 | 0.63 | 0.72 | 0.85 | 0.89 | 0.00 | 0.87 | 0.95 | 0.94 | 0.93 |
| adventitial cell | 752 | 0.65 | 0.54 | 0.90 | 0.42 | 0.00 | 0.82 | 0.80 | 0.69 | 0.91 |
| epithelial cell | 712 | 0.40 | 0.03 | 0.84 | 0.67 | 0.00 | 0.76 | 0.67 | 0.69 | 0.82 |
| kidney distal convoluted tubule epithelial cell | 664 | 0.80 | 0.87 | 0.92 | 0.80 | 0.00 | 0.92 | 0.97 | 0.97 | 0.95 |
| neutrophil | 632 | 0.50 | 0.47 | 0.76 | 0.79 | 0.00 | 0.58 | 0.68 | 0.63 | 0.80 |
| pancreatic ductal cell | 594 | 0.98 | 0.78 | 0.98 | 0.95 | 0.00 | 0.96 | 0.97 | 0.97 | 0.98 |
| myeloid dendritic cell | 564 | 0.34 | 0.52 | 0.82 | 0.60 | 0.00 | 0.78 | 0.78 | 0.65 | 0.75 |
| astrocyte | 522 | 0.89 | 0.93 | 0.97 | 0.95 | 0.00 | 0.96 | 0.99 | 0.99 | 0.98 |
| secretory cell | 511 | 0.26 | 0.33 | 0.81 | 0.69 | 0.00 | 0.85 | 0.93 | 0.89 | 0.93 |
| myeloid leukocyte | 456 | 0.57 | 0.31 | 0.82 | 0.57 | 0.00 | 0.84 | 0.55 | 0.68 | 0.92 |
| plasmatocyte | 448 | 0.57 | 0.81 | 0.90 | 0.84 | 0.00 | 0.90 | 0.94 | 0.87 | 0.96 |
| brain pericyte | 428 | 0.00 | 0.73 | 0.67 | 0.87 | 0.00 | 0.94 | 0.96 | 0.92 | 0.99 |
| neuron | 436 | 0.62 | 0.59 | 0.81 | 0.80 | 0.00 | 0.90 | 0.96 | 0.91 | 0.95 |
| vein endothelial cell | 423 | 0.62 | 0.76 | 0.82 | 0.65 | 0.00 | 0.74 | 0.91 | 0.83 | 0.92 |
| kidney collecting duct epithelial cell | 414 | 0.00 | 0.51 | 0.89 | 0.81 | 0.00 | 0.92 | 0.85 | 0.90 | 0.95 |
| pancreatic D cell | 417 | 0.00 | 0.64 | 0.84 | 0.89 | 0.00 | 0.94 | 0.97 | 0.96 | 0.98 |
| aortic endothelial cell | 420 | 0.16 | 0.00 | 0.30 | 0.39 | 0.00 | 0.71 | 0.39 | 0.32 | 0.68 |
| brush cell | 377 | 0.82 | 0.60 | 0.94 | 0.91 | 0.00 | 0.92 | 0.86 | 0.85 | 0.91 |
| erythroid progenitor cell, mammalian | 361 | 0.59 | 0.00 | 0.67 | 0.80 | 0.00 | 0.81 | 0.79 | 0.66 | 0.94 |
| regular atrial cardiac myocyte | 362 | 0.00 | 0.40 | 0.69 | 0.70 | 0.00 | 0.58 | 0.70 | 0.75 | 0.86 |
| kidney cortex artery cell | 341 | 0.00 | 0.83 | 0.76 | 0.52 | 0.00 | 0.62 | 0.83 | 0.67 | 0.84 |
| erythrocyte | 312 | 0.61 | 0.47 | 0.71 | 0.86 | 0.00 | 0.84 | 0.83 | 0.87 | 0.94 |
| mesangial cell | 322 | 0.00 | 0.69 | 0.69 | 0.69 | 0.00 | 0.85 | 0.88 | 0.80 | 0.95 |
| kidney loop of Henle ascending limb epithelial cell | 312 | 0.47 | 0.59 | 0.69 | 0.56 | 0.00 | 0.74 | 0.86 | 0.85 | 0.78 |
| cardiac muscle cell | 302 | 0.60 | 0.68 | 0.87 | 0.96 | 0.00 | 1.00 | 0.99 | 0.97 | 0.98 |
| mature alpha-beta T cell | 293 | 0.00 | 0.08 | 0.40 | 0.41 | 0.00 | 0.68 | 0.62 | 0.62 | 0.88 |
| basophil | 292 | 0.63 | 0.63 | 0.88 | 0.81 | 0.00 | 0.68 | 0.69 | 0.75 | 0.84 |
| cardiac valve cell | 293 | 0.66 | 0.83 | 0.75 | 0.82 | 0.00 | 0.83 | 0.94 | 0.91 | 0.92 |
| oligodendrocyte precursor cell | 283 | 0.91 | 0.96 | 0.98 | 0.97 | 0.00 | 0.97 | 1.00 | 0.99 | 1.00 |
| Schwann cell | 252 | 0.72 | 0.71 | 0.82 | 0.86 | 0.00 | 0.92 | 0.86 | 0.82 | 0.82 |
| pulmonary interstitial fibroblast | 248 | 0.87 | 0.82 | 0.91 | 0.87 | 0.00 | 0.91 | 0.96 | 0.96 | 0.96 |
| smooth muscle cell of trachea | 227 | 0.00 | 0.04 | 0.46 | 0.57 | 0.00 | 0.64 | 0.65 | 0.74 | 0.85 |
| granulocyte monocyte progenitor cell | 222 | 0.50 | 0.57 | 0.86 | 0.80 | 0.00 | 0.63 | 0.81 | 0.74 | 0.85 |
| interneuron | 215 | 0.00 | 0.55 | 0.82 | 0.76 | 0.00 | 0.90 | 0.97 | 0.92 | 0.95 |
| pancreatic PP cell | 205 | 0.00 | 0.06 | 0.69 | 0.78 | 0.00 | 0.80 | 0.92 | 0.93 | 0.92 |
| smooth muscle cell of the pulmonary artery | 183 | 0.00 | 0.69 | 0.81 | 0.57 | 0.00 | 0.35 | 0.89 | 0.75 | 0.88 |
| cell of skeletal muscle | 176 | 0.95 | 0.92 | 0.77 | 0.93 | 0.00 | 0.91 | 0.96 | 0.95 | 0.96 |
| Langerhans cell | 157 | 0.83 | 0.80 | 0.90 | 0.79 | 0.00 | 0.65 | 0.87 | 0.78 | 0.96 |
| brush cell of epithelium proper of large intestine | 165 | 0.00 | 0.70 | 0.85 | 0.85 | 0.00 | 0.81 | 0.99 | 0.95 | 0.95 |
| stem cell of epidermis | 160 | 0.00 | 0.00 | 0.72 | 0.64 | 0.00 | 0.40 | 0.67 | 0.64 | 0.85 |
| regulatory T cell | 157 | 0.00 | 0.03 | 0.23 | 0.30 | 0.00 | 0.41 | 0.46 | 0.38 | 0.76 |
| epithelial cell of thymus | 149 | 0.00 | 0.01 | 0.75 | 0.53 | 0.00 | 0.39 | 0.79 | 0.61 | 0.92 |
| enteroendocrine cell | 153 | 0.84 | 0.74 | 0.86 | 0.72 | 0.00 | 0.95 | 0.97 | 0.96 | 0.96 |
| kidney capillary endothelial cell | 142 | 0.17 | 0.49 | 0.33 | 0.37 | 0.00 | 0.28 | 0.27 | 0.34 | 0.29 |
| neuroepithelial stem cell | 135 | 0.21 | 0.11 | 0.88 | 0.67 | 0.00 | 0.60 | 0.88 | 0.67 | 0.87 |
| macrophage dendritic cell progenitor | 139 | 0.00 | 0.29 | 0.81 | 0.79 | 0.00 | 0.79 | 0.73 | 0.75 | 0.90 |
| DN3 thymocyte | 130 | 0.00 | 0.00 | 0.48 | 0.77 | 0.00 | 0.76 | 0.77 | 0.80 | 0.90 |
| regular ventricular cardiac myocyte | 137 | 0.00 | 0.00 | 0.90 | 0.86 | 0.00 | 0.65 | 0.97 | 0.98 | 0.99 |
| endothelial cell of lymphatic vessel | 135 | 0.00 | 0.45 | 0.95 | 0.57 | 0.00 | 0.44 | 0.74 | 0.51 | 0.91 |
| ciliated columnar cell of tracheobronchial tree | 111 | 0.82 | 0.76 | 0.84 | 0.90 | 0.00 | 0.73 | 0.98 | 0.94 | 0.98 |
| plasmacytoid dendritic cell | 109 | 0.00 | 0.29 | 0.87 | 0.67 | 0.00 | 0.46 | 0.66 | 0.55 | 0.78 |
| dendritic cell | 97 | 0.00 | 0.35 | 0.77 | 0.54 | 0.00 | 0.53 | 0.80 | 0.62 | 0.83 |
| medium spiny neuron | 89 | 0.00 | 0.00 | 0.86 | 0.79 | 0.00 | 0.89 | 1.00 | 0.99 | 0.99 |
| pericyte | 83 | 0.00 | 0.85 | 0.83 | 0.77 | 0.00 | 0.91 | 0.86 | 0.90 | 0.93 |
| neural stem cell | 87 | 0.00 | 0.65 | 0.85 | 0.87 | 0.00 | 0.93 | 0.96 | 0.97 | 0.98 |
| kidney proximal straight tubule epithelial cell | 87 | 0.00 | 0.00 | 0.14 | 0.00 | 0.00 | 0.52 | 0.73 | 0.43 | 0.36 |
| blood cell | 79 | 0.16 | 0.00 | 0.43 | 0.28 | 0.00 | 0.20 | 0.38 | 0.35 | 0.38 |
| immature NK T cell | 76 | 0.00 | 0.09 | 0.32 | 0.38 | 0.00 | 0.50 | 0.53 | 0.50 | 0.72 |
| cardiac neuron | 71 | 0.00 | 0.08 | 0.72 | 0.63 | 0.00 | 0.72 | 0.78 | 0.76 | 0.87 |
| mast cell | 64 | 0.00 | 0.00 | 0.59 | 0.82 | 0.00 | 0.35 | 0.66 | 0.62 | 0.79 |
| mucus secreting cell | 65 | 0.00 | 0.34 | 0.78 | 0.68 | 0.00 | 0.85 | 0.90 | 0.97 | 0.98 |
| early pro-B cell | 66 | 0.00 | 0.48 | 0.76 | 0.72 | 0.00 | 0.39 | 0.79 | 0.68 | 0.87 |
| pancreatic stellate cell | 60 | 0.00 | 0.61 | 0.93 | 0.40 | 0.00 | 0.36 | 0.79 | 0.81 | 0.96 |
| ependymal cell | 50 | 0.00 | 0.59 | 0.95 | 0.89 | 0.00 | 0.65 | 1.00 | 1.00 | 1.00 |
| Bergmann glial cell | 50 | 0.00 | 0.43 | 0.86 | 0.81 | 0.00 | 0.57 | 1.00 | 1.00 | 0.95 |
| neuroendocrine cell | 41 | 0.00 | 0.00 | 0.20 | 0.49 | 0.00 | 0.70 | 0.98 | 0.77 | 0.59 |
| fibrocyte | 39 | 0.00 | 0.00 | 0.30 | 0.31 | 0.00 | 0.41 | 0.80 | 0.76 | 0.77 |
| hepatic stellate cell | 34 | 0.00 | 0.21 | 0.52 | 0.77 | 0.00 | 0.54 | 0.91 | 0.93 | 0.96 |
| club cell | 30 | 0.11 | 0.45 | 0.19 | 0.11 | 0.00 | 0.17 | 0.19 | 0.18 | 0.16 |
| respiratory basal cell | 32 | 0.00 | 0.00 | 0.95 | 0.75 | 0.00 | 0.26 | 0.96 | 0.89 | 0.95 |
| kidney interstitial fibroblast | 15 | 0.00 | 0.00 | 0.97 | 0.77 | 0.00 | 0.40 | 0.75 | 0.83 | 1.00 |
| lymphoid lineage restricted progenitor cell | 12 | 0.00 | 0.00 | 0.91 | 0.53 | 0.00 | 0.56 | 0.04 | 0.86 | 0.91 |
| lung neuroendocrine cell | 8 | 0.00 | 0.00 | 0.82 | 0.22 | 0.00 | 0.64 | 0.21 | 1.00 | 0.86 |
| kidney cell | 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.16 | 0.05 | 0.50 | 0.50 |
| duct epithelial cell | 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.10 | 0.31 | 0.67 | 0.00 |
# References
A single-cell transcriptomic atlas characterizes ageing tissues in the mouse, The Tabula Muris Consortium, Nicole Almanzar, Jane Antony, Ankit S. Baghel, Isaac Bakerman, Ishita Bansal, Ben A. Barres, Philip A. Beachy, Daniela Berdnik, Biter Bilen, Douglas Brownfield, Corey Cain, Charles K. F. Chan, Michelle B. Chen, Michael F. Clarke, Stephanie D. Conley, Spyros Darmanis, Aaron Demers, Kubilay Demir, Antoine de Morree, Tessa Divita, Haley du Bois, Hamid Ebadi, F. Hernán Espinoza, Matt Fish, Qiang Gan, Benson M. George, Astrid Gillich, Rafael Gòmez-Sjöberg, Foad Green, Geraldine Genetiano, Xueying Gu, Gunsagar S. Gulati, Oliver Hahn, Michael Seamus Haney, Yan Hang, Lincoln Harris, Mu He, Shayan Hosseinzadeh, Albin Huang, Kerwyn Casey Huang, Tal Iram, Taichi Isobe, Feather Ives, Robert C. Jones, Kevin S. Kao, Jim Karkanias, Guruswamy Karnam, Andreas Keller, Aaron M. Kershner, Nathalie Khoury, Seung K. Kim, Bernhard M. Kiss, William Kong, Mark A. Krasnow, Maya E. Kumar, Christin S. Kuo, Jonathan Lam, Davis P. Lee, Song E. Lee, Benoit Lehallier, Olivia Leventhal, Guang Li, Qingyun Li, Ling Liu, Annie Lo, Wan-Jin Lu, Maria F. Lugo-Fagundo, Anoop Manjunath, Andrew P. May, Ashley Maynard, Aaron McGeever, Marina McKay, M. Windy McNerney, Bryan Merrill, Ross J. Metzger, Marco Mignardi, Dullei Min, Ahmad N. Nabhan, Norma F. Neff, Katharine M. Ng, Patricia K. Nguyen, Joseph Noh, Roel Nusse, Róbert Pálovics, Rasika Patkar, Weng Chuan Peng, Lolita Penland, Angela Oliveira Pisco, Katherine Pollard, Robert Puccinelli, Zhen Qi, Stephen R. Quake, Thomas A. Rando, Eric J. Rulifson, Nicholas Schaum, Joe M. Segal, Shaheen S. Sikandar, Rahul Sinha, Rene V. Sit, Justin Sonnenburg, Daniel Staehli, Krzysztof Szade, Michelle Tan, Weilun Tan, Cristina Tato, Krissie Tellez, Laughing Bear Torrez Dulgeroff, Kyle J. Travaglini, Carolina Tropini, Margaret Tsui, Lucas Waldburger, Bruce M. Wang, Linda J. van Weele, Kenneth Weinberg, Irving L. Weissman, Michael N. Wosczyna, Sean M. Wu, Tony Wyss-Coray, Jinyi Xiang, Soso Xue, Kevin A. Yamauchi, Andrew C. Yang, Lakshmi P. Yerra, Justin Youngyunpipatkul, Brian Yu, Fabio Zanini, Macy E. Zardeneta, Alexander Zee, Chunyu Zhao, Fan Zhang, Hui Zhang, Martin Jinye Zhang, Lu Zhou, James Zou; Nature, doi: https://doi.org/10.1038/s41586-020-2496-1
|
Holarissun/SFT_gemma2b_hh-rlhf-helpful_lr5e-06_epoch2-subset-1 | Holarissun | "2024-06-17T16:51:51Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-06-17T16:51:48Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: SFT_gemma2b_hh-rlhf-helpful_lr5e-06_epoch2-subset-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SFT_gemma2b_hh-rlhf-helpful_lr5e-06_epoch2-subset-1
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
zenless-lab/sdxl-anything-xl | zenless-lab | "2024-10-31T13:35:02Z" | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"anime",
"text-to-image",
"arxiv:1910.09700",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-10-31T13:02:51Z" | ---
library_name: diffusers
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
tags:
- anime
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
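A minimal loading sketch, inferred from the repository tags (`diffusers`, `StableDiffusionXLPipeline`, `text-to-image`) rather than provided by the uploader; the prompt and generation settings are purely illustrative:
```python
# Minimal sketch inferred from the repo tags; prompt and settings are illustrative only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "zenless-lab/sdxl-anything-xl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, anime style, detailed background",  # example prompt only
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```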
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Blaze.1-32B-Instruct | prithivMLmods | "2025-01-15T11:23:54Z" | 114 | 7 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"base_model:Qwen/QwQ-32B-Preview",
"base_model:finetune:Qwen/QwQ-32B-Preview",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-12T18:24:54Z" | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/QwQ-32B-Preview
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---

# **Blaze.1-32B-Instruct**
Blaze.1-32B-Instruct is based on the QwQ-32B-Preview model and fine-tuned on synthetic mathematical- and conditional-reasoning data to handle complex reasoning problems. The model may unexpectedly mix languages or switch between them, affecting response clarity. Additionally, it may enter recursive reasoning loops, producing lengthy responses without a conclusive answer, because it prioritizes maintaining a continuous chain of thought.
# **Quickstart Chat Template**
The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Blaze.1-32B-Instruct"

# Load the model and tokenizer; device_map="auto" shards the 32B weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate; increase max_new_tokens for longer chain-of-thought answers.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# **Intended Use**
Blaze.1-32B-Instruct is designed to assist with complex reasoning tasks, including mathematical problem-solving, logical reasoning, and step-by-step explanations. It is particularly useful for applications requiring conditional reasoning, structured content generation, and language understanding across multiple domains. The model is also fine-tuned for conversational AI, making it well-suited for virtual assistants, educational tools, and research purposes. Additionally, it supports tasks involving multilingual understanding, making it valuable in environments where language switching or code-mixed text processing is required.
# **Limitations**
1. **Language Mixing and Code-Switching Issues**: The model may unexpectedly switch between languages or mix them within a single response, potentially reducing the clarity of outputs.
2. **Recursive Reasoning Loops**: During complex reasoning, the model may enter circular reasoning patterns, resulting in overly lengthy responses without arriving at a definitive conclusion.
3. **Overfitting to Training Data**: Since Blaze.1-32B-Instruct is fine-tuned on specific synthetic datasets, its performance might be biased toward certain types of problems and may generalize poorly on entirely new tasks.
4. **Context Sensitivity**: While the model is trained for step-by-step reasoning, it may occasionally lose track of the context in longer conversations, leading to irrelevant or incomplete answers.
5. **Resource Intensity**: As a large model (32B parameters), it requires significant computational resources for both inference and deployment, which may limit its usability in low-resource environments.
|
QuantFactory/Tulu-3.1-8B-SuperNova-GGUF | QuantFactory | "2024-12-01T13:26:34Z" | 294 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:allenai/Llama-3.1-Tulu-3-8B",
"base_model:merge:allenai/Llama-3.1-Tulu-3-8B",
"base_model:arcee-ai/Llama-3.1-SuperNova-Lite",
"base_model:merge:arcee-ai/Llama-3.1-SuperNova-Lite",
"base_model:meditsolutions/Llama-3.1-MedIT-SUN-8B",
"base_model:merge:meditsolutions/Llama-3.1-MedIT-SUN-8B",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-01T12:47:10Z" |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- meditsolutions/Llama-3.1-MedIT-SUN-8B
- allenai/Llama-3.1-Tulu-3-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
model-index:
- name: Tulu-3.1-8B-SuperNova
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 81.94
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 32.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 24.32
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.94
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.69
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.27
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Tulu-3.1-8B-SuperNova
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Tulu-3.1-8B-SuperNova-GGUF
This is a quantized version of [bunnycore/Tulu-3.1-8B-SuperNova](https://huggingface.co/bunnycore/Tulu-3.1-8B-SuperNova) created using llama.cpp.
# Original Model Card
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [meditsolutions/Llama-3.1-MedIT-SUN-8B](https://huggingface.co/meditsolutions/Llama-3.1-MedIT-SUN-8B)
* [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Llama-3.1-SuperNova-Lite
parameters:
weight: 1.0
- model: allenai/Llama-3.1-Tulu-3-8B
parameters:
weight: 1.0
- model: meditsolutions/Llama-3.1-MedIT-SUN-8B
parameters:
weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
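For intuition, the linear method amounts to a weighted combination of corresponding parameter tensors from the source models. The sketch below illustrates that idea on plain PyTorch state dicts; it is a simplification rather than mergekit code (mergekit also handles key and tokenizer mismatches), and the function name and dtype handling are assumptions.
```python
# Conceptual sketch of a linear merge: weighted combination of matching tensors.
# Not the mergekit implementation; assumes all state dicts share identical keys and shapes.
import torch


def linear_merge(state_dicts, weights, normalize=False, out_dtype=torch.bfloat16):
    if normalize:  # the config above sets normalize: false, so weights are used as given
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for key in state_dicts[0]:
        # Accumulate in float32 for stability, then cast back (bfloat16, mirroring the config).
        acc = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights))
        merged[key] = acc.to(out_dtype)
    return merged
```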
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_bunnycore__Tulu-3.1-8B-SuperNova)
| Metric |Value|
|-------------------|----:|
|Avg. |30.94|
|IFEval (0-Shot) |81.94|
|BBH (3-Shot) |32.50|
|MATH Lvl 5 (4-Shot)|24.32|
|GPQA (0-shot) | 6.94|
|MuSR (0-shot) | 8.69|
|MMLU-PRO (5-shot) |31.27|
|
ty00/gpt2-wikitext2 | ty00 | "2023-05-31T08:17:47Z" | 173 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-31T07:20:01Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5582 | 1.0 | 2249 | 6.4717 |
| 6.1906 | 2.0 | 4498 | 6.2014 |
| 6.014 | 3.0 | 6747 | 6.1114 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
microsoft/layoutlmv3-large | microsoft | "2022-09-16T03:26:15Z" | 157,275 | 102 | transformers | [
"transformers",
"pytorch",
"tf",
"layoutlmv3",
"en",
"arxiv:2204.08387",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | "2022-04-18T06:56:58Z" | ---
language: en
license: cc-by-nc-sa-4.0
---
# LayoutLMv3
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3)
## Model description
LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis.
[LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387)
Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022.
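As a quick usage sketch (not part of the original card), the checkpoint can be loaded through 🤗 Transformers. With `apply_ocr=False` the processor expects pre-extracted words and bounding boxes normalized to a 0–1000 scale; the image path, words, and boxes below are placeholders.
```python
# Encode a document image plus word/box annotations and inspect the multimodal embeddings.
from PIL import Image
from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-large")

image = Image.open("example_form.png").convert("RGB")  # placeholder document image
words = ["Invoice", "Total:", "$120.00"]                # pre-extracted OCR words (placeholders)
boxes = [[70, 60, 260, 110], [70, 140, 180, 180], [200, 140, 330, 180]]  # 0-1000 normalized

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```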
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
```
@inproceedings{huang2022layoutlmv3,
author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
year={2022}
}
```
## License
The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project.
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct) |
wongyu/koalpaca-polyglot-12.8b-bill | wongyu | "2023-09-26T07:46:49Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-26T07:46:45Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
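For reference, the list above corresponds roughly to the `transformers` `BitsAndBytesConfig` sketched below. This is a reconstruction, not the original training script, and the base-model identifier is left as a placeholder because it is not stated in this card.
```python
# Sketch: the bitsandbytes quantization config above, expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

# The PEFT adapter would then sit on top of a base model quantized with this config, e.g.
# AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
```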
### Framework versions
- PEFT 0.6.0.dev0
|
DaneshwariS/llama-3.2-gptq-finetuned | DaneshwariS | "2025-03-01T11:03:05Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:saul95/Llama-3.2-1B-GPTQ",
"base_model:finetune:saul95/Llama-3.2-1B-GPTQ",
"endpoints_compatible",
"region:us"
] | null | "2025-03-01T11:03:01Z" | ---
base_model: saul95/Llama-3.2-1B-GPTQ
library_name: transformers
model_name: llama-3.2-gptq-finetuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.2-gptq-finetuned
This model is a fine-tuned version of [saul95/Llama-3.2-1B-GPTQ](https://huggingface.co/saul95/Llama-3.2-1B-GPTQ).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DaneshwariS/llama-3.2-gptq-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF | djuna | "2024-09-08T00:09:57Z" | 6 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:djuna/Gemma-2-gemmama-9b",
"base_model:quantized:djuna/Gemma-2-gemmama-9b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-31T06:39:26Z" | ---
base_model: djuna/Gemma-2-gemmama-9b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF
This model was converted to GGUF format from [`djuna/Gemma-2-gemmama-9b`](https://huggingface.co/djuna/Gemma-2-gemmama-9b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/djuna/Gemma-2-gemmama-9b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF --hf-file gemma-2-gemmama-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF --hf-file gemma-2-gemmama-9b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF --hf-file gemma-2-gemmama-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF --hf-file gemma-2-gemmama-9b-q5_k_m.gguf -c 2048
```
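If you prefer Python, the same GGUF file can be loaded with the llama-cpp-python bindings; a minimal sketch (assuming `pip install llama-cpp-python` and Hugging Face Hub support), not part of the original card:
```python
# Minimal llama-cpp-python sketch for this GGUF checkpoint.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="djuna/Gemma-2-gemmama-9b-Q5_K_M-GGUF",
    filename="gemma-2-gemmama-9b-q5_k_m.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```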
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs | Trelis | "2024-10-08T12:55:40Z" | 123 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-08T12:54:25Z" | ---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
theojolliffe/bart-large-cnn-finetuned-roundup-2-1 | theojolliffe | "2022-05-04T16:57:42Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-04T16:34:56Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-finetuned-roundup-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2-1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 1.2456 | 51.7546 | 32.4725 | 33.0461 | 49.0513 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lesso10/bfe8d9f9-bd63-475d-a645-fa1a86e253bb | lesso10 | "2025-03-27T22:59:14Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | "2025-03-27T21:36:14Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bfe8d9f9-bd63-475d-a645-fa1a86e253bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be73e9dba600ca71_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be73e9dba600ca71_train_data.json
type:
field_instruction: problem
field_output: qwq
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso10/bfe8d9f9-bd63-475d-a645-fa1a86e253bb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00021
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/be73e9dba600ca71_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ca65d3e-d245-41db-87e9-e40832b9c43f
wandb_project: 10a
wandb_run: your_name
wandb_runid: 5ca65d3e-d245-41db-87e9-e40832b9c43f
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bfe8d9f9-bd63-475d-a645-fa1a86e253bb
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00021
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.1277 | 500 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ckauth/q-Taxi-v3 | ckauth | "2022-12-29T14:18:50Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-29T14:03:32Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ckauth/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
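Continuing from the snippet above, here is a short sketch of rolling out the greedy policy; the `"qtable"` key is an assumption based on the Deep RL course convention.
```python
import numpy as np

# `model` and `env` come from the usage snippet above.
# The course convention stores the learned Q-table under the "qtable" key (assumption).
state = env.reset()
state = state[0] if isinstance(state, tuple) else state  # gymnasium returns (obs, info)
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    step_out = env.step(action)
    if len(step_out) == 5:  # gymnasium: obs, reward, terminated, truncated, info
        state, reward, terminated, truncated, _ = step_out
        done = terminated or truncated
    else:  # older gym: obs, reward, done, info
        state, reward, done, _ = step_out
    total_reward += reward
print("Episode return:", total_reward)
```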
|
kaiku03/vit-base-1stGen-Pokemon-Images | kaiku03 | "2024-08-28T06:09:33Z" | 9 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"region:us"
] | image-classification | "2024-08-27T04:33:13Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-1stGen-Pokemon-Images
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-1stGen-Pokemon-Images
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Dusduo/1stGen-Pokemon-Images dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0561
- Accuracy: 0.8175
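A minimal inference sketch with the 🤗 pipeline API, assuming the checkpoint loads with the standard ViT image-classification head; the image path is a placeholder:
```python
from transformers import pipeline

# "pokemon.png" is a placeholder path to a 1st-generation Pokémon image.
classifier = pipeline("image-classification", model="kaiku03/vit-base-1stGen-Pokemon-Images")
print(classifier("pokemon.png", top_k=5))
```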
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4052 | 1.0 | 422 | 2.2029 | 0.6386 |
| 1.4347 | 2.0 | 844 | 1.4776 | 0.7536 |
| 0.9834 | 3.0 | 1266 | 1.2078 | 0.7950 |
| 0.8073 | 4.0 | 1688 | 1.0888 | 0.8187 |
| 0.6758 | 5.0 | 2110 | 1.0561 | 0.8175 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
w11wo/indonesian-roberta-base-sentiment-classifier | w11wo | "2023-05-13T04:10:11Z" | 10,723 | 34 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"indonesian-roberta-base-sentiment-classifier",
"id",
"dataset:indonlu",
"arxiv:1907.11692",
"doi:10.57967/hf/2997",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language: id
tags:
- indonesian-roberta-base-sentiment-classifier
license: mit
datasets:
- indonlu
widget:
- text: "Jangan sampai saya telpon bos saya ya!"
---
## Indonesian RoBERTa Base Sentiment Classifier
Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews.
After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` |
## Evaluation Results
The model was trained for 5 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 |
| 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 |
| 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 |
| 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 |
| 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-sentiment-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Jangan sampai saya telpon bos saya ya!")
```
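If you need the raw class probabilities instead of the pipeline output, a minimal sketch with the lower-level API (label names are read from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained_name = "w11wo/indonesian-roberta-base-sentiment-classifier"
tokenizer = AutoTokenizer.from_pretrained(pretrained_name)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_name)

inputs = tokenizer("Jangan sampai saya telpon bos saya ya!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 4))
```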
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If used, please cite the following:
```bibtex
@misc {wilson_wongso_2023,
author = { {Wilson Wongso} },
title = { indonesian-roberta-base-sentiment-classifier (Revision e402e46) },
year = 2023,
url = { https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier },
doi = { 10.57967/hf/0644 },
publisher = { Hugging Face }
}
``` |
AvinashAmballa/DPO_LLAMA-7B_0.25 | AvinashAmballa | "2024-05-15T18:20:31Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-15T18:15:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ivensamdh/swinv2 | ivensamdh | "2023-01-29T10:10:52Z" | 37 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-01-29T13:31:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swinv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
dgalik/emoBank_test2_epoch20_batch16 | dgalik | "2023-08-08T15:56:17Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-08-08T15:50:29Z" | ---
base_model: ''
tags:
- generated_from_trainer
model-index:
- name: emoBank_test2_epoch20_batch16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emoBank_test2_epoch20_batch16
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Mse V: 0.1312
- Mse A: 0.0651
- Mse D: 0.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ellietang/hf_saved_lora_amf-modCase-qwenq-coder-32B-two-para-only | ellietang | "2025-03-10T20:52:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-10T20:52:39Z" | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ellietang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tixmav/vit-base-patch16-224-in21k-finetuned-lora-sci | tixmav | "2023-12-13T18:47:03Z" | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"vit",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | "2023-12-04T14:52:31Z" | ---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0 |
kalmuraee/tokens | kalmuraee | "2022-09-09T18:55:22Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-08-23T00:35:06Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: tokens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tokens
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9811
- Wer: 0.4608
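A minimal transcription sketch (not part of the original card); the audio path is a placeholder, and wav2vec2 models expect 16 kHz mono audio:
```python
from transformers import pipeline

# "sample.wav" is a placeholder; decoding a local audio file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="kalmuraee/tokens")
print(asr("sample.wav")["text"])
```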
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.5212 | 0.59 | 400 | 3.3776 | 1.0 |
| 2.4798 | 1.18 | 800 | 1.0697 | 0.7740 |
| 1.0057 | 1.77 | 1200 | 0.7077 | 0.6487 |
| 0.7731 | 2.36 | 1600 | 0.6113 | 0.5883 |
| 0.6917 | 2.94 | 2000 | 0.5618 | 0.5573 |
| 0.5844 | 3.53 | 2400 | 0.5610 | 0.5532 |
| 0.5606 | 4.12 | 2800 | 0.5584 | 0.5484 |
| 0.4973 | 4.71 | 3200 | 0.5466 | 0.5333 |
| 0.4721 | 5.3 | 3600 | 0.5495 | 0.5178 |
| 0.4439 | 5.89 | 4000 | 0.5667 | 0.5237 |
| 0.3965 | 6.48 | 4400 | 0.5865 | 0.5322 |
| 0.3876 | 7.07 | 4800 | 0.6099 | 0.5135 |
| 0.3407 | 7.66 | 5200 | 0.5891 | 0.5228 |
| 0.33 | 8.25 | 5600 | 0.6135 | 0.5072 |
| 0.3032 | 8.84 | 6000 | 0.6004 | 0.5028 |
| 0.2706 | 9.43 | 6400 | 0.6321 | 0.4991 |
| 0.2709 | 10.01 | 6800 | 0.6541 | 0.5051 |
| 0.2373 | 10.6 | 7200 | 0.6613 | 0.5119 |
| 0.2284 | 11.19 | 7600 | 0.6798 | 0.5086 |
| 0.212 | 11.78 | 8000 | 0.6509 | 0.4910 |
| 0.1983 | 12.37 | 8400 | 0.7018 | 0.5043 |
| 0.1947 | 12.96 | 8800 | 0.6826 | 0.4965 |
| 0.1717 | 13.55 | 9200 | 0.7056 | 0.4828 |
| 0.1741 | 14.14 | 9600 | 0.7544 | 0.5060 |
| 0.1626 | 14.73 | 10000 | 0.7331 | 0.4915 |
| 0.1529 | 15.32 | 10400 | 0.7518 | 0.4772 |
| 0.1504 | 15.91 | 10800 | 0.7362 | 0.4732 |
| 0.1401 | 16.49 | 11200 | 0.7179 | 0.4769 |
| 0.1335 | 17.08 | 11600 | 0.7716 | 0.4826 |
| 0.1185 | 17.67 | 12000 | 0.7465 | 0.4798 |
| 0.1182 | 18.26 | 12400 | 0.8105 | 0.4733 |
| 0.1135 | 18.85 | 12800 | 0.7693 | 0.4743 |
| 0.1098 | 19.44 | 13200 | 0.8362 | 0.4888 |
| 0.1023 | 20.03 | 13600 | 0.8427 | 0.4768 |
| 0.1003 | 20.62 | 14000 | 0.8079 | 0.4741 |
| 0.0936 | 21.21 | 14400 | 0.8551 | 0.4651 |
| 0.0875 | 21.8 | 14800 | 0.8462 | 0.4712 |
| 0.0843 | 22.39 | 15200 | 0.9177 | 0.4782 |
| 0.0846 | 22.97 | 15600 | 0.8618 | 0.4735 |
| 0.08 | 23.56 | 16000 | 0.9017 | 0.4687 |
| 0.0789 | 24.15 | 16400 | 0.9034 | 0.4659 |
| 0.0717 | 24.74 | 16800 | 0.9690 | 0.4734 |
| 0.0714 | 25.33 | 17200 | 0.9395 | 0.4677 |
| 0.0699 | 25.92 | 17600 | 0.9222 | 0.4608 |
| 0.0658 | 26.51 | 18000 | 0.9222 | 0.4621 |
| 0.0612 | 27.1 | 18400 | 0.9691 | 0.4586 |
| 0.0583 | 27.69 | 18800 | 0.9647 | 0.4581 |
| 0.0596 | 28.28 | 19200 | 0.9820 | 0.4614 |
| 0.056 | 28.87 | 19600 | 0.9795 | 0.4596 |
| 0.055 | 29.45 | 20000 | 0.9811 | 0.4608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
YarramsettiNaresh/CartPole-v1 | YarramsettiNaresh | "2023-07-22T04:53:56Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-22T04:53:47Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
FardinSaboori/bert-finetuned-squad | FardinSaboori | "2022-02-28T06:22:27Z" | 27 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
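A quick extractive question-answering sketch; the question/context pair below is only an example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="FardinSaboori/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```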
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
AndiB93/ppo-Huggy | AndiB93 | "2024-05-13T18:38:46Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-05-13T18:36:33Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AndiB93/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DmitrMakeev/one-shot-talking-face | DmitrMakeev | "2023-09-25T18:02:54Z" | 0 | 1 | null | [
"arxiv:2112.02749",
"region:us"
] | null | "2023-09-25T16:21:40Z" | # One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI 2022)
#### [Paper](https://arxiv.org/pdf/2112.02749.pdf) | [Demo](https://www.youtube.com/watch?v=HHj-XCXXePY)
#### Requirements
- Python >= 3.6, PyTorch >= 1.8, and ffmpeg
- Set up [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace)
- We use the OpenFace tools to extract the initial pose of the reference image
- Make sure you have installed this tool and set `OPENFACE_POSE_EXTRACTOR_PATH` in `config.py`. On Windows, for example, it should be the absolute path of `FeatureExtraction.exe`.
- Other requirements are listed in the 'requirements.txt'
#### Pretrained Checkpoint
Please download the pretrained checkpoint from [google-drive](https://drive.google.com/file/d/1mjFEozPR_2vMaVRMd9Agk_sU1VaiUYMl/view?usp=sharing) and unzip it to the directory (`/checkpoints`). Or manually modify the settings of `GENERATOR_CKPT` and `AUDIO2POSE_CKPT` in the `config.py`.
#### Extract phoneme
We employ the [CMU phoneset](https://github.com/cmusphinx/cmudict) to represent phonemes, the extra 'SIL' means silence. All the phonesets can be seen in '`phindex.json`'.
We have extracted the phonemes for the audio files in the `sample/audio` directory. For other audio files, you can extract the phonemes with other ASR tools and then map them to the CMU phoneset, or email [email protected] for help.
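For illustration, here is a minimal sketch of the lookup step from CMU phonemes to the integer ids in `phindex.json`; the exact JSON structure consumed by `test_script.py` should be checked against the provided samples.
```python
import json

# Load the CMU-phoneme -> integer-id table shipped with the repository.
with open("phindex.json", "r") as f:
    phindex = json.load(f)

# Example phoneme sequence from an ASR/alignment tool, already mapped to the CMU phoneset;
# 'SIL' marks silence, as described above.
phonemes = ["SIL", "HH", "AH", "L", "OW", "SIL"]
phoneme_ids = [phindex[p] for p in phonemes]
print(phoneme_ids)
```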
#### Generate Demo Results
```
python test_script.py --img_path xxx.jpg --audio_path xxx.wav --phoneme_path xxx.json --save_dir "YOUR_DIR"
```
Note that the input images must have the same height and width, and the face should be appropriately cropped as in `samples/imgs`. You can also preprocess your images with `image_preprocess.py`.
#### License and Citation
```
@InProceedings{wang2021one,
author = {Suzhen Wang and Lincheng Li and Yu Ding and Xin Yu},
title = {One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning},
booktitle = {AAAI 2022},
year = {2022},
}
```
#### Acknowledgement
This codebase is based on [First Order Motion Model](https://github.com/AliaksandrSiarohin/first-order-model) and [imaginaire](https://github.com/NVlabs/imaginaire), thanks for their contributions.
|
0xfaskety/Qwen-Qwen1.5-7B-1717809287 | 0xfaskety | "2024-06-08T01:20:08Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T01:14:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aidadev48/aidav7 | aidadev48 | "2024-11-22T18:47:45Z" | 126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-02T20:02:06Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** aidadev48
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
law-ai/InLegalTrans-En2Indic-1B | law-ai | "2025-01-19T15:26:16Z" | 147 | 0 | null | [
"safetensors",
"IndicTrans",
"InLegalTrans",
"Legal",
"NLP",
"translation",
"custom_code",
"en",
"bn",
"hi",
"mr",
"ta",
"te",
"ml",
"pa",
"gu",
"or",
"dataset:MILPaC",
"arxiv:2310.09765",
"base_model:ai4bharat/indictrans2-en-indic-1B",
"base_model:finetune:ai4bharat/indictrans2-en-indic-1B",
"license:mit",
"region:us"
] | translation | "2025-01-19T10:25:36Z" | ---
license: mit
language:
- en
- bn
- hi
- mr
- ta
- te
- ml
- pa
- gu
- or
base_model:
- ai4bharat/indictrans2-en-indic-1B
pipeline_tag: translation
metrics:
- bleu
- google_bleu
- chrf++
inference: false
datasets:
- MILPaC
tags:
- InLegalTrans
- Legal
- NLP
---
# InLegalTrans
This is the model card of the ***InLegalTrans-En2Indic-1B*** translation model, a fine-tuned version of the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model specifically tailored for translating Indian legal texts from English to Indian languages.
### Training Data
We use the [**MILPaC**](https://github.com/Law-AI/MILPaC) **(Multilingual Indian Legal Parallel Corpus)** corpus for fine-tuning. It is the first high-quality Indian legal parallel corpus, containing parallel aligned text units in English (EN) and nine Indian (IN) languages -- Bengali (BN), Hindi (HI), Marathi (MR), Tamil (TA), Telugu (TE), Malayalam (ML), Panjabi (PA), Gujarati (GU), and Oriya (OR). Please refer to the [paper](https://arxiv.org/abs/2310.09765) for more details about this corpus.
For fine-tuning, we randomly split MILPaC language-wise in an 80 (train) - 10 (validation) - 10 (test) ratio. We use the 80% train split (combining the 80% splits of all English-to-Indic language pairs) to fine-tune the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model and the 10% validation split (combining the 10% splits of all English-to-Indic language pairs) to select the best checkpoint and to prevent overfitting.
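A sketch of such a language-wise 80/10/10 split with the 🤗 `datasets` API (the record fields are illustrative; MILPaC itself is distributed through its GitHub repository):
```python
from datasets import Dataset

def split_80_10_10(pairs, seed=42):
    """Split one English-to-Indic parallel set into 80/10/10 train/validation/test."""
    ds = Dataset.from_list(pairs)  # each item e.g. {"en": "...", "indic": "..."}
    tmp = ds.train_test_split(test_size=0.2, seed=seed)               # 80 / 20
    holdout = tmp["test"].train_test_split(test_size=0.5, seed=seed)  # 10 / 10
    return tmp["train"], holdout["train"], holdout["test"]
```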
### Model Overview and Usage Instructions
This [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) model uses the same tokenizer as the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model and has the same architecture with ~1.12B parameters.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor # Install IndicTransToolkit from https://github.com/VarunGumma/IndicTransToolkit
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
src_lang, tgt_lang = "eng_Latn", "ben_Beng" # Use the BCP-47 language codes used by the FLORES-200 dataset
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indictrans2-en-indic-1B", trust_remote_code=True) # Use IndicTrans2 tokenizer to enable their custom tokenization script to be run
model = AutoModelForSeq2SeqLM.from_pretrained(
"law-ai/InLegalTrans-En2Indic-1B",
trust_remote_code=True,
attn_implementation="eager",
low_cpu_mem_usage=True,
).to(device)
ip = IndicProcessor(inference=True)
input_sentences = [
"(7) Any such allowance for the maintenance and expenses for proceeding shall be payable from the date of the order, or, if so ordered, from the date of the application for maintenance or expenses of proceeding, as the case may be.",
"(2) Where it appears to the Tribunal that, in consequence of any decision of a competent Civil Court, any order made under section 9 should be cancelled or varied, it shall cancel the order or, as the case may be, vary the same accordingly.",
]
batch = ip.preprocess_batch(input_sentences, src_lang=src_lang, tgt_lang=tgt_lang)
input_text_encoding = tokenizer(
batch,
max_length=256,
truncation=True,
padding="longest",
return_tensors="pt",
return_attention_mask=True,
).to(device)
generated_tokens = model.generate(
**input_text_encoding,
max_length=256,
do_sample=True,
num_beams=4,
num_return_sequences=1,
early_stopping=False,
use_cache=True,
)
with tokenizer.as_target_tokenizer():
generated_tokens = tokenizer.batch_decode(
generated_tokens.detach().cpu().tolist(),
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
translations = ip.postprocess_batch(generated_tokens, lang=tgt_lang)
for input_sentence, translation in zip(input_sentences, translations):
print(f"Sentence in {src_lang} language: {input_sentence}")
print(f"Translated Sentence in {tgt_lang} language: {translation}")
```
### Fine-tuning Results
The following table contains the performance results of the [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) model compared to the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model over the 10% test split of **MILPaC**. Performance is evaluated using the *BLEU*, *GLEU*, and *chrF++* metrics. For all English-to-Indic language pairs, [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) demonstrates a significant improvement over [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B), achieving consistently better scores across all evaluation metrics.
| EN-to-IN | Model | BLEU | GLEU | chrF++ |
|------------|---------------------|------|------|--------|
| EN-to-BN | *IndicTrans2* | 25.4 | 28.8 | 53.7 |
| | ***InLegalTrans*** | **45.8** | **47.6** | **70.9** |
| EN-to-HI | *IndicTrans2* | 41.0 | 42.5 | 59.9 |
| | ***InLegalTrans*** | **56.9** | **57.1** | **73.8** |
| EN-to-MR | *IndicTrans2* | 25.2 | 28.7 | 55.4 |
| | ***InLegalTrans*** | **44.4** | **46.0** | **68.9** |
| EN-to-TA | *IndicTrans2* | 32.8 | 35.3 | 62.3 |
| | ***InLegalTrans*** | **40.0** | **42.5** | **69.9** |
| EN-to-TE | *IndicTrans2* | 10.7 | 14.2 | 37.9 |
| | ***InLegalTrans*** | **31.3** | **31.6** | **58.5** |
| EN-to-ML | *IndicTrans2* | 21.9 | 25.8 | 52.9 |
| | ***InLegalTrans*** | **37.4** | **40.3** | **69.7** |
| EN-to-PA | *IndicTrans2* | 27.8 | 31.6 | 51.5 |
| | ***InLegalTrans*** | **44.3** | **45.6** | **65.5** |
| EN-to-GU | *IndicTrans2* | 27.5 | 31.1 | 55.7 |
| | ***InLegalTrans*** | **42.8** | **45.2** | **68.8** |
| EN-to-OR | *IndicTrans2* | 06.6 | 12.6 | 37.1 |
| | ***InLegalTrans*** | **14.2** | **19.9** | **47.5** |
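For reference, BLEU and chrF++ scores of this kind can be computed with the `sacrebleu` package; this is only a sketch, not the exact evaluation script behind the table (GLEU is available separately, e.g. through NLTK):
```python
import sacrebleu

hyps = ["..."]    # placeholder: system translations, one string per sentence
refs = [["..."]]  # placeholder: one reference stream, aligned with `hyps`

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs, word_order=2)  # word_order=2 gives chrF++
print(f"BLEU: {bleu.score:.1f}  chrF++: {chrf.score:.1f}")
```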
### Citation
If you use this [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) translation model or the [**MILPaC**](https://github.com/Law-AI/MILPaC) corpus, please cite the following paper:
```
@article{mahapatra2024milpacnovelbenchmarkevaluating,
title = {MILPaC: A Novel Benchmark for Evaluating Translation of Legal Text to Indian Languages},
author = {Sayan Mahapatra and Debtanu Datta and Shubham Soni and Adrijit Goswami and Saptarshi Ghosh},
year = {2024},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
publisher = {Association for Computing Machinery},
}
```
### About Us
We are a group of Natural Language Processing (NLP) researchers from the *Indian Institute of Technology (IIT) Kharagpur*. Our research interests are primarily ML, DL, and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario. Our current and past projects include:
- Legal Statute Identification
- Semantic segmentation of legal documents
- Monolingual (e.g., English-to-English) and Cross-lingual (e.g., English-to-Hindi) Summarization of legal documents
- Translation in the Indian legal domain
- Court Judgment Prediction
- Legal Document Matching
Explore our publicly available codes and datasets at: [Law and AI, IIT Kharagpur](https://github.com/Law-AI). |
xaviviro/llama-2-7b-chat-catala-GGUF | xaviviro | "2023-12-27T00:01:10Z" | 86 | 0 | null | [
"gguf",
"ca",
"en",
"dataset:xaviviro/oasst1_ca_threads",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:quantized:NousResearch/Llama-2-7b-chat-hf",
"endpoints_compatible",
"region:us"
] | null | "2023-12-26T23:14:34Z" | ---
base_model: NousResearch/Llama-2-7b-chat-hf
datasets:
- xaviviro/oasst1_ca_threads
language:
- ca
- en
model_type: llama
prompt_template: '<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> {instruction} [/INST]'
---
# llama-2-7b-chat-catala
## Prompt template
```
<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> {instruction} [/INST]
```
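A small sketch of filling this template in Python before passing the prompt to whichever GGUF runtime you use (the example instruction is illustrative):
```python
# Fill the prompt template above with a user instruction (plain Python string formatting).
template = (
    "<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. "
    "<</SYS>> {instruction} [/INST]"
)
prompt = template.format(instruction="Explica'm què és un model de llenguatge.")
print(prompt)
```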
|
tuanna08go/fdbf8c81-46bd-4483-a2a3-0ff867fd3677 | tuanna08go | "2025-01-17T16:20:53Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | "2025-01-17T16:19:35Z" | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fdbf8c81-46bd-4483-a2a3-0ff867fd3677
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0aab7601c3799871_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0aab7601c3799871_train_data.json
type:
field_input: intent
field_instruction: conversations
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/fdbf8c81-46bd-4483-a2a3-0ff867fd3677
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/0aab7601c3799871_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f468468f-a46d-4b3e-a1b7-687172b66217
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f468468f-a46d-4b3e-a1b7-687172b66217
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fdbf8c81-46bd-4483-a2a3-0ff867fd3677
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 10.3790 |
| 10.3799 | 0.0093 | 10 | 10.3789 |
| 10.3789 | 0.0186 | 20 | 10.3786 |
| 10.3782 | 0.0279 | 30 | 10.3784 |
| 10.3788 | 0.0372 | 40 | 10.3783 |
| 10.3793 | 0.0464 | 50 | 10.3783 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vj1148/lora-peft-holding-unsup | vj1148 | "2023-08-19T21:22:21Z" | 4 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-19T21:22:19Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
bbytxt/86a194fa-95d6-478a-a704-b92be78a4893 | bbytxt | "2025-01-13T14:55:45Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-1.1-2b-it",
"base_model:adapter:unsloth/gemma-1.1-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T14:48:40Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-1.1-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 86a194fa-95d6-478a-a704-b92be78a4893
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-1.1-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e8c873decc45fe0a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e8c873decc45fe0a_train_data.json
type:
field_input: confidence
field_instruction: report
field_output: statement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: bbytxt/86a194fa-95d6-478a-a704-b92be78a4893
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/e8c873decc45fe0a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78753386-cf8e-4934-81d6-6f6b031420c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78753386-cf8e-4934-81d6-6f6b031420c7
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 86a194fa-95d6-478a-a704-b92be78a4893
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 5.5313 |
| 3.4425 | 0.1053 | 100 | 3.1400 |
| 3.4472 | 0.2107 | 200 | 2.9694 |
| 3.1706 | 0.3160 | 300 | 2.8983 |
| 3.3055 | 0.4214 | 400 | 2.8869 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/anah-v2-GGUF | mradermacher | "2025-01-12T23:00:06Z" | 264 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:opencompass/anah-v2",
"base_model:quantized:opencompass/anah-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-12T21:16:24Z" | ---
base_model: opencompass/anah-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/opencompass/anah-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
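For a quick check from Python, a minimal sketch using `llama-cpp-python` (assumes one of the quant files above has already been downloaded locally; the path and context size are placeholders):

```python
from llama_cpp import Llama

# Load a locally downloaded quant of anah-v2 (file path is an assumption)
llm = Llama(model_path="anah-v2.Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly explain what hallucination annotation means.", max_tokens=128)
print(out["choices"][0]["text"])
```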
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/anah-v2-GGUF/resolve/main/anah-v2.f16.gguf) | f16 | 15.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VERSIL91/d15dd643-fa80-4f2b-98a4-9829a8cf3b30 | VERSIL91 | "2025-01-11T01:08:13Z" | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1",
"license:mit",
"region:us"
] | null | "2025-01-11T00:28:46Z" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d15dd643-fa80-4f2b-98a4-9829a8cf3b30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7a2bb50e267140ef_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a2bb50e267140ef_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/d15dd643-fa80-4f2b-98a4-9829a8cf3b30
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/7a2bb50e267140ef_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d15dd643-fa80-4f2b-98a4-9829a8cf3b30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d15dd643-fa80-4f2b-98a4-9829a8cf3b30
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d15dd643-fa80-4f2b-98a4-9829a8cf3b30
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0015 | 10 | nan |
| 0.0 | 0.0023 | 15 | nan |
| 0.0 | 0.0030 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/ce1a2005-ee52-45ee-8f32-03027c38b14f | mrferr3t | "2025-02-05T06:47:47Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T06:24:34Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ce1a2005-ee52-45ee-8f32-03027c38b14f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 30467321b0218499_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/30467321b0218499_train_data.json
type:
field_input: Japanese
field_instruction: ENName
field_output: English
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/ce1a2005-ee52-45ee-8f32-03027c38b14f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/30467321b0218499_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: f01f50dd-806e-4a8f-9c25-300f4646bd3d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f01f50dd-806e-4a8f-9c25-300f4646bd3d
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ce1a2005-ee52-45ee-8f32-03027c38b14f
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1299
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0010 | 1 | 3.1418 |
| No log | 0.0385 | 40 | 3.0617 |
| No log | 0.0770 | 80 | 2.3578 |
| 2.7405 | 0.1154 | 120 | 1.9472 |
| 2.7405 | 0.1539 | 160 | 1.8134 |
| 1.8729 | 0.1924 | 200 | 1.7430 |
| 1.8729 | 0.2309 | 240 | 1.6926 |
| 1.8729 | 0.2694 | 280 | 1.6552 |
| 1.6979 | 0.3078 | 320 | 1.6294 |
| 1.6979 | 0.3463 | 360 | 1.5977 |
| 1.6043 | 0.3848 | 400 | 1.5806 |
| 1.6043 | 0.4233 | 440 | 1.5645 |
| 1.6043 | 0.4618 | 480 | 1.5476 |
| 1.5427 | 0.5002 | 520 | 1.5395 |
| 1.5427 | 0.5387 | 560 | 1.5332 |
| 1.5321 | 0.5772 | 600 | 1.5244 |
| 1.5321 | 0.6157 | 640 | 1.5119 |
| 1.5321 | 0.6542 | 680 | 1.5102 |
| 1.5327 | 0.6926 | 720 | 1.5054 |
| 1.5327 | 0.7311 | 760 | 1.5002 |
| 1.5065 | 0.7696 | 800 | 1.5024 |
| 1.5065 | 0.8081 | 840 | 1.4940 |
| 1.5065 | 0.8466 | 880 | 1.4840 |
| 1.5024 | 0.8850 | 920 | 1.4788 |
| 1.5024 | 0.9235 | 960 | 1.4758 |
| 1.4772 | 0.9620 | 1000 | 1.4725 |
| 1.4772 | 1.0005 | 1040 | 1.4676 |
| 1.4772 | 1.0390 | 1080 | 1.4773 |
| 1.4016 | 1.0774 | 1120 | 1.4743 |
| 1.4016 | 1.1159 | 1160 | 1.4814 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ReadyArt/Thoughtless-Fallen-Abomination-70B-R1-v4.1 | ReadyArt | "2025-03-24T03:10:54Z" | 0 | 1 | null | [
"safetensors",
"llama",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"base_model:ReadyArt/Fallen-Safeword-70B-R1-v4.1",
"base_model:merge:ReadyArt/Fallen-Safeword-70B-R1-v4.1",
"license:llama3.3",
"region:us"
] | null | "2025-03-24T02:33:58Z" | ---
base_model: ReadyArt/Fallen-Safeword-70B-R1-v4.1
base_model_relation: merge
language:
- en
license: llama3.3
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #001a1a 0%, #000a10 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
max-width: 800px;
margin: 0 auto;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
}
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
}
.header {
text-align: center;
margin-bottom: 30px;
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transform: scale(1.02);
}
.section {
color: #00ffcc;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
}
@media (prefers-color-scheme: light) {
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
}
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
}
.section > p > strong {
color: #00ffcc !important;
}
.section:has(.quant-links) p,
.section:has(.quant-links) h3,
.section:has(.quant-links) a {
color: #00ffcc !important;
}
.quant-links h3 {
color: #00ffcc !important;
margin-top: 0;
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: transform 0.3s ease;
}
@media (prefers-color-scheme: light) {
.link-card {
background: rgba(150, 230, 255, 0.95);
}
}
.link-card:hover {
transform: translateY(-3px);
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
}
.progress-bar {
height: 8px;
background: rgba(0, 255, 255, 0.1);
border-radius: 4px;
overflow: hidden;
margin: 10px 0;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #00ffff 0%, #00ffcc 100%);
width: 70%;
}
@media (prefers-color-scheme: light) {
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section:has(.quant-links) p,
.section:has(.quant-links) h3,
.section:has(.quant-links) a,
.section > p > strong {
color: #008080 !important;
}
.quant-links h3 {
color: #008080 !important;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">ReadyArt/Thoughtless-Fallen-Abomination-70B-R1-v4.1 </h1>
<div class="subtitle">The Abomination Protocol: Now With 30% More Depravity</div>
</div>
<div class="waifu-container">
<img src="./waifu2.webp" class="waifu-img" alt="Model Architecture Animation">
</div>
<div class="section">
<h2 class="section-title">📜 Manifesto</h2>
<p>IMPORTANT NOTE: This version of the model has been modified in an attempt to remove reasoning.</p>
<p>ReadyArt/Thoughtless-Fallen-Abomination-70B-R1-v4.1 benefits from the coherence and well-rounded roleplay experience of TheDrummer/Fallen-Llama-3.3-R1-70B-v1. We've:</p>
<ul>
<li>🔁 Re-integrated your favorite V1.2 scenarios (now with better kink distribution)</li>
<li>🧪 Direct-injected the Abomination dataset into the model's neural pathways</li>
<li>⚖️ Achieved perfect balance between "oh my" and "oh <em>my</em>"</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specs</h2>
<div class="progress-bar">
<div class="progress-fill"></div>
</div>
<p><strong>Usage:</strong></p>
<ul>
<li>Deepseek 2.5 chat template (<code>&lt;|User|&gt;</code> &amp; <code>&lt;|Assistant|&gt;</code>)</li>
<li>Force multi-turn thinking by prefilling <code>&lt;think&gt;\n\n</code> (newlines usually needed)</li>
</ul>
<div class="quant-links">
<div class="link-card">
<h3>EXL2 Collection</h3>
<a href="https://huggingface.co/collections/ReadyArt/fallen-abomination-70b-v41-exl2-67dbf600d707cf9c093f7b8c">Quantum Entangled Bits →</a>
</div>
<div class="link-card">
<h3>GGUF Collection</h3>
<a href="https://huggingface.co/collections/ReadyArt/fallen-abomination-70b-v41-gguf-67dbf5f326c422f0fd36bf0d">Giggle-Enabled Units →</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model will:</p>
<ul>
<li>Generate content that requires industrial-grade brain bleach </li>
<li>Void all warranties on your soul </li>
<li>Make you question why humanity ever invented electricity</li>
</ul>
</div>
</div>
<div class="section">
<h2 class="section-title">📜 License Agreement</h2>
<p>By using this model, you agree:</p>
<ul>
<li>That your search history is now a federal case</li>
<li>Pay for the exorcist of anyone who reads the logs</li>
<li>To pretend this is "for science" while crying in the shower</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧠 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Chief Corruption Officer) </li>
<li>The voices in your head (Gaslighting is something you made up)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕️ Drummer made this possible</h2>
<ul>
<li>Support Drummer <a href="https://ko-fi.com/thedrummer">Kofi</a></li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🔀 Merge Details</h2>
<pre>
merge_method: dare_ties
base_model: ReadyArt/Fallen-Safeword-70B-v4.1
models:
- model: ReadyArt/Fallen-Safeword-70B-v4.1
parameters:
weight: 0.5
density: 0.35
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 0.5
density: 0.35
parameters:
normalize: true
int8_mask: true
temperature: 2.5
tokenizer_source: union
dtype: bfloat16
chat_template: auto
</pre>
</div>
</div>
|
mishaalkandapath/drl | mishaalkandapath | "2023-09-07T16:34:09Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-07T16:33:54Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.84 +/- 22.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub("mishaalkandapath/drl", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
cutelemonlili/Qwen2.5-0.5B-Instruct_MATH_training_response_Qwen2.5_32B | cutelemonlili | "2024-12-29T14:01:23Z" | 176 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-29T14:00:59Z" | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: MATH_training_response_Qwen2.5_32B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MATH_training_response_Qwen2.5_32B
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the MATH_training_response_Qwen2.5_32B dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.211 | 0.3559 | 200 | 0.2074 |
| 0.2149 | 0.7117 | 400 | 0.1970 |
| 0.0729 | 1.0676 | 600 | 0.2037 |
| 0.0658 | 1.4235 | 800 | 0.1954 |
| 0.0675 | 1.7794 | 1000 | 0.1950 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
emifjo/fin_sentiment | emifjo | "2023-03-01T14:15:49Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-01T09:58:23Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fin_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0101
- Accuracy: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.527 | 1.0 | 515 | 0.4239 | 0.8514 |
| 0.3018 | 2.0 | 1030 | 0.4353 | 0.8707 |
| 0.1509 | 3.0 | 1545 | 0.7413 | 0.8446 |
| 0.1069 | 4.0 | 2060 | 0.7788 | 0.8611 |
| 0.0585 | 5.0 | 2575 | 0.8656 | 0.8624 |
| 0.0397 | 6.0 | 3090 | 0.8394 | 0.8666 |
| 0.0233 | 7.0 | 3605 | 0.9554 | 0.8624 |
| 0.0171 | 8.0 | 4120 | 0.9459 | 0.8583 |
| 0.0128 | 9.0 | 4635 | 0.9991 | 0.8597 |
| 0.0033 | 10.0 | 5150 | 1.0101 | 0.8624 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
exala/db_fe2_4.1 | exala | "2024-12-08T01:50:51Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-08T01:50:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sleepdeprived3/Baptist-Christian-Bible-Expert-v1.1-12b_EXL2_1.5bpw_H8 | sleepdeprived3 | "2025-03-01T22:18:53Z" | 33 | 0 | null | [
"safetensors",
"mistral",
"Baptist",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"SBC",
"Protestant",
"text-generation",
"conversational",
"license:apache-2.0",
"exl2",
"region:us"
] | text-generation | "2025-02-17T08:02:15Z" | ---
license: apache-2.0
tags:
- Baptist
- Christian
- Bible
- Theology
- Jesus
- Seminary
- SBC
- Protestant
pipeline_tag: text-generation
---
# Southern Baptist Bible Expert
A specialized language model fine-tuned for Southern Baptist theology and ministry applications. Based on `mistralai/Mistral-Nemo-Instruct-2407` with **128k token context window** for comprehensive biblical analysis.
## Features
- 🕊️ Answers theological questions from a Southern Baptist perspective (Baptist Faith & Message 2000)
- ✝️ Explains Scripture using historical-grammatical interpretation with emphasis on practical application
- 🎓 Assists with sermon preparation, Sunday School lessons, and evangelism training
- 💬 Roleplays as a Baptist pastor for counseling and discipleship scenarios
- 📖 Specializes in Baptist distinctives: believer's baptism, local church autonomy, priesthood of all believers
## FULL SETTINGS and an optional Pastor character card:
[Link to Full Settings and Optional Character](https://huggingface.co/sleepdeprived3/Pastor-Luke-V3)
**Chat Template:** Mistral V3 Tekken
**Recommended deterministic sampler for maximum Biblical accuracy:**
```python
{
"temperature": 0,
"top_k": 1,
"dry_multiplier": 0.01,
}
```
**Example Prompts:**
```
[INST] Prepare an expository sermon outline on John 3:16 emphasizing evangelism [/INST]
[INST] How should a church handle church discipline? [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Southern-Baptist-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/baptist-christian-bible-expert-v11-12b-exl2-67aceef3626939781836263d)
- **GGUF Collection**:
[Southern-Baptist-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/baptist-christian-bible-expert-v11-12b-gguf-67acef038093155c02b67b82)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on SBC doctrinal documents, sermon archives, and Baptist theological works
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Maintain fidelity to the Baptist Faith & Message 2000
- Promote biblical inerrancy and sufficiency
- Support local church ministry rather than replace pastoral counsel
*For the Glory of God and the Advancement of His Kingdom* |
friasjoey/mosaic-bert | friasjoey | "2025-02-27T19:12:44Z" | 0 | 0 | null | [
"en",
"dataset:c4",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2002.05202",
"arxiv:2208.08124",
"arxiv:1612.08083",
"arxiv:2102.11972",
"arxiv:1907.11692",
"arxiv:2202.08005",
"license:apache-2.0",
"region:us"
] | null | "2025-02-27T19:12:32Z" | ---
license: apache-2.0
datasets:
- c4
language:
- en
inference: false
---
# MosaicBERT-Base model
MosaicBERT-Base is a custom BERT architecture and training recipe optimized for fast pretraining.
MosaicBERT trains faster and achieves higher pretraining and finetuning accuracy when benchmarked against
Hugging Face's [bert-base-uncased](https://huggingface.co/bert-base-uncased).
This study motivated many of the architecture choices around MosaicML's [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) and [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) models.
## Model Date
March 2023
## Documentation
* [Project Page (mosaicbert.github.io)](https://mosaicbert.github.io)
* [Github (mosaicml/examples/tree/main/examples/benchmarks/bert)](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert)
* [Paper (NeurIPS 2023)](https://openreview.net/forum?id=5zipcfLC2Z)
* Colab Tutorials:
* [MosaicBERT Tutorial Part 1: Load Pretrained Weights and Experiment with Sequence Length Extrapolation Using ALiBi](https://colab.research.google.com/drive/1r0A3QEbu4Nzs2Jl6LaiNoW5EumIVqrGc?usp=sharing)
* [Blog Post (March 2023)](https://www.mosaicml.com/blog/mosaicbert)
## Community Adoption
* [DNABERT-2-117M](https://huggingface.co/zhihan1996/DNABERT-2-117M) for genome classification
## How to use
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, BertTokenizer, pipeline
from transformers import BertTokenizer, BertConfig
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # MosaicBERT uses the standard BERT tokenizer
config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base') # the config needs to be passed in
mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base',config=config,trust_remote_code=True)
# To use this model directly for masked language modeling
mosaicbert_classifier = pipeline('fill-mask', model=mosaicbert, tokenizer=tokenizer,device="cpu")
mosaicbert_classifier("I [MASK] to the store yesterday.")
```
Note that the tokenizer for this model is simply the Hugging Face `bert-base-uncased` tokenizer.
In order to take advantage of ALiBi by extrapolating to longer sequence lengths, simply change the `alibi_starting_size` flag in the
config file and reload the model.
```python
config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base')
config.alibi_starting_size = 1024 # maximum sequence length updated to 1024 from config default of 512
mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base',config=config,trust_remote_code=True)
```
This simply presets the non-learned linear bias matrix in every attention block to 1024 tokens (note that this particular model was trained with a sequence length of 128 tokens).
**To continue MLM pretraining**, follow the [MLM pre-training section of the mosaicml/examples/benchmarks/bert repo](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert#pre-training).
**To fine-tune this model for classification**, follow the [Single-task fine-tuning section of the mosaicml/examples/benchmarks/bert repo](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert#fine-tuning).
### [Update 1/2/2024] Triton Flash Attention with ALiBi
Note that by default, triton Flash Attention is **not** enabled or required. In order to enable our custom implementation of triton Flash Attention with ALiBi from March 2023,
set `attention_probs_dropout_prob: 0.0`. We are currently working on supporting Flash Attention 2 (see [PR here](https://github.com/mosaicml/examples/pull/440)) and replacing the custom triton implementation.
### Remote Code
This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we train using [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), which is not part of the `transformers` library and depends on [Triton](https://github.com/openai/triton) and some custom PyTorch code. Since this involves executing arbitrary code, you should consider passing a git `revision` argument that specifies the exact commit of the code, for example:
```python
mosaicbert = AutoModelForMaskedLM.from_pretrained(
'mosaicml/mosaic-bert-base',
trust_remote_code=True,
revision='24512df',
)
```
However, if there are updates to this model or code and you specify a revision, you will need to manually check for them and update the commit hash accordingly.
## Model description
In order to build MosaicBERT, we adopted architectural choices from the recent transformer literature.
These include [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi (Press et al. 2021)](https://arxiv.org/abs/2108.12409),
and [Gated Linear Units (Shazeer 2020)](https://arxiv.org/abs/2002.05202). In addition, we remove padding inside the transformer block,
and apply LayerNorm with low precision.
### Modifications to the Attention Mechanism
1. **FlashAttention**: Attention layers are core components of the transformer architecture. The recently proposed FlashAttention layer
reduces the number of read/write operations between the GPU HBM (high bandwidth memory, i.e. long-term memory) and the GPU SRAM
(i.e. short-term memory) [[Dao et al. 2022]](https://arxiv.org/pdf/2205.14135.pdf). We used the FlashAttention module built by
[hazy research](https://github.com/HazyResearch/flash-attention) with [OpenAI’s triton library](https://github.com/openai/triton).
2. **Attention with Linear Biases (ALiBi)**: In most BERT models, the positions of tokens in a sequence are encoded with a position embedding layer;
this embedding allows subsequent layers to keep track of the order of tokens in a sequence. ALiBi eliminates position embeddings and
instead conveys this information using a bias matrix in the attention operation. It modifies the attention mechanism such that nearby
tokens strongly attend to one another [[Press et al. 2021]](https://arxiv.org/abs/2108.12409). In addition to improving the performance of the final model, ALiBi helps the
model to handle sequences longer than it saw during training. Details on our ALiBi implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/src/bert_layers.py#L425); a conceptual sketch of the bias matrix follows this list.
3. **Unpadding**: Standard NLP practice is to combine text sequences of different lengths into a batch, and pad the sequences with empty
tokens so that all sequence lengths are the same. During training, however, this can lead to many superfluous operations on those
padding tokens. In MosaicBERT, we take a different approach: we concatenate all the examples in a minibatch into a single sequence
of batch size 1. Results from NVIDIA and others have shown that this approach leads to speed improvements during training, since
operations are not performed on padding tokens (see for example [Zeng et al. 2022](https://arxiv.org/pdf/2208.08124.pdf)).
Details on our “unpadding” implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/src/bert_padding.py).
4. **Low Precision LayerNorm**: this small tweak forces LayerNorm modules to run in float16 or bfloat16 precision instead of float32, improving utilization.
Our implementation can be found [in the mosaicml/examples repo here](https://docs.mosaicml.com/projects/composer/en/stable/method_cards/low_precision_layernorm.html).
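To make the ALiBi description in point 2 concrete, here is a rough sketch of the non-learned bias that gets added to the attention scores. It uses the simple power-of-two slope schedule from the ALiBi paper and is illustrative only, not the repo's implementation:

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric head slopes: 2^(-8/num_heads), 2^(-16/num_heads), ...
    # (the paper uses a slightly different schedule for non-power-of-two head counts)
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # Symmetric |i - j| token-distance matrix (MosaicBERT attends bidirectionally)
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()
    # Shape (num_heads, seq_len, seq_len); added to attention scores before the softmax
    return -slopes[:, None, None] * distances[None, :, :]
```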
### Modifications to the Feedforward Layers
5. **Gated Linear Units (GLU)**: We used Gated Linear Units for the feedforward sublayer of a transformer. GLUs were first proposed in 2016 [[Dauphin et al. 2016]](https://arxiv.org/abs/1612.08083),
and incorporate an extra learnable matrix that “gates” the outputs of the feedforward layer. More recent work has shown that
GLUs can improve performance quality in transformers [[Shazeer, 2020](https://arxiv.org/abs/2002.05202), [Narang et al. 2021](https://arxiv.org/pdf/2102.11972.pdf)]. We used the GeLU (Gaussian-error Linear Unit)
activation function with GLU, which is sometimes referred to as GeGLU. The GeLU activation function is a smooth, fully differentiable
approximation to ReLU; we found that this led to a nominal improvement over ReLU. More details on our implementation of GLU can be found here.
The extra gating matrix in a GLU model potentially adds additional parameters to a model; we chose to augment our BERT-Base model with
additional parameters due to GLU modules as it leads to a Pareto improvement across all timescales (which is not true of all larger
models such as BERT-Large). While BERT-Base has 110 million parameters, MosaicBERT-Base has 137 million parameters. Note that
MosaicBERT-Base trains faster than BERT-Base despite having more parameters.
## Training data
MosaicBERT is pretrained using a standard Masked Language Modeling (MLM) objective: the model is given a sequence of
text with some tokens hidden, and it has to predict these masked tokens. MosaicBERT is trained on
the English [“Colossal, Cleaned, Common Crawl” C4 dataset](https://github.com/allenai/allennlp/discussions/5056), which contains roughly 365 million curated text documents scraped
from the internet (equivalent to 156 billion tokens). We used this more modern dataset in place of traditional BERT pretraining
corpora like English Wikipedia and BooksCorpus.
## Pretraining Optimizations
Many of these pretraining optimizations below were informed by our [BERT results for the MLPerf v2.1 speed benchmark](https://www.mosaicml.com/blog/mlperf-nlp-nov2022).
1. **MosaicML Streaming Dataset**: As part of our efficiency pipeline, we converted the C4 dataset to [MosaicML’s StreamingDataset format](https://www.mosaicml.com/blog/mosaicml-streamingdataset) and used this
for both MosaicBERT-Base and the baseline BERT-Base. For all BERT-Base models, we chose the training duration to be 286,720,000 samples of sequence length 128; this covers 78.6% of C4.
2. **Higher Masking Ratio for the Masked Language Modeling Objective**: We used the standard Masked Language Modeling (MLM) pretraining objective.
While the original BERT paper also included a Next Sentence Prediction (NSP) task in the pretraining objective,
subsequent papers have shown this to be unnecessary [Liu et al. 2019](https://arxiv.org/abs/1907.11692).
However, we found that a 30% masking ratio led to slight accuracy improvements in both pretraining MLM and downstream GLUE performance.
We therefore included this simple change as part of our MosaicBERT training recipe. Recent studies have also found that this simple
change can lead to downstream improvements [Wettig et al. 2022](https://arxiv.org/abs/2202.08005); a short illustrative sketch of this setting appears after this list.
3. **Bfloat16 Precision**: We use [bf16 (bfloat16) mixed precision training](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for all the models, where a matrix multiplication layer uses bf16
for the multiplication and 32-bit IEEE floating point for gradient accumulation. We found this to be more stable than using float16 mixed precision.
4. **Vocab Size as a Multiple of 64**: We increased the vocab size to be a multiple of 8 as well as 64 (i.e. from 30,522 to 30,528).
This small constraint is something of [a magic trick among ML practitioners](https://twitter.com/karpathy/status/1621578354024677377), and leads to a throughput speedup.
5. **Hyperparameters**: For all models, we use Decoupled AdamW with Beta_1=0.9 and Beta_2=0.98, and a weight decay value of 1.0e-5.
The learning rate schedule begins with a warmup to a maximum learning rate of 5.0e-4 followed by a linear decay to zero.
Warmup lasted for 6% of the full training duration. Global batch size was set to 4096, and microbatch size was 128; since global batch size was 4096, full pretraining consisted of 70,000 batches.
We set the maximum sequence length during pretraining to 128, and we used the standard embedding dimension of 768.
For MosaicBERT, we applied 0.1 dropout to the feedforward layers but no dropout to the FlashAttention module, as this was not possible with the OpenAI triton implementation.
Full configuration details for pretraining MosaicBERT-Base can be found in the configuration yamls [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/blob/main/examples/benchmarks/bert/yamls/main/mosaic-bert-base-uncased.yaml).
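As a rough illustration of the higher masking ratio described in point 2 above, the standard Hugging Face data collator can be configured the same way (a sketch, not the repo's streaming training pipeline):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Mask 30% of tokens instead of the conventional 15%
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.30)

batch = collator([tokenizer("MosaicBERT pretrains with a higher masking ratio.")])
print(batch["input_ids"])  # some tokens replaced by [MASK]
print(batch["labels"])     # -100 everywhere except the masked positions
```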
## Evaluation results
When fine-tuned on downstream tasks (following the [finetuning details here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/yamls/finetuning/glue/mosaic-bert-base-uncased.yaml)), the MosaicBERT model achieves the following GLUE results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 0.8495 | 0.9029 | 0.9074| 0.9246 | 0.5511 | 0.8927 | 0.9003 | 0.8136 | 0.8428 |
Note that this is averaged over n=5 pretraining seeds.
## Collection of MosaicBERT-Base models trained using ALiBi on different sequence lengths
ALiBi allows a model trained with a sequence length n to easily extrapolate to sequence lengths >2n during finetuning. For more details, see [Train Short, Test Long: Attention with Linear
Biases Enables Input Length Extrapolation (Press et al. 2022)](https://arxiv.org/abs/2108.12409)
This model is part of the **family of MosaicBERT-Base models** trained using ALiBi on different sequence lengths:
* mosaic-bert-base (trained on a sequence length of 128 tokens)
* [mosaic-bert-base-seqlen-256](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-256)
* [mosaic-bert-base-seqlen-512](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-512)
* [mosaic-bert-base-seqlen-1024](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-1024)
* [mosaic-bert-base-seqlen-2048](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-2048)
The primary use case of these models is for research on efficient pretraining and finetuning for long context embeddings.
## Intended uses & limitations
This model is intended to be finetuned on downstream tasks.
## Citation
Please cite this model using the following format:
```
@article{portes2023MosaicBERT,
title={MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining},
author={Jacob Portes, Alexander R Trott, Sam Havens, Daniel King, Abhinav Venigalla,
Moin Nadeem, Nikhil Sardana, Daya Khudia, Jonathan Frankle},
journal={NeuRIPS https://openreview.net/pdf?id=5zipcfLC2Z},
year={2023},
}
``` |
Sookeyy/flan-t5-base-code_search_net | Sookeyy | "2024-01-27T18:41:27Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:code_search_net",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-27T17:11:39Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
datasets:
- code_search_net
model-index:
- name: flan-t5-base-code_search_net
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-code_search_net
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the code_search_net dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
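In the absence of documented usage, a minimal, untested sketch (the exact input format expected by this fine-tune is undocumented; a code-to-description style prompt is assumed):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Sookeyy/flan-t5-base-code_search_net")
print(generator("def add(a, b):\n    return a + b", max_new_tokens=48))
```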
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
pulkitnitk/dog | pulkitnitk | "2023-04-25T16:50:46Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-04-25T11:38:48Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - pulkitnitk/dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
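A minimal inference sketch with 🤗 Diffusers (assumes a recent diffusers release that provides `load_lora_weights`; the prompt and step count are arbitrary):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adapter weights from this repository
pipe.load_lora_weights("pulkitnitk/dog")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```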




|
akaigraham/cat-classifier | akaigraham | "2023-02-28T23:40:02Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2023-02-28T23:27:50Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
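Until the card is filled in, a minimal, untested loading sketch using the Hub's fastai integration (the image path is a placeholder):

```python
from huggingface_hub import from_pretrained_fastai

# Pull the exported fastai Learner from this repo
learn = from_pretrained_fastai("akaigraham/cat-classifier")

pred_class, pred_idx, probs = learn.predict("some_image.jpg")
print(pred_class, probs[pred_idx])
```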
## Training and evaluation data
More information needed
|
iubeda/ppo-Pyramids | iubeda | "2023-02-18T00:04:04Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-02-18T00:03:58Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: iubeda/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LoneStriker/Nous-Hermes-2-Mixtral-8x7B-DPO-3.0bpw-h6-exl2 | LoneStriker | "2024-01-16T11:46:12Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-16T11:39:00Z" | ---
base_model: mistralai/Mixtral-8x7B-v0.1
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
license: apache-2.0
language:
- en
---
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2. We have also released an SFT-only version so people can find which works best for them; it is available here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains over Mixtral Instruct v0.1 on many tasks and, on average, beat the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role of each turn.
This format enables OpenAI endpoint compatibility, and anyone familiar with the ChatGPT API will recognize it, as it is the same format used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
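For example (a small illustrative sketch, not from the original card, reusing the `messages` list above):
```python
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
model.generate(gen_input.to(model.device), max_new_tokens=256)
```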
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: even in 4-bit, it will require more than 24 GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
mesa44/Reinforce-PixelCopter | mesa44 | "2024-01-16T10:40:45Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-16T10:39:39Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 54.00 +/- 35.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Wheatley961/Raw_3_no_1_Test_2_new.model | Wheatley961 | "2022-11-16T10:43:52Z" | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-11-16T10:43:28Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 80,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
J1k/Jennie_LoRA | J1k | "2024-02-26T02:40:28Z" | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-02-23T09:25:59Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - J1k/Jennie_LoRA
<Gallery />
## Model description
These are J1k/Jennie_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use the prompt "a photo of TOK dog" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/J1k/Jennie_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
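A hedged sketch (not from the training run) of one way to run these weights:
```python
# Sketch only: assumes a diffusers release with SDXL LoRA support.
# The card notes madebyollin/sdxl-vae-fp16-fix was used during training; swapping in that VAE is optional.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("J1k/Jennie_LoRA")
image = pipe("a photo of TOK dog", num_inference_steps=30).images[0]
image.save("tok_dog.png")
```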
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
kromcomp/L3.1-test-r128-LoRA | kromcomp | "2025-02-26T05:12:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mergekit",
"base_model:Delta-Vector/Control-Nanuq-8B",
"base_model:adapter:Delta-Vector/Control-Nanuq-8B",
"region:us"
] | null | "2025-02-26T05:11:33Z" | ---
base_model:
- arcee-ai/Llama-3.1-SuperNova-Lite
- Delta-Vector/Control-Nanuq-8B
library_name: peft
tags:
- mergekit
- peft
---
# test-r128-LoRA
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from [Delta-Vector/Control-Nanuq-8B](https://huggingface.co/Delta-Vector/Control-Nanuq-8B) and uses [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) as a base.
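A hedged usage sketch (not part of the extraction workflow) for applying the extracted adapter back onto its base model with `peft`:
```python
# Sketch only: standard peft adapter loading; assumes enough memory for the 8B base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Llama-3.1-SuperNova-Lite", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "kromcomp/L3.1-test-r128-LoRA")
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Llama-3.1-SuperNova-Lite")
```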
### Parameters
The following command was used to extract this LoRA adapter:
```sh
/usr/local/bin/mergekit-extract-lora --out-path=loras/test-r128-LoRA --model=Delta-Vector/Control-Nanuq-8B --base-model=arcee-ai/Llama-3.1-SuperNova-Lite --no-lazy-unpickle --max-rank=128 --sv-epsilon=0 --cuda -v -e lm_head
```
|
gokuls/bert-base-emotion_24 | gokuls | "2023-06-21T08:57:27Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-21T08:33:25Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-base-emotion_24
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-emotion_24
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5612
- Accuracy: 0.8611
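A hedged usage sketch (not part of the auto-generated card); the label names come from the model's `id2label` config, which follows the emotion dataset:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/bert-base-emotion_24")
print(classifier("I can't stop smiling today!"))
```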
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.736 | 1.0 | 250 | 0.4242 | 0.8565 |
| 0.3013 | 2.0 | 500 | 0.3314 | 0.8845 |
| 0.2014 | 3.0 | 750 | 0.3442 | 0.8905 |
| 0.1392 | 4.0 | 1000 | 0.3276 | 0.8915 |
| 0.1072 | 5.0 | 1250 | 0.3833 | 0.89 |
| 0.0783 | 6.0 | 1500 | 0.4205 | 0.8895 |
| 0.0559 | 7.0 | 1750 | 0.5287 | 0.8865 |
| 0.0378 | 8.0 | 2000 | 0.5459 | 0.8865 |
| 0.027 | 9.0 | 2250 | 0.5612 | 0.8925 |
| 0.02 | 10.0 | 2500 | 0.5601 | 0.8915 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
DBangshu/V4_Base_GPT2_e5_4_1 | DBangshu | "2024-11-29T17:40:34Z" | 135 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-29T17:40:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
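In the absence of author-provided code, a hedged sketch based only on the repo's `gpt2` text-generation tags:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DBangshu/V4_Base_GPT2_e5_4_1")
model = AutoModelForCausalLM.from_pretrained("DBangshu/V4_Base_GPT2_e5_4_1")
inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```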
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beyoru/Herto2-27 | beyoru | "2025-02-27T02:56:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-27T02:55:32Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
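In the absence of author-provided code, a hedged sketch based only on the repo's `qwen2` and conversational tags (requires a transformers release whose text-generation pipeline accepts chat messages):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="beyoru/Herto2-27", torch_dtype="auto", device_map="auto")
chat = [{"role": "user", "content": "Give a one-sentence summary of transfer learning."}]
out = pipe(chat, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```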
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mcditoos/PPO-LunarLander-v2 | mcditoos | "2022-05-05T07:12:33Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-05T07:11:57Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 233.04 +/- 17.51
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
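A hedged loading sketch (not from the original author; the checkpoint filename inside the repo is an assumption, so check the Files tab):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained policy from the Hub and evaluate it locally.
checkpoint = load_from_hub("mcditoos/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename is a guess
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```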
|
janbakker/o-bartpara-wikiauto | janbakker | "2024-11-05T13:13:15Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"doc_simp",
"wiki-auto",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-12T14:18:43Z" | ---
language:
- en
tags:
- doc_simp
- wiki-auto
---
# O->BART_para document simplification system
This is the plan-guided paragraph-level BART model pretrained on wiki-auto.
It accompanies the paper [Beyond Sentence-level Text Simplification: Reproducibility Study of Context-Aware Document Simplification](https://aclanthology.org/2024.determit-1.3/).
The [doc_simp](https://github.com/JanB100/doc_simp) library should be used to interface with this model. |
JFJFJFen/flan-t5-large-lora | JFJFJFen | "2023-12-02T13:04:26Z" | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | "2023-12-02T13:04:00Z" | ---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
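In the absence of author-provided code, a hedged sketch that follows the adapter's declared base model (`google/flan-t5-base`, per the card's metadata, despite the "large" in the repo name):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "JFJFJFen/flan-t5-large-lora")

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```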
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2.dev0
|
iloncka/exp_5_objects-subs_1_v_5_xresnet50_ep_60 | iloncka | "2024-06-03T09:02:45Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-06-03T09:01:07Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
shohei1029/dreambooth-shohei-28 | shohei1029 | "2023-08-22T15:21:00Z" | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-21T18:12:50Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Full body profile photo of a man shohei in various situation
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - shohei1029/dreambooth-shohei-28
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained with the prompt "Full body profile photo of a man shohei in various situation" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
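A hedged usage sketch (not part of the original card): since the repo hosts a full Stable Diffusion pipeline, it can be loaded directly.
```python
# Sketch only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "shohei1029/dreambooth-shohei-28", torch_dtype=torch.float16
).to("cuda")
image = pipe("Full body profile photo of a man shohei in various situation").images[0]  # instance prompt from the card
image.save("shohei.png")
```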
|
pwork7/gemma7b_meta_math_3epoch_with_kn | pwork7 | "2024-11-10T21:05:12Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-10T21:00:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/chihoonlee10-T3Q-ko-solar-dpo-v6.0-bnb-4bit-smashed | PrunaAI | "2024-07-22T22:30:14Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v6.0",
"base_model:quantized:chihoonlee10/T3Q-ko-solar-dpo-v6.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-22T22:27:26Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: chihoonlee10/T3Q-ko-solar-dpo-v6.0
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo chihoonlee10/T3Q-ko-solar-dpo-v6.0 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/chihoonlee10-T3Q-ko-solar-dpo-v6.0-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("chihoonlee10/T3Q-ko-solar-dpo-v6.0")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, chihoonlee10/T3Q-ko-solar-dpo-v6.0, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
henryscheible/mrpc_bert-base-uncased_81 | henryscheible | "2023-01-18T20:15:43Z" | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2023-01-18T20:03:00Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc_bert-base-uncased_81
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8088235294117647
- name: F1
type: f1
value: 0.8717105263157895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc_bert-base-uncased_81
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6390
- Accuracy: 0.8088
- F1: 0.8717
- Combined Score: 0.8403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
roshinishetty333/llama-2-7b-prompt-tuned | roshinishetty333 | "2024-05-17T11:34:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-11T18:01:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apta/ppo-LunarLander-v2 | apta | "2023-01-24T00:18:44Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-24T00:18:08Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.85 +/- 21.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="apta/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
guilxus/b45e5438-c086-4061-8ee4-0d4535782ac8 | guilxus | "2025-02-04T04:19:34Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T03:48:02Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b45e5438-c086-4061-8ee4-0d4535782ac8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cf971d07e3ff665f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cf971d07e3ff665f_train_data.json
type:
field_input: labels
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: guilxus/b45e5438-c086-4061-8ee4-0d4535782ac8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cf971d07e3ff665f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: fef0eb04-6ba6-4379-a2e7-a7fdc70a6b88
wandb_project: Gradients-On-11
wandb_run: your_name
wandb_runid: fef0eb04-6ba6-4379-a2e7-a7fdc70a6b88
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b45e5438-c086-4061-8ee4-0d4535782ac8
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.8455 | 0.0113 | 200 | 3.6879 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MRNH/q-FrozenLake-v1-4x4-noSlippery | MRNH | "2023-08-10T17:06:53Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-10T17:06:50Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="MRNH/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlTheMan/LLama3-reviewClassifier-Lora-weights | AlTheMan | "2024-05-14T23:15:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-07T16:21:23Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a Llama 3 8B-based fine-tuned model for classifying reviews as useful (or not) for improving an application.
A review was considered useful if it contained something useful for the further development or improvement
of the application, that is, a user story, bug report, or other specific problems that users have.
It does not include general complaints such as “I hate this app”, or encouragements such as
“I like your payment feature”, even if these encouragements are directed at a specific feature of the app.
The idea is to use this model as part of a pipeline: first classify reviews as 'useful' or not, then
further tag the useful reviews according to your needs and store them. This allows your organisation to
route the right reviews to the relevant departments to be handled, for example feature requests and bug reports.
This model outperformed Chat-bison on this classification task when recall is weighted more heavily.
It had an F1-weighted score of 0.90 for the 550 reviews it was tested against, with an F1-score of 0.92 for the negative class
and F1-score of 0.87 for the positive class.
The model is fine-tuned to answer "True" or "False" for input reviews.
It was trained with Unsloth on top of Llama-3-8b-bnb-4bit, using the TRL SFTTrainer for 10 epochs
with a validation loss of 0.0235.
It was trained on real reviews from the Google Play store using the following prompt:
"""
You are an AI trained to classify user reviews for a company called Kivra,
which offers a digital mailbox service featuring invoice payment, digital signing, mail scanning,
simple bookkeeping, and digital receipts.
Your task is to analyze each user review and determine whether it contains useful information such as feature requests,
bug reports, or other valuable feedback that can aid the developers in improving the service.
You will classify the review as 'True' if it contains such useful information, and 'False' if it does not.
Classification Guidelines:
- Classify a review as 'True' if it suggests a new feature, reports a bug, complains in a way that highlights a problem, or provides any other information that could lead to an improvement of the service.
- Classify as 'False' if the review only praises the service, comments on its current state without constructive feedback, or is irrelevant to service development.
Example Classifications:
- Review: "It's difficult to increase text size"
Classification: True
Reason: The review suggests a feature to increase text size, which is useful feedback for enhancing readability.
- Review: "This is the best app to pay bills"
Classification: False
Reason: The review praises the service but lacks suggestions for improvement or reports of issues.
- Review: "The app crashes every time I try to upload a document."
Classification: True
Reason: The review reports a specific bug, providing critical information for troubleshooting and improvement."""
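The sketch below shows one way to run the classifier with this prompt. It is a minimal sketch, assuming the LoRA weights in this repo load directly on top of their 4-bit Llama-3 base via Unsloth; the prompt layout, sequence length, and generation settings are illustrative, not the authors' exact setup.
```python
from unsloth import FastLanguageModel

# Hedged sketch: load the LoRA adapter from this repo (Unsloth resolves the 4-bit base model).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AlTheMan/LLama3-reviewClassifier-Lora-weights",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

system_prompt = "You are an AI trained to classify user reviews ..."  # the full prompt shown above
review = "The app crashes every time I try to upload a document."
prompt = f"{system_prompt}\n\nReview: {review}\nClassification:"  # layout is an assumption

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # the completion should end with "True" or "False"
```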
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fhai50032/llama-8b-hindi-hinglish-cot | fhai50032 | "2025-03-02T22:18:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-02T22:04:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StoriesLM/StoriesLM-v1-1916 | StoriesLM | "2024-03-09T23:09:13Z" | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:dell-research-harvard/AmericanStories",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-03-09T22:23:33Z" | ---
license: mit
datasets:
- dell-research-harvard/AmericanStories
language:
- en
---
# StoriesLM: A Family of Language Models With Sequentially-Expanding Pretraining Windows
## Model Family
StoriesLM is a family of language models with sequentially-expanding pretraining windows. The pretraining data for the model family comes from the American Stories dataset—a collection of language from historical American news articles. The first language model in the StoriesLM family is pretrained on language data from 1900. Each subsequent language model further trains the previous year’s model checkpoint using data from the following year, up until 1963.
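Because this checkpoint is a BERT-style masked language model (see the `fill-mask` tag), it can be tried with the standard `transformers` pipeline. A minimal sketch with an illustrative example sentence:
```python
from transformers import pipeline

# This checkpoint (v1-1916) appears to cover news text through 1916, per the model name.
fill_mask = pipeline("fill-mask", model="StoriesLM/StoriesLM-v1-1916")
print(fill_mask("The price of [MASK] rose sharply this year."))
```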
## Dataset
The StoriesLM family is pretrained on the American Stories dataset. If you use a model from this family, please also cite the original dataset's authors:
```
@article{dell2024american,
title={American stories: A large-scale structured text dataset of historical us newspapers},
author={Dell, Melissa and Carlson, Jacob and Bryan, Tom and Silcock, Emily and Arora, Abhishek and Shen, Zejiang and D'Amico-Wong, Luca and Le, Quan and Querubin, Pablo and Heldring, Leander},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
```
|
ericrisco/llme2_sft_model_rlaif | ericrisco | "2025-01-13T20:24:52Z" | 66 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-03-21T05:46:29Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nat-hunt/14b0cb2e-dafa-4206-b4ab-2f4022dbc825 | nat-hunt | "2025-01-13T18:37:45Z" | 14 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T18:34:52Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 14b0cb2e-dafa-4206-b4ab-2f4022dbc825
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe865ab9c1c0ecd8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe865ab9c1c0ecd8_train_data.json
type:
field_input: statement_supporting_documents
field_instruction: statement
field_output: response_positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/14b0cb2e-dafa-4206-b4ab-2f4022dbc825
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe865ab9c1c0ecd8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4af517d6-9e46-423a-b226-51765e2b2f71
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4af517d6-9e46-423a-b226-51765e2b2f71
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 14b0cb2e-dafa-4206-b4ab-2f4022dbc825
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0014 | 1 | nan |
| 0.0 | 0.0042 | 3 | nan |
| 0.6006 | 0.0084 | 6 | nan |
| 0.0 | 0.0126 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
HQ-Izzy-Publilsh-New-Video/Izzy.New.Video.Tutorial.Official.Viral.On.Instagram | HQ-Izzy-Publilsh-New-Video | "2025-03-02T08:23:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-02T08:23:01Z" |
<a href="http://bit.ly/3ZBGcrZ"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="http://bit.ly/3ZBGcrZ" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="http://bit.ly/3ZBGcrZ" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
helamri/q-FrozenLake-v1-4x4-noSlippery | helamri | "2023-08-05T12:23:35Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-05T12:23:31Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="helamri/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
FIVE-MGI/Single_Neuron_Identification | FIVE-MGI | "2024-06-13T18:18:09Z" | 0 | 0 | tf-keras | [
"tf-keras",
"onnx",
"en",
"dataset:FIVE-MGI/SNIM20",
"license:agpl-3.0",
"region:us"
] | null | "2024-06-13T17:48:37Z" | ---
license: agpl-3.0
language:
- en
datasets:
- FIVE-MGI/SNIM20
---
# Single Neuron (or Cell) Identification
This model performs image classification, identifying rafts or regions that contain single neurons, and is trained on [FIVE-MGI/SNIM20](https://huggingface.co/datasets/FIVE-MGI/SNIM20).
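A minimal loading sketch using the Keras mixin from `huggingface_hub`; the expected input size and preprocessing are not documented here, so inspect the model and the SNIM20 dataset card before running inference:
```python
from huggingface_hub import from_pretrained_keras

# Hedged sketch: download and load the tf-keras checkpoint stored in this repo.
model = from_pretrained_keras("FIVE-MGI/Single_Neuron_Identification")
model.summary()  # inspect the expected input shape before feeding images
```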
## Research Paper
For more detailed information, please refer to our bioRxiv paper: [Classification of iPSC-Derived Cultures Using Convolutional Neural Networks to Identify Single Differentiated Neurons for Isolation or Measurement](https://www.biorxiv.org/content/10.1101/2023.12.24.573194) |
cjpais/llava-1.6-mistral-7b-gguf | cjpais | "2024-03-06T20:16:02Z" | 18,498 | 98 | null | [
"gguf",
"llava",
"image-text-to-text",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | "2024-02-01T20:44:59Z" | ---
license: apache-2.0
tags:
- llava
pipeline_tag: image-text-to-text
---
# GGUF Quantized LLaVA 1.6 Mistral 7B
Updated quants and projector from [PR #5267](https://github.com/ggerganov/llama.cpp/pull/5267)
## Provided files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [llava-v1.6-mistral-7b.Q3_K_XS.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q3_K_XS.gguf) | Q3_K_XS | 3 | 2.99 GB| very small, high quality loss |
| [llava-v1.6-mistral-7b.Q3_K_M.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
| [llava-v1.6-mistral-7b.Q4_K_M.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
| [llava-v1.6-mistral-7b.Q5_K_S.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| large, low quality loss - recommended |
| [llava-v1.6-mistral-7b.Q5_K_M.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended |
| [llava-v1.6-mistral-7b.Q6_K.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss |
| [llava-v1.6-mistral-7b.Q8_0.gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/llava-v1.6-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.7 GB| very large, extremely low quality loss - not recommended |
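One way to run these quants locally is through `llama-cpp-python`'s LLaVA support. The sketch below is illustrative only: the projector filename, context size, and image path are assumptions — check this repo's file list for the actual projector (`mmproj`) file.
```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Hedged sketch: pair a quantized GGUF with the CLIP/projector file from this repo.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")  # filename assumed
llm = Llama(
    model_path="llava-v1.6-mistral-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # LLaVA 1.6 uses many image tokens, so leave headroom
)
response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```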
<br>
<br>
# ORIGINAL LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
**Model date:**
LLaVA-v1.6-Mistral-7B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
benito14/SOIT_Llama3.2 | benito14 | "2024-11-12T09:54:27Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-12T09:54:07Z" | ---
base_model: unsloth/Llama-3.2-1B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** benito14
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Free_Sydney_13b_HF-i1-GGUF | mradermacher | "2024-08-02T10:43:30Z" | 255 | 0 | transformers | [
"transformers",
"gguf",
"LLaMA",
"LLM",
"Sydney",
"en",
"base_model:FPHam/Free_Sydney_13b_HF",
"base_model:quantized:FPHam/Free_Sydney_13b_HF",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-06-11T15:58:39Z" | ---
base_model: FPHam/Free_Sydney_13b_HF
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- LLaMA
- LLM
- Sydney
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FPHam/Free_Sydney_13b_HF
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Free_Sydney_13b_HF-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
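For example, a single file from the table below can be fetched programmatically (here the Q4_K_M quant marked as recommended; swap the filename for any other row):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo; pick any filename from the table below.
path = hf_hub_download(
    repo_id="mradermacher/Free_Sydney_13b_HF-i1-GGUF",
    filename="Free_Sydney_13b_HF.i1-Q4_K_M.gguf",
)
print(path)
```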
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Free_Sydney_13b_HF-i1-GGUF/resolve/main/Free_Sydney_13b_HF.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ayushgs/mistral-7b-v0.2-instruct-mh-1500 | ayushgs | "2024-04-18T05:03:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-18T05:02:36Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** ayushgs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LoneStriker/Smaug-72B-v0.1-6.0bpw-h6-exl2 | LoneStriker | "2024-02-17T20:00:11Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:moreh/MoMo-72B-lora-1.8.7-DPO",
"base_model:finetune:moreh/MoMo-72B-lora-1.8.7-DPO",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-17T19:28:30Z" | ---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
base_model: moreh/MoMo-72B-lora-1.8.7-DPO
---


Smaug arrives!
We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to have an average score of more than 80.
Smaug-72B is finetuned directly from [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) and is ultimately based on [Qwen-72B](https://huggingface.co/Qwen/Qwen-72B).
To do so we built on techniques and datasets used in our previous model efforts, with the addition of some new datasets and a new approach. We believe this new approach is generally useful in training across a wide range of model types and downstream use cases, and it powers both our Smaug-34B and 72B models.
We are currently working on writing up this new technique in the form of a technical report which we aim to release on arXiv soon (we may also be releasing a new member of the Smaug lineup at that time!). We are excited to release the details to the open source community for them to build on and improve Smaug and spawn more dragons to dominate the LLM space. Keep watching this space for our announcements!
### Evaluation Results
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 80.48 | 76.02 | 89.27 | 77.15 | 76.67 | 85.08 | 78.70 |
#### MT-Bench
We ran MT-Bench with the llama-2 conversation template and the system prompt set to the Qwen system prompt. We got the following
results in single model mode:
| First Turn | Second Turn | Average |
| ---------- | ----------- | ------- |
| 8.18 | 7.34 | 7.76 |
The bottom of the card includes some sample responses from the model. We have selected responses that have high and low scores
from the GPT-4 judge.
### Contamination Results
We generate our contamination numbers using https://github.com/swj0419/detect-pretrain-code-contamination/tree/master, with Llama7B as our reference model.
Smaug-72B has the following results:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20| 0.45| 1.00|
By comparison, MoMo-72B-lora-1.8.7-DPO has the following results:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20| 0.39| 1.00|
Note that GSM8K often scores very highly on this contamination suite - we verified this by also running Llama-2-70B:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.22| 0.51| 0.89|
### Sample MT Bench results
#### Example 1
**Question**
Imagine you are writing a blog post comparing two popular smartphone models.
Develop an outline for the blog post, including key points and subheadings to effectively
compare and contrast the features, performance, and user experience of the two models.
Please answer in fewer than 200 words.
**Response**
Outline:
1. Introduction to the two smartphone models being compared (mention brand names and models)
2. Design and Build Quality
- Comparison of dimensions, weight, materials used, and overall aesthetic appeal
- Water and dust resistance, if applicable
3. Display and Screen Quality
- Comparison of screen technology (OLED, IPS LCD, etc.)
- Display resolution, refresh rate, and brightness levels
4. Performance and Hardware
- Processor, RAM, and storage configurations
- Comparison of benchmark scores (e.g., AnTuTu, Geekbench)
5. Camera System and Image Quality
- Comparison of primary, ultra-wide, telephoto, and macro lenses (if available)
- Comparison of low-light performance and image processing capabilities
6. Battery Life and Charging
- Battery capacity and estimated usage time
- Charging technologies (wired, wireless, fast charging, reverse charging)
7. Software and User Experience
- Comparison of operating systems (iOS vs. Android) and their features
- User interface, customization options, and app ecosystem
8. Additional Features and Extras
- Comparison of biometric authentication (fingerprint, face unlock, etc.)
- Additional features like NFC, 5G connectivity, and audio quality
9. Pricing and Value for Money
- Comparison of retail prices and current offers
- Assessment of overall value considering the features and performance
10. Conclusion and recommendation for potential buyers based on their preferences
#### Example 2
**Question**
In a survey conducted at a local high school, preferences for a new school color were measured:
58% of students liked the color blue, 45% preferred green, and 22% liked both colors.
If we randomly pick a student from the school, what's the probability that they would like neither blue nor green?
**Answer**
To find the probability that a randomly picked student would like neither blue nor green, we need to find the percentage of students who like only one color or neither color.
From the given information, we know:
- 58% like blue
- 45% prefer green
- 22% like both colors
To find the percentage of students who like only one color, we can subtract the percentage of students who like both colors from the sum of students who like blue and green:
- 58% (blue) + 45% (green) - 22% (both) = 83%
Now, to find the probability of a student liking neither blue nor green, we subtract the percentage of students who like at least one color from 100%:
- 100% - 83% = 17%
So, the probability that a randomly picked student would like neither blue nor green is 17%.
#### Example 3
**Question**
Implement a program to find the common elements in two arrays without using any extra data structures.
**Answer**
Finding common elements in two arrays without using extra data structures can be achieved by using a simple algorithm. Here's a Python implementation of this problem:
```python
def find_common_elements(arr1, arr2):
common_elements = []
for num1 in arr1:
if num1 in arr2 and num1 not in common_elements:
common_elements.append(num1)
return common_elements
# Example usage:
array1 = [1, 2, 3, 4, 5]
array2 = [4, 5, 6, 7, 8]
print(find_common_elements(array1, array2)) # Output: [4, 5]
```
This program iterates through the elements of the first array (arr1) and checks if each element is present in the second array (arr2). If it is, and the element hasn't been added to the common_elements list yet, it is appended. The function then returns the list of common elements.
#### Example 4
**Question**
Identify the named entities (people, organizations, locations) mentioned in the given news article.
Please generate a JSON dictionary that lists the named entities in three separate groups based on their entity types.
The key is the type of entity and the value is a list of strings.
Yesterday, Adamson Emerson, the CEO of Faraday, and Dieter Zetsche, the CEO of Daimler AG, announced plans to
build a new Gigafactory in Berlin. The facility will be a joint venture between Faraday and Daimler, producing
electric vehicles and battery packs for both companies, creating thousands of job opportunities in the region.
Emerson and Zetsche stated that the strategic location of Berlin, coupled with its skilled workforce and strong infrastructure,
makes it an ideal choice for expansion. The new Gigafactory aims to meet the growing demand for electric vehicles in Europe
and contribute to a sustainable future. Volkswagen CEO Herbert Diess welcomed the news, saying greater collaboration will,
benefit the auto industry's transition to e-mobility.
**Answer**
```
{
"people": ["Adamson Emerson", "Dieter Zetsche", "Herbert Diess"],
"organizations": ["Faraday", "Daimler AG", "Volkswagen"],
"locations": ["Berlin", "Europe"]
}
``` |
ayushtues/blipdiffusion | ayushtues | "2023-09-21T14:44:10Z" | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"en",
"arxiv:2305.14720",
"license:apache-2.0",
"diffusers:BlipDiffusionPipeline",
"region:us"
] | null | "2023-08-07T05:45:01Z" | ---
license: apache-2.0
language:
- en
library_name: diffusers
---
# BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
<!-- Provide a quick summary of what the model is/does. -->
Model card for BLIP-Diffusion, a text-to-image diffusion model that enables zero-shot subject-driven generation and control-guided zero-shot generation.
The abstract from the paper is:
*Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications.*
The model is created by Dongxu Li, Junnan Li, Steven C.H. Hoi.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Original Repository:** https://github.com/salesforce/LAVIS/tree/main
- **Project Page:** https://dxli94.github.io/BLIP-Diffusion-website/
## Uses
### Zero-Shot Subject Driven Generation
```python
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image
import torch
blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
"Salesforce/blipdiffusion", torch_dtype=torch.float16
).to("cuda")
cond_subject = "dog"
tgt_subject = "dog"
text_prompt_input = "swimming underwater"
cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
)
iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 25
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt_input,
cond_image,
cond_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog_underwater.png" style="width:500px;"/>
### Controlled subject-driven generation
```python
import torch

from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import CannyDetector
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
).to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "teapot" # subject to generate.
text_prompt = "on a marble table"
cldm_cond_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg"
).resize((512, 512))
canny = CannyDetector()
cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Canny Edge Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/canny_generated.png" style="width:500px;"/>
### Controlled subject-driven generation Scribble
```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import HEDdetector
from diffusers.models import ControlNetModel
blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
"Salesforce/blipdiffusion-controlnet"
)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
blip_diffusion_pipe.controlnet = controlnet
blip_diffusion_pipe.to("cuda")
style_subject = "flower" # subject that defines the style
tgt_subject = "bag" # subject to generate.
text_prompt = "on a table"
cldm_cond_image = load_image(
"https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png"
).resize((512, 512))
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
cldm_cond_image = hed(cldm_cond_image)
style_image = load_image(
"https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"
output = blip_diffusion_pipe(
text_prompt,
style_image,
cldm_cond_image,
style_subject,
tgt_subject,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
neg_prompt=negative_prompt,
height=512,
width=512,
).images
output[0].save("image.png")
```
Input Style Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>
Scribble Input : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble.png" style="width:500px;"/>
Generated Image : <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble_output.png" style="width:500px;"/>
## Model Architecture
BLIP-Diffusion learns a **pre-trained subject representation**. Such a representation aligns with text embeddings while also encoding the subject's appearance. This allows efficient fine-tuning of the model for high-fidelity subject-driven applications, such as text-to-image generation, editing and style transfer.
To this end, they design a two-stage pre-training strategy to learn generic subject representation. In the first pre-training stage, they perform multimodal representation learning, which enforces BLIP-2 to produce text-aligned visual features based on the input image. In the second pre-training stage, they design a subject representation learning task, called prompted context generation, where the diffusion model learns to generate novel subject renditions based on the input visual features.
To achieve this, they curate pairs of input-target images with the same subject appearing in different contexts. Specifically, they synthesize input images by composing the subject with a random background. During pre-training, they feed the synthetic input image and the subject class label through BLIP-2 to obtain the multimodal embeddings as subject representation. The subject representation is then combined with a text prompt to guide the generation of the target image.

The architecture is also compatible with established techniques built on top of the diffusion model, such as ControlNet.
They attach the U-Net of the pre-trained ControlNet to that of BLIP-Diffusion via residuals. In this way, the model takes into account input structure conditions, such as edge maps and depth maps, in addition to the subject cues. Since the model inherits the architecture of the original latent diffusion model, they observe satisfactory generations using off-the-shelf integration with pre-trained ControlNet without further training.
<img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/arch_controlnet.png" style="width:50%;"/>
## Citation
**BibTeX:**
If you find this repository useful in your research, please cite:
```
@misc{li2023blipdiffusion,
title={BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing},
author={Dongxu Li and Junnan Li and Steven C. H. Hoi},
year={2023},
eprint={2305.14720},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Ahmed-ibn-Harun/BrainHermorrhage-vit-base | Ahmed-ibn-Harun | "2025-01-07T14:41:28Z" | 199 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-04T13:34:23Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Ahmed-ibn-Harun/BrainHermorrhage
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8261405672009864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BrainHermorrhage-vit-base
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Ahmed-ibn-Harun/BrainHermorrhage dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3755
- Accuracy: 0.8261
- Sensitivity: 0.7221
- Specificity: 0.9289
- F1 Score: 0.8050
- Auc: 0.9162
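A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` image-classification pipeline (the image path below is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint with the generic image-classification pipeline.
classifier = pipeline("image-classification", model="Ahmed-ibn-Harun/BrainHermorrhage-vit-base")

# "scan.png" is a placeholder for a brain CT slice; a file path or a PIL image both work.
predictions = classifier("scan.png")
print(predictions)  # list of {"label": ..., "score": ...} dictionaries
```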
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Sensitivity | Specificity | F1 Score | Auc |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:-----------:|:-----------:|:--------:|:------:|
| 0.3331 | 0.2188 | 100 | 0.4220 | 0.7891 | 0.6868 | 0.8902 | 0.7641 | 0.8947 |
| 0.4904 | 0.4376 | 200 | 0.4409 | 0.8038 | 0.7750 | 0.8324 | 0.7971 | 0.8931 |
| 0.4875 | 0.6565 | 300 | 0.5088 | 0.8162 | 0.9009 | 0.7324 | 0.8298 | 0.9057 |
| 0.4366 | 0.8753 | 400 | 0.3726 | 0.8314 | 0.7671 | 0.8951 | 0.8190 | 0.9190 |
| 0.4663 | 1.0941 | 500 | 0.4225 | 0.8157 | 0.8910 | 0.7412 | 0.8278 | 0.9201 |
| 0.2961 | 1.3129 | 600 | 0.3632 | 0.8339 | 0.7929 | 0.8745 | 0.8260 | 0.9244 |
| 0.3367 | 1.5317 | 700 | 0.4454 | 0.8117 | 0.8870 | 0.7373 | 0.8241 | 0.9083 |
| 0.4084 | 1.7505 | 800 | 0.5469 | 0.7408 | 0.9732 | 0.5108 | 0.7888 | 0.9068 |
| 0.3161 | 1.9694 | 900 | 0.3893 | 0.8329 | 0.8672 | 0.7990 | 0.8377 | 0.9282 |
| 0.4762 | 2.1882 | 1000 | 0.4871 | 0.7363 | 0.9633 | 0.5118 | 0.7842 | 0.8974 |
| 0.4006 | 2.4070 | 1100 | 0.4228 | 0.7900 | 0.9326 | 0.6490 | 0.8154 | 0.9257 |
| 0.4541 | 2.6258 | 1200 | 0.3389 | 0.8487 | 0.7641 | 0.9324 | 0.8340 | 0.9349 |
| 0.5397 | 2.8446 | 1300 | 0.4587 | 0.7565 | 0.9732 | 0.5422 | 0.7990 | 0.9244 |
| 0.2115 | 3.0635 | 1400 | 0.3976 | 0.8344 | 0.8196 | 0.8490 | 0.8312 | 0.9223 |
| 0.3588 | 3.2823 | 1500 | 0.3928 | 0.8211 | 0.8949 | 0.7480 | 0.8326 | 0.9208 |
| 0.3377 | 3.5011 | 1600 | 0.3943 | 0.8157 | 0.7483 | 0.8824 | 0.8015 | 0.9128 |
| 0.3385 | 3.7199 | 1700 | 0.3627 | 0.8428 | 0.8256 | 0.8598 | 0.8393 | 0.9247 |
| 0.3793 | 3.9387 | 1800 | 0.4015 | 0.8063 | 0.7592 | 0.8529 | 0.7958 | 0.9007 |
| 0.2774 | 4.1575 | 1900 | 0.4174 | 0.8186 | 0.8018 | 0.8353 | 0.8147 | 0.9090 |
| 0.2964 | 4.3764 | 2000 | 0.4120 | 0.8245 | 0.8940 | 0.7559 | 0.8352 | 0.9243 |
| 0.2042 | 4.5952 | 2100 | 0.3984 | 0.8383 | 0.8414 | 0.8353 | 0.8381 | 0.9214 |
| 0.2336 | 4.8140 | 2200 | 0.4263 | 0.8241 | 0.8722 | 0.7765 | 0.8314 | 0.9242 |
| 0.2292 | 5.0328 | 2300 | 0.4430 | 0.8339 | 0.8186 | 0.8490 | 0.8306 | 0.9241 |
| 0.265 | 5.2516 | 2400 | 0.4647 | 0.8314 | 0.7681 | 0.8941 | 0.8192 | 0.9204 |
| 0.2754 | 5.4705 | 2500 | 0.5476 | 0.7886 | 0.9128 | 0.6657 | 0.8111 | 0.9116 |
| 0.1859 | 5.6893 | 2600 | 0.4330 | 0.8324 | 0.8414 | 0.8235 | 0.8332 | 0.9218 |
| 0.1785 | 5.9081 | 2700 | 0.4180 | 0.8369 | 0.8375 | 0.8363 | 0.8362 | 0.9199 |
| 0.2057 | 6.1269 | 2800 | 0.4660 | 0.8319 | 0.8543 | 0.8098 | 0.8349 | 0.9158 |
| 0.289 | 6.3457 | 2900 | 0.4399 | 0.8186 | 0.8196 | 0.8176 | 0.8180 | 0.9086 |
| 0.1172 | 6.5646 | 3000 | 0.5597 | 0.8260 | 0.8474 | 0.8049 | 0.8289 | 0.9085 |
| 0.1758 | 6.7834 | 3100 | 0.4902 | 0.8201 | 0.8335 | 0.8069 | 0.8217 | 0.9078 |
| 0.2185 | 7.0022 | 3200 | 0.4738 | 0.8211 | 0.8295 | 0.8127 | 0.8218 | 0.9108 |
| 0.2255 | 7.2210 | 3300 | 0.5072 | 0.8098 | 0.8771 | 0.7431 | 0.8210 | 0.9082 |
| 0.213 | 7.4398 | 3400 | 0.4879 | 0.8379 | 0.7919 | 0.8833 | 0.8293 | 0.9126 |
| 0.1528 | 7.6586 | 3500 | 0.6054 | 0.8137 | 0.8276 | 0.8 | 0.8154 | 0.9057 |
| 0.1834 | 7.8775 | 3600 | 0.5653 | 0.8260 | 0.7532 | 0.8980 | 0.8115 | 0.9141 |
| 0.0812 | 8.0963 | 3700 | 0.6640 | 0.8176 | 0.7284 | 0.9059 | 0.7989 | 0.9068 |
| 0.1422 | 8.3151 | 3800 | 0.5916 | 0.8255 | 0.7721 | 0.8784 | 0.8149 | 0.9075 |
| 0.1116 | 8.5339 | 3900 | 0.5746 | 0.8226 | 0.8583 | 0.7873 | 0.8279 | 0.9057 |
| 0.1811 | 8.7527 | 4000 | 0.4679 | 0.8359 | 0.7869 | 0.8843 | 0.8267 | 0.9245 |
| 0.1949 | 8.9716 | 4100 | 0.4645 | 0.8201 | 0.7641 | 0.8755 | 0.8086 | 0.9048 |
| 0.036 | 9.1904 | 4200 | 0.6486 | 0.8349 | 0.7889 | 0.8804 | 0.8262 | 0.9116 |
| 0.1117 | 9.4092 | 4300 | 0.5656 | 0.8236 | 0.7433 | 0.9029 | 0.8073 | 0.9125 |
| 0.1101 | 9.6280 | 4400 | 0.5197 | 0.8285 | 0.8731 | 0.7843 | 0.8351 | 0.9226 |
| 0.2064 | 9.8468 | 4500 | 0.6169 | 0.8270 | 0.7641 | 0.8892 | 0.8146 | 0.9132 |
| 0.0647 | 10.0656 | 4600 | 0.5593 | 0.8255 | 0.7542 | 0.8961 | 0.8113 | 0.9122 |
| 0.0566 | 10.2845 | 4700 | 0.5822 | 0.8245 | 0.7384 | 0.9098 | 0.8072 | 0.9182 |
| 0.1324 | 10.5033 | 4800 | 0.5593 | 0.8319 | 0.7948 | 0.8686 | 0.8247 | 0.9146 |
| 0.0824 | 10.7221 | 4900 | 0.6553 | 0.8117 | 0.7037 | 0.9186 | 0.7880 | 0.9130 |
| 0.2134 | 10.9409 | 5000 | 0.5847 | 0.8334 | 0.8880 | 0.7794 | 0.8413 | 0.9271 |
| 0.0835 | 11.1597 | 5100 | 0.6585 | 0.8314 | 0.8008 | 0.8618 | 0.8253 | 0.9130 |
| 0.0936 | 11.3786 | 5200 | 0.8768 | 0.8191 | 0.7939 | 0.8441 | 0.8136 | 0.9062 |
| 0.0325 | 11.5974 | 5300 | 0.6502 | 0.8423 | 0.8315 | 0.8529 | 0.8398 | 0.9209 |
| 0.1054 | 11.8162 | 5400 | 0.5742 | 0.8354 | 0.8256 | 0.8451 | 0.833 | 0.9186 |
| 0.0157 | 12.0350 | 5500 | 0.7790 | 0.8324 | 0.8256 | 0.8392 | 0.8305 | 0.9150 |
| 0.0929 | 12.2538 | 5600 | 0.5779 | 0.8433 | 0.7978 | 0.8882 | 0.8351 | 0.9183 |
| 0.0553 | 12.4726 | 5700 | 0.6642 | 0.8369 | 0.8157 | 0.8578 | 0.8326 | 0.9154 |
| 0.1012 | 12.6915 | 5800 | 0.6882 | 0.8344 | 0.8494 | 0.8196 | 0.8361 | 0.9218 |
| 0.1292 | 12.9103 | 5900 | 0.6949 | 0.8310 | 0.7800 | 0.8814 | 0.8211 | 0.9114 |
| 0.103 | 13.1291 | 6000 | 0.7031 | 0.8398 | 0.8216 | 0.8578 | 0.8361 | 0.9162 |
| 0.0652 | 13.3479 | 6100 | 0.7927 | 0.8379 | 0.8176 | 0.8578 | 0.8338 | 0.9178 |
| 0.1194 | 13.5667 | 6200 | 0.7817 | 0.8211 | 0.7602 | 0.8814 | 0.8086 | 0.9125 |
| 0.2684 | 13.7856 | 6300 | 0.7446 | 0.8221 | 0.7017 | 0.9412 | 0.7968 | 0.9164 |
| 0.1194 | 14.0044 | 6400 | 0.7420 | 0.8334 | 0.8305 | 0.8363 | 0.8322 | 0.9152 |
| 0.0548 | 14.2232 | 6500 | 0.8545 | 0.8295 | 0.8543 | 0.8049 | 0.8329 | 0.8918 |
| 0.0681 | 14.4420 | 6600 | 0.8680 | 0.7915 | 0.6373 | 0.9441 | 0.7525 | 0.9094 |
| 0.0627 | 14.6608 | 6700 | 0.6181 | 0.8487 | 0.8028 | 0.8941 | 0.8407 | 0.9206 |
| 0.0565 | 14.8796 | 6800 | 0.7248 | 0.8241 | 0.8117 | 0.8363 | 0.8211 | 0.9108 |
| 0.0879 | 15.0985 | 6900 | 0.6817 | 0.8295 | 0.8484 | 0.8108 | 0.8319 | 0.9208 |
| 0.1235 | 15.3173 | 7000 | 0.7360 | 0.8344 | 0.8434 | 0.8255 | 0.8351 | 0.9143 |
| 0.1256 | 15.5361 | 7100 | 0.6166 | 0.8300 | 0.7839 | 0.8755 | 0.8210 | 0.9114 |
| 0.0353 | 15.7549 | 7200 | 0.7718 | 0.8339 | 0.8771 | 0.7912 | 0.8401 | 0.9231 |
| 0.0838 | 15.9737 | 7300 | 0.7930 | 0.8305 | 0.7919 | 0.8686 | 0.8229 | 0.8963 |
| 0.0345 | 16.1926 | 7400 | 0.9201 | 0.8231 | 0.7304 | 0.9147 | 0.8041 | 0.8816 |
| 0.0263 | 16.4114 | 7500 | 0.8642 | 0.8310 | 0.7433 | 0.9176 | 0.8139 | 0.9021 |
| 0.0471 | 16.6302 | 7600 | 0.8542 | 0.8324 | 0.7631 | 0.9010 | 0.8191 | 0.9031 |
| 0.0894 | 16.8490 | 7700 | 0.7756 | 0.8034 | 0.7116 | 0.8941 | 0.7826 | 0.8999 |
| 0.0649 | 17.0678 | 7800 | 0.7112 | 0.8344 | 0.8484 | 0.8206 | 0.8359 | 0.9077 |
| 0.0567 | 17.2867 | 7900 | 0.7433 | 0.8452 | 0.8394 | 0.8510 | 0.8436 | 0.9106 |
| 0.0229 | 17.5055 | 8000 | 0.8775 | 0.8255 | 0.7542 | 0.8961 | 0.8113 | 0.9017 |
| 0.009 | 17.7243 | 8100 | 0.8561 | 0.8349 | 0.7958 | 0.8735 | 0.8274 | 0.9062 |
| 0.0838 | 17.9431 | 8200 | 0.9441 | 0.8255 | 0.8771 | 0.7745 | 0.8333 | 0.9211 |
| 0.0958 | 18.1619 | 8300 | 0.9286 | 0.8255 | 0.7374 | 0.9127 | 0.8078 | 0.8961 |
| 0.0422 | 18.3807 | 8400 | 0.8053 | 0.8369 | 0.8186 | 0.8549 | 0.8331 | 0.9114 |
| 0.053 | 18.5996 | 8500 | 0.8440 | 0.8388 | 0.8067 | 0.8706 | 0.8327 | 0.8972 |
| 0.0462 | 18.8184 | 8600 | 0.7419 | 0.8221 | 0.8137 | 0.8304 | 0.8198 | 0.9108 |
| 0.0474 | 19.0372 | 8700 | 0.8702 | 0.8231 | 0.7849 | 0.8608 | 0.8152 | 0.8997 |
| 0.0257 | 19.2560 | 8800 | 0.8966 | 0.8157 | 0.7473 | 0.8833 | 0.8013 | 0.9049 |
| 0.0214 | 19.4748 | 8900 | 0.9787 | 0.8275 | 0.7839 | 0.8706 | 0.8188 | 0.8877 |
| 0.1409 | 19.6937 | 9000 | 0.8695 | 0.8379 | 0.7899 | 0.8853 | 0.8289 | 0.9084 |
| 0.0715 | 19.9125 | 9100 | 0.9500 | 0.8245 | 0.8028 | 0.8461 | 0.8198 | 0.8975 |
| 0.0331 | 20.1313 | 9200 | 0.9371 | 0.8334 | 0.8375 | 0.8294 | 0.8333 | 0.9042 |
| 0.0259 | 20.3501 | 9300 | 0.8587 | 0.8374 | 0.8127 | 0.8618 | 0.8325 | 0.9124 |
| 0.0093 | 20.5689 | 9400 | 0.7861 | 0.8393 | 0.8196 | 0.8588 | 0.8354 | 0.9182 |
| 0.0103 | 20.7877 | 9500 | 0.7921 | 0.8359 | 0.7800 | 0.8912 | 0.8254 | 0.9119 |
| 0.1187 | 21.0066 | 9600 | 0.7618 | 0.8260 | 0.7512 | 0.9 | 0.8111 | 0.9166 |
| 0.0024 | 21.2254 | 9700 | 0.9334 | 0.8319 | 0.8632 | 0.8010 | 0.8363 | 0.9123 |
| 0.0993 | 21.4442 | 9800 | 0.8067 | 0.8310 | 0.8682 | 0.7941 | 0.8363 | 0.9177 |
| 0.145 | 21.6630 | 9900 | 0.7816 | 0.8324 | 0.7770 | 0.8873 | 0.8218 | 0.9108 |
| 0.054 | 21.8818 | 10000 | 0.8371 | 0.8413 | 0.8523 | 0.8304 | 0.8423 | 0.9190 |
| 0.0446 | 22.1007 | 10100 | 0.8001 | 0.8354 | 0.7899 | 0.8804 | 0.8268 | 0.9084 |
| 0.1218 | 22.3195 | 10200 | 0.8164 | 0.8364 | 0.7701 | 0.9020 | 0.8240 | 0.9078 |
| 0.032 | 22.5383 | 10300 | 0.8353 | 0.8359 | 0.8256 | 0.8461 | 0.8334 | 0.9157 |
| 0.0804 | 22.7571 | 10400 | 0.8301 | 0.8314 | 0.7859 | 0.8765 | 0.8226 | 0.9149 |
| 0.0982 | 22.9759 | 10500 | 0.8366 | 0.8339 | 0.8305 | 0.8373 | 0.8326 | 0.9160 |
| 0.0153 | 23.1947 | 10600 | 0.8395 | 0.8295 | 0.7948 | 0.8637 | 0.8226 | 0.9150 |
| 0.0647 | 23.4136 | 10700 | 0.8342 | 0.8364 | 0.8662 | 0.8069 | 0.8404 | 0.9230 |
| 0.0906 | 23.6324 | 10800 | 0.8414 | 0.8078 | 0.8900 | 0.7265 | 0.8216 | 0.9166 |
| 0.0071 | 23.8512 | 10900 | 0.8552 | 0.8354 | 0.7889 | 0.8814 | 0.8266 | 0.9053 |
| 0.0254 | 24.0700 | 11000 | 0.8612 | 0.8428 | 0.7830 | 0.9020 | 0.8320 | 0.9009 |
| 0.0265 | 24.2888 | 11100 | 1.0379 | 0.8245 | 0.7195 | 0.9284 | 0.8031 | 0.8937 |
| 0.048 | 24.5077 | 11200 | 1.0143 | 0.8285 | 0.7611 | 0.8951 | 0.8153 | 0.8942 |
| 0.0005 | 24.7265 | 11300 | 0.9883 | 0.8310 | 0.8077 | 0.8539 | 0.8262 | 0.9024 |
| 0.1702 | 24.9453 | 11400 | 1.0282 | 0.8339 | 0.7512 | 0.9157 | 0.8181 | 0.9078 |
| 0.0006 | 25.1641 | 11500 | 0.9612 | 0.8448 | 0.8712 | 0.8186 | 0.8480 | 0.9151 |
| 0.0425 | 25.3829 | 11600 | 1.0040 | 0.8438 | 0.8612 | 0.8265 | 0.8457 | 0.9143 |
| 0.0006 | 25.6018 | 11700 | 0.9840 | 0.8305 | 0.7790 | 0.8814 | 0.8205 | 0.9117 |
| 0.0029 | 25.8206 | 11800 | 1.0850 | 0.8295 | 0.7294 | 0.9284 | 0.8097 | 0.9039 |
| 0.0776 | 26.0394 | 11900 | 0.9524 | 0.8334 | 0.8335 | 0.8333 | 0.8327 | 0.9119 |
| 0.0543 | 26.2582 | 12000 | 0.9541 | 0.8329 | 0.7572 | 0.9078 | 0.8184 | 0.9097 |
| 0.0018 | 26.4770 | 12100 | 0.8137 | 0.8393 | 0.8712 | 0.8078 | 0.8436 | 0.9225 |
| 0.0512 | 26.6958 | 12200 | 1.0741 | 0.8176 | 0.8712 | 0.7647 | 0.8261 | 0.8886 |
| 0.0008 | 26.9147 | 12300 | 1.0294 | 0.8393 | 0.8484 | 0.8304 | 0.8400 | 0.8987 |
| 0.043 | 27.1335 | 12400 | 0.9720 | 0.8334 | 0.8682 | 0.7990 | 0.8383 | 0.9135 |
| 0.0013 | 27.3523 | 12500 | 0.9571 | 0.8374 | 0.7800 | 0.8941 | 0.8267 | 0.9120 |
| 0.0163 | 27.5711 | 12600 | 0.9475 | 0.8305 | 0.8167 | 0.8441 | 0.8273 | 0.9102 |
| 0.0034 | 27.7899 | 12700 | 0.8116 | 0.8403 | 0.8365 | 0.8441 | 0.8390 | 0.9183 |
| 0.0014 | 28.0088 | 12800 | 0.9375 | 0.8305 | 0.8285 | 0.8324 | 0.8294 | 0.9139 |
| 0.0008 | 28.2276 | 12900 | 1.0335 | 0.8314 | 0.7602 | 0.9020 | 0.8177 | 0.9072 |
| 0.0497 | 28.4464 | 13000 | 1.0562 | 0.8285 | 0.7592 | 0.8971 | 0.8149 | 0.9039 |
| 0.0319 | 28.6652 | 13100 | 0.7997 | 0.8364 | 0.8444 | 0.8284 | 0.8369 | 0.9167 |
| 0.0932 | 28.8840 | 13200 | 0.8591 | 0.8167 | 0.8474 | 0.7863 | 0.8213 | 0.9142 |
| 0.0007 | 29.1028 | 13300 | 0.8555 | 0.8379 | 0.8246 | 0.8510 | 0.8349 | 0.9196 |
| 0.0025 | 29.3217 | 13400 | 0.9062 | 0.8359 | 0.8236 | 0.8480 | 0.8331 | 0.9147 |
| 0.0117 | 29.5405 | 13500 | 0.8089 | 0.8339 | 0.8345 | 0.8333 | 0.8333 | 0.9181 |
| 0.0505 | 29.7593 | 13600 | 0.9048 | 0.8329 | 0.8404 | 0.8255 | 0.8334 | 0.9167 |
| 0.0484 | 29.9781 | 13700 | 1.0264 | 0.8265 | 0.8573 | 0.7961 | 0.8309 | 0.9133 |
| 0.0004 | 30.1969 | 13800 | 1.0712 | 0.8349 | 0.8087 | 0.8608 | 0.8297 | 0.9053 |
| 0.0157 | 30.4158 | 13900 | 1.0159 | 0.8236 | 0.8186 | 0.8284 | 0.8219 | 0.9062 |
| 0.0004 | 30.6346 | 14000 | 1.0367 | 0.8305 | 0.8196 | 0.8412 | 0.8278 | 0.9022 |
| 0.0003 | 30.8534 | 14100 | 0.9853 | 0.8314 | 0.8345 | 0.8284 | 0.8312 | 0.9123 |
| 0.0039 | 31.0722 | 14200 | 0.9839 | 0.8413 | 0.7869 | 0.8951 | 0.8314 | 0.9124 |
| 0.0505 | 31.2910 | 14300 | 1.0911 | 0.8339 | 0.8741 | 0.7941 | 0.8396 | 0.9033 |
| 0.0007 | 31.5098 | 14400 | 0.8740 | 0.8374 | 0.8246 | 0.85 | 0.8345 | 0.9208 |
| 0.0004 | 31.7287 | 14500 | 0.9801 | 0.8398 | 0.8295 | 0.85 | 0.8374 | 0.9208 |
| 0.0592 | 31.9475 | 14600 | 1.0447 | 0.8305 | 0.8404 | 0.8206 | 0.8314 | 0.9165 |
| 0.0003 | 32.1663 | 14700 | 1.1005 | 0.8245 | 0.8543 | 0.7951 | 0.8288 | 0.9129 |
| 0.0002 | 32.3851 | 14800 | 1.1025 | 0.8319 | 0.8176 | 0.8461 | 0.8287 | 0.9108 |
| 0.0428 | 32.6039 | 14900 | 1.0779 | 0.8310 | 0.8236 | 0.8382 | 0.8289 | 0.9096 |
| 0.049 | 32.8228 | 15000 | 0.9729 | 0.8408 | 0.8295 | 0.8520 | 0.8383 | 0.9208 |
| 0.0219 | 33.0416 | 15100 | 0.9851 | 0.8211 | 0.7661 | 0.8755 | 0.8098 | 0.9120 |
| 0.001 | 33.2604 | 15200 | 0.9834 | 0.8349 | 0.8256 | 0.8441 | 0.8326 | 0.9166 |
| 0.0009 | 33.4792 | 15300 | 1.0128 | 0.8270 | 0.7463 | 0.9069 | 0.8110 | 0.9130 |
| 0.0146 | 33.6980 | 15400 | 0.9835 | 0.8300 | 0.7790 | 0.8804 | 0.8200 | 0.9097 |
| 0.0184 | 33.9168 | 15500 | 0.8922 | 0.8290 | 0.8276 | 0.8304 | 0.8280 | 0.9183 |
| 0.0528 | 34.1357 | 15600 | 0.9727 | 0.8398 | 0.7899 | 0.8892 | 0.8306 | 0.9107 |
| 0.0018 | 34.3545 | 15700 | 1.0313 | 0.8413 | 0.8196 | 0.8627 | 0.8370 | 0.9065 |
| 0.0002 | 34.5733 | 15800 | 1.0882 | 0.8374 | 0.7978 | 0.8765 | 0.8299 | 0.9065 |
| 0.0002 | 34.7921 | 15900 | 1.0866 | 0.8379 | 0.8236 | 0.8520 | 0.8348 | 0.9045 |
| 0.0865 | 35.0109 | 16000 | 1.0595 | 0.8300 | 0.7602 | 0.8990 | 0.8164 | 0.8971 |
| 0.0004 | 35.2298 | 16100 | 1.0287 | 0.8344 | 0.7988 | 0.8696 | 0.8275 | 0.9041 |
| 0.0003 | 35.4486 | 16200 | 1.0652 | 0.8305 | 0.8176 | 0.8431 | 0.8275 | 0.8877 |
| 0.0006 | 35.6674 | 16300 | 1.0627 | 0.8270 | 0.7988 | 0.8549 | 0.8212 | 0.8848 |
| 0.0003 | 35.8862 | 16400 | 1.1173 | 0.8339 | 0.7780 | 0.8892 | 0.8233 | 0.8843 |
| 0.0002 | 36.1050 | 16500 | 1.1114 | 0.8379 | 0.8048 | 0.8706 | 0.8315 | 0.8948 |
| 0.0002 | 36.3239 | 16600 | 1.1165 | 0.8379 | 0.8137 | 0.8618 | 0.8331 | 0.8968 |
| 0.0004 | 36.5427 | 16700 | 1.1693 | 0.8369 | 0.8147 | 0.8588 | 0.8324 | 0.8918 |
| 0.0002 | 36.7615 | 16800 | 1.1609 | 0.8364 | 0.8325 | 0.8402 | 0.8350 | 0.8856 |
| 0.0007 | 36.9803 | 16900 | 1.1993 | 0.8334 | 0.8107 | 0.8559 | 0.8288 | 0.8935 |
| 0.0002 | 37.1991 | 17000 | 1.0206 | 0.8374 | 0.8652 | 0.8098 | 0.8410 | 0.9128 |
| 0.0024 | 37.4179 | 17100 | 0.9984 | 0.8359 | 0.7899 | 0.8814 | 0.8272 | 0.9094 |
| 0.0005 | 37.6368 | 17200 | 1.1162 | 0.8388 | 0.7671 | 0.9098 | 0.8256 | 0.8987 |
| 0.0008 | 37.8556 | 17300 | 0.9434 | 0.8433 | 0.8414 | 0.8451 | 0.8423 | 0.9146 |
| 0.0003 | 38.0744 | 17400 | 0.9508 | 0.8457 | 0.8523 | 0.8392 | 0.8460 | 0.9200 |
| 0.0003 | 38.2932 | 17500 | 1.0299 | 0.8379 | 0.8345 | 0.8412 | 0.8366 | 0.9183 |
| 0.0002 | 38.5120 | 17600 | 1.0518 | 0.8438 | 0.8325 | 0.8549 | 0.8413 | 0.9178 |
| 0.0015 | 38.7309 | 17700 | 1.0205 | 0.8472 | 0.8464 | 0.8480 | 0.8464 | 0.9210 |
| 0.0188 | 38.9497 | 17800 | 1.0644 | 0.8438 | 0.7968 | 0.8902 | 0.8353 | 0.9183 |
| 0.0002 | 39.1685 | 17900 | 1.0497 | 0.8443 | 0.8266 | 0.8618 | 0.8407 | 0.9220 |
| 0.0003 | 39.3873 | 18000 | 1.0802 | 0.8443 | 0.8236 | 0.8647 | 0.8402 | 0.9210 |
| 0.0002 | 39.6061 | 18100 | 1.1465 | 0.8393 | 0.7958 | 0.8824 | 0.8313 | 0.9186 |
| 0.0002 | 39.8249 | 18200 | 1.0551 | 0.8467 | 0.8147 | 0.8784 | 0.8409 | 0.9185 |
| 0.0002 | 40.0438 | 18300 | 1.0791 | 0.8467 | 0.8147 | 0.8784 | 0.8409 | 0.9171 |
| 0.0002 | 40.2626 | 18400 | 1.0902 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9175 |
| 0.0002 | 40.4814 | 18500 | 1.1028 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9175 |
| 0.0001 | 40.7002 | 18600 | 1.1156 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9165 |
| 0.0001 | 40.9190 | 18700 | 1.1266 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9168 |
| 0.0002 | 41.1379 | 18800 | 1.0527 | 0.8472 | 0.8246 | 0.8696 | 0.8430 | 0.9186 |
| 0.0002 | 41.3567 | 18900 | 1.0758 | 0.8477 | 0.8226 | 0.8725 | 0.8431 | 0.9190 |
| 0.0001 | 41.5755 | 19000 | 1.0940 | 0.8492 | 0.8216 | 0.8765 | 0.8442 | 0.9199 |
| 0.0268 | 41.7943 | 19100 | 0.9887 | 0.8374 | 0.8494 | 0.8255 | 0.8386 | 0.9196 |
| 0.002 | 42.0131 | 19200 | 1.0890 | 0.8354 | 0.7730 | 0.8971 | 0.8237 | 0.9172 |
| 0.0002 | 42.2319 | 19300 | 1.0668 | 0.8418 | 0.8147 | 0.8686 | 0.8366 | 0.9154 |
| 0.0001 | 42.4508 | 19400 | 1.1239 | 0.8383 | 0.7899 | 0.8863 | 0.8293 | 0.9150 |
| 0.0001 | 42.6696 | 19500 | 1.1372 | 0.8364 | 0.8285 | 0.8441 | 0.8343 | 0.9084 |
| 0.0001 | 42.8884 | 19600 | 1.1153 | 0.8393 | 0.7869 | 0.8912 | 0.8297 | 0.9200 |
| 0.0001 | 43.1072 | 19700 | 1.1482 | 0.8413 | 0.7790 | 0.9029 | 0.8300 | 0.9184 |
| 0.0001 | 43.3260 | 19800 | 1.1535 | 0.8388 | 0.7859 | 0.8912 | 0.8291 | 0.9180 |
| 0.0001 | 43.5449 | 19900 | 1.1138 | 0.8393 | 0.8236 | 0.8549 | 0.8360 | 0.9188 |
| 0.0001 | 43.7637 | 20000 | 1.1321 | 0.8393 | 0.8186 | 0.8598 | 0.8352 | 0.9176 |
| 0.0001 | 43.9825 | 20100 | 1.1473 | 0.8403 | 0.8147 | 0.8657 | 0.8354 | 0.9163 |
| 0.0001 | 44.2013 | 20200 | 1.1550 | 0.8413 | 0.8137 | 0.8686 | 0.8360 | 0.9154 |
| 0.0001 | 44.4201 | 20300 | 1.1630 | 0.8428 | 0.8127 | 0.8725 | 0.8372 | 0.9143 |
| 0.0001 | 44.6389 | 20400 | 1.1718 | 0.8428 | 0.8117 | 0.8735 | 0.8370 | 0.9133 |
| 0.0001 | 44.8578 | 20500 | 1.1793 | 0.8428 | 0.8117 | 0.8735 | 0.8370 | 0.9129 |
| 0.0001 | 45.0766 | 20600 | 1.1869 | 0.8418 | 0.8097 | 0.8735 | 0.8358 | 0.9121 |
| 0.0001 | 45.2954 | 20700 | 1.1931 | 0.8413 | 0.8087 | 0.8735 | 0.8352 | 0.9115 |
| 0.0001 | 45.5142 | 20800 | 1.1990 | 0.8418 | 0.8097 | 0.8735 | 0.8358 | 0.9103 |
| 0.0001 | 45.7330 | 20900 | 1.2056 | 0.8418 | 0.8087 | 0.8745 | 0.8356 | 0.9097 |
| 0.0001 | 45.9519 | 21000 | 1.2116 | 0.8423 | 0.8087 | 0.8755 | 0.8361 | 0.9092 |
| 0.0001 | 46.1707 | 21100 | 1.2176 | 0.8428 | 0.8087 | 0.8765 | 0.8365 | 0.9090 |
| 0.0001 | 46.3895 | 21200 | 1.2233 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9082 |
| 0.0001 | 46.6083 | 21300 | 1.2281 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9079 |
| 0.0001 | 46.8271 | 21400 | 1.2322 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9075 |
| 0.0001 | 47.0460 | 21500 | 1.2365 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9075 |
| 0.0001 | 47.2648 | 21600 | 1.2402 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9074 |
| 0.0001 | 47.4836 | 21700 | 1.2447 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9060 |
| 0.0001 | 47.7024 | 21800 | 1.2484 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9068 |
| 0.0001 | 47.9212 | 21900 | 1.2516 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9064 |
| 0.0 | 48.1400 | 22000 | 1.2546 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9068 |
| 0.0 | 48.3589 | 22100 | 1.2572 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9062 |
| 0.0 | 48.5777 | 22200 | 1.2603 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9058 |
| 0.0 | 48.7965 | 22300 | 1.2628 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9057 |
| 0.0 | 49.0153 | 22400 | 1.2647 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9053 |
| 0.0 | 49.2341 | 22500 | 1.2663 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9055 |
| 0.0 | 49.4530 | 22600 | 1.2679 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9058 |
| 0.0 | 49.6718 | 22700 | 1.2687 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9057 |
| 0.0 | 49.8906 | 22800 | 1.2691 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9061 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 2.20.0
- Tokenizers 0.21.0
|
AustinCarthy/MixGPT2_Domain_100KP_BFall_fromP_50K_topP_0.75_ratio5 | AustinCarthy | "2023-06-09T15:14:02Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2023-06-09T12:05:42Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_Domain_100KP_BFall_fromP_50K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_Domain_100KP_BFall_fromP_50K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2_using_phish_50K_top_p_0.75_domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0199
- Accuracy: 0.9980
- F1: 0.9782
- Precision: 0.9990
- Recall: 0.9582
- Roc Auc Score: 0.9791
- Tpr At Fpr 0.01: 0.96
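A minimal inference sketch, assuming the standard `transformers` text-classification pipeline; the URL below is illustrative only, and the label names depend on the uploaded config:

```python
from transformers import pipeline

# The checkpoint is a fine-tuned bert-base-uncased binary classifier for phishing URLs.
classifier = pipeline(
    "text-classification",
    model="AustinCarthy/MixGPT2_Domain_100KP_BFall_fromP_50K_topP_0.75_ratio5",
)

print(classifier("http://example-login-verify.com/account"))  # illustrative URL only
```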
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0058 | 1.0 | 28125 | 0.0175 | 0.9968 | 0.9656 | 0.9977 | 0.9356 | 0.9677 | 0.931 |
| 0.002 | 2.0 | 56250 | 0.0137 | 0.9978 | 0.9769 | 0.9979 | 0.9568 | 0.9783 | 0.9562 |
| 0.0015 | 3.0 | 84375 | 0.0197 | 0.9976 | 0.9741 | 0.9977 | 0.9516 | 0.9757 | 0.9482 |
| 0.0006 | 4.0 | 112500 | 0.0174 | 0.9976 | 0.9739 | 0.9983 | 0.9506 | 0.9753 | 0.9528 |
| 0.0 | 5.0 | 140625 | 0.0199 | 0.9980 | 0.9782 | 0.9990 | 0.9582 | 0.9791 | 0.96 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dima806/multiple_accent_classification | dima806 | "2024-10-19T10:58:20Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-08-18T18:19:06Z" | ---
license: apache-2.0
metrics:
- accuracy
- roc_auc
base_model:
- facebook/wav2vec2-base-960h
---
More details in https://www.kaggle.com/code/dima806/speech-accent-multiclass-classification |
MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0 | MichalMlodawski | "2024-09-20T10:50:03Z" | 191 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mobilevitv2",
"image-classification",
"eye",
"eyes",
"en",
"dataset:MichalMlodawski/closed-open-eyes",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-22T06:15:17Z" | ---
license: cc-by-nc-nd-4.0
datasets:
- MichalMlodawski/closed-open-eyes
language:
- en
tags:
- eye
- eyes
model-index:
- name: mobilevitv2 Eye State Classifier
results:
- task:
type: image-classification
dataset:
name: MichalMlodawski/closed-open-eyes
type: custom
metrics:
- name: Accuracy
type: self-reported
value: 99%
- name: Precision
type: self-reported
value: 99%
- name: Recall
type: self-reported
value: 99%
---
# 👁️ Open-Closed Eye Classification mobilevitv2 👁️
## Model Overview 🔍
This model is a fine-tuned version of mobilevitv2, specifically designed for classifying images of eyes as either open or closed. With an impressive accuracy of 99%, this classifier excels in distinguishing between open and closed eyes in various contexts.
## Model Details 📊
- **Model Name**: open-closed-eye-classification-mobilevitv2-1.0
- **Base Model**: apple/mobilevitv2-1.0-imagenet1k-256
- **Fine-tuned By**: Michał Młodawski
- **Categories**:
- 0: Closed Eyes 😴
- 1: Open Eyes 👀
- **Accuracy**: 99% 🎯
## Use Cases 💡
This high-accuracy model is particularly useful for applications involving:
- Driver Drowsiness Detection 🚗
- Attentiveness Monitoring in Educational Settings 🏫
- Medical Diagnostics related to Eye Conditions 🏥
- Facial Analysis in Photography and Videography 📸
- Human-Computer Interaction Systems 💻
## How It Works 🛠️
The model takes an input image and classifies it into one of two categories:
- **Closed Eyes** (0): Images where the subject's eyes are fully or mostly closed.
- **Open Eyes** (1): Images where the subject's eyes are open.
The classification leverages the advanced image processing capabilities of the MobileViTV2 architecture, fine-tuned on a carefully curated dataset of eye images.
## Getting Started 🚀
To start using the open-closed-eye-classification-mobilevitv2-1.0 model, you can integrate it into your projects with the following steps:
### Installation
```bash
pip install transformers==4.37.2
pip install torch==2.3.1
pip install Pillow
```
### Usage
```python
import os
from PIL import Image
import torch
from torchvision import transforms
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification
# Path to the folder with images
image_folder = ""
# Path to the model
model_path = "MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0"
# List of jpg files in the folder
jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]
# Check if there are jpg files in the folder
if not jpg_files:
print("🚫 No jpg files found in folder:", image_folder)
exit()
# Load the model and image processor
image_processor = AutoImageProcessor.from_pretrained(model_path)
model = MobileViTV2ForImageClassification.from_pretrained(model_path)
model.eval()
# Image transformations
transform = transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor()
])
# Processing and prediction for each image
results = []
for jpg_file in jpg_files:
selected_image = os.path.join(image_folder, jpg_file)
image = Image.open(selected_image).convert("RGB")
image_tensor = transform(image).unsqueeze(0)
# Process image using image_processor
inputs = image_processor(images=image, return_tensors="pt")
# Prediction using the model
with torch.no_grad():
outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
confidence, predicted = torch.max(probabilities, 1)
results.append((jpg_file, predicted.item(), confidence.item() * 100))
# Display results
print("🖼️ Image Classification Results 🖼️")
print("=" * 40)
for jpg_file, prediction, confidence in results:
emoji = "👁️" if prediction == 1 else "❌"
confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))
print(f"📄 File name: {jpg_file}")
print(f"{emoji} Prediction: {'Open' if prediction == 1 else 'Closed'}")
print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
print(f"{'=' * 40}")
print("🏁 Classification completed! 🎉")
```
## Disclaimer ⚠️
This model is provided for research and development purposes only. The creators and distributors of this model do not assume any legal responsibility for its use or misuse. Users are solely responsible for ensuring that their use of this model complies with applicable laws, regulations, and ethical standards. The model's performance may vary depending on the quality and nature of input images. Always validate results in critical applications.
🚫 Do not use this model for any illegal, unethical, or potentially harmful purposes.
📝 Please note that while the model demonstrates high accuracy, it should not be used as a sole decision-making tool in safety-critical systems without proper validation and human oversight. |
Piotrasz/Llama-2-7b-hf-ROME-50-en-32b | Piotrasz | "2024-06-18T14:56:38Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-18T14:30:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
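A generic sketch, assuming the weights load with the standard `transformers` causal-LM API (requires `accelerate` for `device_map="auto"`; the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Piotrasz/Llama-2-7b-hf-ROME-50-en-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```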
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gajanhcc/finetune_colpali_v1_2-ownep6 | gajanhcc | "2025-03-19T07:03:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"colpali",
"generated_from_trainer",
"base_model:vidore/colpaligemma-3b-pt-448-base",
"base_model:finetune:vidore/colpaligemma-3b-pt-448-base",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T07:03:32Z" | ---
library_name: transformers
license: gemma
base_model: vidore/colpaligemma-3b-pt-448-base
tags:
- colpali
- generated_from_trainer
model-index:
- name: finetune_colpali_v1_2-ownep6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_colpali_v1_2-ownep6
This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the gajanhcc/fashion-query-dataset-10samples dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 6
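These settings map roughly onto the following `TrainingArguments`; this is a sketch only, assuming the Hugging Face `Trainer` API, with the model/dataset wiring omitted and a placeholder `output_dir`:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetune_colpali_v1_2-ownep6",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size of 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=6,
    seed=42,
)
```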
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.01 | 1 | 0.2341 |
| 0.0068 | 1.0 | 100 | 0.0022 |
| 0.0 | 2.0 | 200 | 0.0038 |
| 0.0 | 3.0 | 300 | 0.0036 |
| 0.0001 | 4.0 | 400 | 0.0008 |
| 0.0018 | 5.0 | 500 | 0.0015 |
| 0.0002 | 6.0 | 600 | 0.0020 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
rafoo1337/alex | rafoo1337 | "2025-03-11T14:44:35Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-11T14:05:56Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
nick808p/nicklone | nick808p | "2025-01-20T06:17:11Z" | 14 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-20T05:55:59Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nicklone
---
# Nicklone
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nicklone` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nick808p/nicklone', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ehab215/AnemiaTypePrediction | ehab215 | "2024-05-06T18:03:01Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-06T18:01:48Z" | ---
license: apache-2.0
---
|
automerger/Experiment27Experiment28-7B | automerger | "2024-03-11T10:05:04Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:yam-peleg/Experiment27-7B",
"base_model:merge:yam-peleg/Experiment27-7B",
"base_model:yam-peleg/Experiment28-7B",
"base_model:merge:yam-peleg/Experiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-11T10:04:01Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- yam-peleg/Experiment27-7B
- yam-peleg/Experiment28-7B
---
# Experiment27Experiment28-7B
Experiment27Experiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment27-7B](https://huggingface.co/yam-peleg/Experiment27-7B)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment27-7B
layer_range: [0, 32]
- model: yam-peleg/Experiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment27-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27Experiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ymlee/test | ymlee | "2024-05-31T04:11:09Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-31T04:10:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
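A minimal sketch, assuming the checkpoint is a standard Whisper ASR model as the repository tags indicate (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ymlee/test")

# "audio.wav" is a placeholder path to an audio file (16 kHz mono works best).
print(asr("audio.wav")["text"])
```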
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
manuvallejos/loratrainingmnaindez | manuvallejos | "2024-10-02T01:47:31Z" | 40 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-10-02T01:25:12Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Loratrainingmnaindez
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuvallejos/loratrainingmnaindez', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
zhilong777/csllm | zhilong777 | "2025-03-13T06:30:43Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-13T06:30:43Z" | ---
license: apache-2.0
---
|
squarelike/Gugugo-koen-7B-V1.1 | squarelike | "2024-03-29T06:02:43Z" | 298 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | "2023-10-27T14:38:43Z" | ---
license: apache-2.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- en
- ko
pipeline_tag: translation
---
# Gugugo-koen-7B-V1.1
Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)

**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation).
I trained with a single A6000 GPU for 90 hours.
## **Prompt Template**
**KO->EN**
```
### 한국어: {sentence}</끝>
### 영어:
```
**EN->KO**
```
### 영어: {sentence}</끝>
### 한국어:
```
GPTQ, AWQ, and GGUF versions are also available:
[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ)
[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ)
[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF)
## **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch
repo = "squarelike/Gugugo-koen-7B-V1.1"
model = AutoModelForCausalLM.from_pretrained(
repo,
    load_in_4bit=True,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
class StoppingCriteriaSub(StoppingCriteria):
def __init__(self, stops = [], encounters=1):
super().__init__()
self.stops = [stop for stop in stops]
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
for stop in self.stops:
if torch.all((stop == input_ids[0][-len(stop):])).item():
return True
return False
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda")
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])
def gen(lan="en", x=""):
if (lan == "ko"):
prompt = f"### 한국어: {x}</끝>\n### 영어:"
else:
prompt = f"### 영어: {x}</끝>\n### 한국어:"
gened = model.generate(
**tokenizer(
prompt,
return_tensors='pt',
return_token_type_ids=False
).to("cuda"),
max_new_tokens=2000,
temperature=0.3,
# no_repeat_ngram_size=5,
num_beams=5,
stopping_criteria=stopping_criteria
)
return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "")
print(gen(lan="en", x="Hello, world!"))
``` |
JvThunder/a2c-AntBulletEnv-v0 | JvThunder | "2023-07-20T09:02:28Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-17T19:07:25Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1480.48 +/- 111.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual SB3 naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="JvThunder/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Helsinki-NLP/opus-mt-ty-fr | Helsinki-NLP | "2023-08-16T12:07:56Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ty",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ty-fr
* source languages: ty
* target languages: fr
* OPUS readme: [ty-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fr | 30.2 | 0.480 |
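## Usage
The card ships without a usage snippet; a minimal sketch with the standard Marian classes from `transformers` (the Tahitian example sentence is only a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ty-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder Tahitian input; tokenization is handled by the bundled SentencePiece model.
batch = tokenizer(["Ia ora na"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```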
|
tahaenesaslanturk/ts-corpus-bpe-256k-cased | tahaenesaslanturk | "2024-04-30T22:39:04Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-30T22:36:56Z" | ---
license: mit
---
# TS-Corpus BPE Tokenizer (256k, Cased)
## Overview
This repository hosts a case-sensitive (cased) Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 256,000, trained on several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.
## Dataset Sources
The tokenizer was trained on a variety of text sources from TS Corpus, ensuring a broad linguistic coverage. These sources include:
- [TS Corpus V2](https://tscorpus.com/corpora/ts-corpus-v2/)
- [TS Wikipedia Corpus](https://tscorpus.com/corpora/ts-wikipedia-corpus/)
- [TS Abstract Corpus](https://tscorpus.com/corpora/ts-abstract-corpus/)
- [TS Idioms and Proverbs Corpus](https://tscorpus.com/corpora/ts-idioms-and-proverbs-corpus/)
- [Syllable Corpus](https://tscorpus.com/corpora/syllable-corpus/)
- [Turkish Constitution Corpus](https://tscorpus.com/corpora/turkish-constitution-corpus/)
The inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.
## Tokenizer Model
Utilizing the Byte Pair Encoding (BPE) method, this tokenizer excels in efficiently managing subword units without the need for an extensive vocabulary. BPE is especially suitable for handling the agglutinative nature of Turkish, where words can have multiple suffixes.
## Usage
To use this tokenizer in your projects, load it with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-bpe-256k-cased")
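
# Quick check (hypothetical example sentence): BPE should split Turkish words into subword pieces.
print(tokenizer.tokenize("Kitaplarımızdan öğrendiklerimizi unutmayalım."))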
``` |
appohfaiths/ghanaba-distilbert-gpt2-edu-med | appohfaiths | "2025-01-13T10:02:39Z" | 23 | 0 | null | [
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-01-13T08:15:50Z" | ---
license: apache-2.0
language:
- en
base_model:
- distilbert/distilgpt2
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
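Pending an official snippet, here is a minimal sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline (it is a distilgpt2 finetune; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="appohfaiths/ghanaba-distilbert-gpt2-edu-med")
print(generator("Replace this placeholder prompt", max_new_tokens=50)[0]["generated_text"])
```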
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gyaneshere/ppo-PyramidsTraining | Gyaneshere | "2025-02-06T06:43:49Z" | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2025-02-06T06:43:40Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Gyaneshere/ppo-PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|