---
base_model: sentence-transformers/multi-qa-mpnet-base-dot-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- loss:TripletLoss
---
# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) <!-- at revision 4633e80e17ea975bc090c97b049da26062b054d3 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
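For reference, the same two-module stack (an MPNet transformer followed by CLS-token pooling over 768-dimensional embeddings) can be assembled explicitly with the `models` API. This is a minimal sketch built from the base checkpoint named above, not the exact construction used for this model:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer module: MPNet backbone with a 512-token maximum sequence length
word_embedding_model = models.Transformer(
    "sentence-transformers/multi-qa-mpnet-base-dot-v1",
    max_seq_length=512,
)

# Pooling module: CLS-token pooling, matching `pooling_mode_cls_token: True` above
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode="cls",
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)  # prints a module structure equivalent to the one shown above
```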
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yashmalviya/multi-qa-mpnet-base-dot-v1-4-frozen")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
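Because the model uses dot-product similarity, it can also be applied directly to semantic search. The sketch below is illustrative only; the query and corpus are invented examples, not data from this model's training:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yashmalviya/multi-qa-mpnet-base-dot-v1-4-frozen")

# Hypothetical query and candidate passages for illustration
query = "Which city is the capital of France?"
corpus = [
    "The capital of France is Paris.",
    "Multi-qa models are trained for question-answer retrieval.",
    "He drove to the stadium.",
]

query_embedding = model.encode(query)
corpus_embeddings = model.encode(corpus)

# model.similarity applies the model's configured similarity function (dot product here)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(corpus[best], scores[0][best].item())
```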
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-06
- `max_steps`: 82000
- `lr_scheduler_type`: inverse_sqrt
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: 82000
- `lr_scheduler_type`: inverse_sqrt
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
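The `loss:TripletLoss` and `generated_from_trainer` tags, together with the non-default hyperparameters listed above, suggest a TripletLoss fine-tune run through the Sentence Transformers trainer. The following is a hedged reconstruction under those assumptions: the actual training dataset, any frozen layers, and the original script are not documented in this card, so the triplet data shown is a placeholder.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")

# Placeholder triplet dataset with (anchor, positive, negative) columns;
# the real training data is not documented in this card.
train_dataset = Dataset.from_dict({
    "anchor": ["How do I reset my password?"],
    "positive": ["Steps for resetting a forgotten password."],
    "negative": ["Opening hours of the local library."],
})

loss = TripletLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="multi-qa-mpnet-base-dot-v1-4-frozen",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-6,
    max_steps=82000,
    lr_scheduler_type="inverse_sqrt",
    fp16=True,  # requires a CUDA device
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    # eval_strategy="steps" (as in the card) would also require an eval_dataset
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```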
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| None | 0 | - |
| 0.0001 | 10 | 0.2111 |
| 0.0002 | 20 | 0.208 |
| 0.0004 | 30 | 0.2087 |
| 0.0005 | 40 | 0.2095 |
| 0.0006 | 50 | 0.207 |
| 0.0007 | 60 | 0.2064 |
| 0.0009 | 70 | 0.2079 |
| 0.0010 | 80 | 0.2046 |
| 0.0011 | 90 | 0.2032 |
| 0.0012 | 100 | 0.2052 |
| 0.0013 | 110 | 0.2036 |
| 0.0015 | 120 | 0.1995 |
| 0.0016 | 130 | 0.1991 |
| 0.0017 | 140 | 0.1991 |
| 0.0018 | 150 | 0.2035 |
| 0.0020 | 160 | 0.2021 |
| 0.0021 | 170 | 0.2036 |
| 0.0022 | 180 | 0.2017 |
| 0.0023 | 190 | 0.1966 |
| 0.0024 | 200 | 0.2013 |
| 0.0026 | 210 | 0.2056 |
| 0.0027 | 220 | 0.1969 |
| 0.0028 | 230 | 0.1999 |
| 0.0029 | 240 | 0.1887 |
| 0.0030 | 250 | 0.2004 |
| 0.0032 | 260 | 0.1967 |
| 0.0033 | 270 | 0.1969 |
| 0.0034 | 280 | 0.1947 |
| 0.0035 | 290 | 0.1944 |
| 0.0037 | 300 | 0.1919 |
| 0.0038 | 310 | 0.196 |
| 0.0039 | 320 | 0.192 |
| 0.0040 | 330 | 0.1919 |
| 0.0041 | 340 | 0.1836 |
| 0.0043 | 350 | 0.1878 |
| 0.0044 | 360 | 0.1894 |
| 0.0045 | 370 | 0.1965 |
| 0.0046 | 380 | 0.1948 |
| 0.0048 | 390 | 0.1878 |
| 0.0049 | 400 | 0.1954 |
| 0.005 | 410 | 0.1849 |
| 0.0051 | 420 | 0.1918 |
| 0.0052 | 430 | 0.1878 |
| 0.0054 | 440 | 0.1869 |
| 0.0055 | 450 | 0.1862 |
| 0.0056 | 460 | 0.1885 |
| 0.0057 | 470 | 0.1884 |
| 0.0059 | 480 | 0.1831 |
| 0.0060 | 490 | 0.1838 |
| 0.0061 | 500 | 0.1901 |
| 0.0062 | 510 | 0.1832 |
| 0.0063 | 520 | 0.1809 |
| 0.0065 | 530 | 0.1813 |
| 0.0066 | 540 | 0.1779 |
| 0.0067 | 550 | 0.1794 |
| 0.0068 | 560 | 0.186 |
| 0.0070 | 570 | 0.18 |
| 0.0071 | 580 | 0.1823 |
| 0.0072 | 590 | 0.1736 |
| 0.0073 | 600 | 0.1747 |
| 0.0074 | 610 | 0.185 |
| 0.0076 | 620 | 0.1701 |
| 0.0077 | 630 | 0.1819 |
| 0.0078 | 640 | 0.1778 |
| 0.0079 | 650 | 0.1694 |
| 0.0080 | 660 | 0.1699 |
| 0.0082 | 670 | 0.1746 |
| 0.0083 | 680 | 0.1851 |
| 0.0084 | 690 | 0.1728 |
| 0.0085 | 700 | 0.1704 |
| 0.0087 | 710 | 0.1741 |
| 0.0088 | 720 | 0.1616 |
| 0.0089 | 730 | 0.1676 |
| 0.0090 | 740 | 0.1708 |
| 0.0091 | 750 | 0.1723 |
| 0.0093 | 760 | 0.1677 |
| 0.0094 | 770 | 0.1682 |
| 0.0095 | 780 | 0.1585 |
| 0.0096 | 790 | 0.16 |
| 0.0098 | 800 | 0.1644 |
| 0.0099 | 810 | 0.166 |
| 0.01 | 820 | 0.173 |
| 0.0101 | 830 | 0.2083 |
| 0.0102 | 840 | 0.2063 |
| 0.0104 | 850 | 0.2107 |
| 0.0105 | 860 | 0.2054 |
| 0.0106 | 870 | 0.2061 |
| 0.0107 | 880 | 0.2115 |
| 0.0109 | 890 | 0.2043 |
| 0.0110 | 900 | 0.2058 |
| 0.0111 | 910 | 0.2081 |
| 0.0112 | 920 | 0.2035 |
| 0.0113 | 930 | 0.2058 |
| 0.0115 | 940 | 0.1983 |
| 0.0116 | 950 | 0.2058 |
| 0.0117 | 960 | 0.2055 |
| 0.0118 | 970 | 0.209 |
| 0.0120 | 980 | 0.2058 |
| 0.0121 | 990 | 0.2096 |
| 0.0122 | 1000 | 0.2077 |
| 0.0123 | 1010 | 0.2055 |
| 0.0124 | 1020 | 0.2072 |
| 0.0126 | 1030 | 0.2029 |
| 0.0127 | 1040 | 0.2042 |
| 0.0128 | 1050 | 0.2032 |
| 0.0129 | 1060 | 0.2014 |
| 0.0130 | 1070 | 0.2071 |
| 0.0132 | 1080 | 0.2039 |
| 0.0133 | 1090 | 0.2096 |
| 0.0134 | 1100 | 0.2051 |
| 0.0135 | 1110 | 0.1997 |
| 0.0137 | 1120 | 0.2048 |
| 0.0138 | 1130 | 0.2042 |
| 0.0139 | 1140 | 0.2038 |
| 0.0140 | 1150 | 0.204 |
| 0.0141 | 1160 | 0.2034 |
| 0.0143 | 1170 | 0.201 |
| 0.0144 | 1180 | 0.2027 |
| 0.0145 | 1190 | 0.1998 |
| 0.0146 | 1200 | 0.2047 |
| 0.0148 | 1210 | 0.2046 |
| 0.0149 | 1220 | 0.2012 |
| 0.015 | 1230 | 0.2027 |
| 0.0151 | 1240 | 0.1978 |
| 0.0152 | 1250 | 0.2041 |
| 0.0154 | 1260 | 0.196 |
| 0.0155 | 1270 | 0.2047 |
| 0.0156 | 1280 | 0.2004 |
| 0.0157 | 1290 | 0.2025 |
| 0.0159 | 1300 | 0.2009 |
| 0.0160 | 1310 | 0.2009 |
| 0.0161 | 1320 | 0.2021 |
| 0.0162 | 1330 | 0.1983 |
| 0.0163 | 1340 | 0.2007 |
| 0.0165 | 1350 | 0.2059 |
| 0.0166 | 1360 | 0.1993 |
| 0.0167 | 1370 | 0.2017 |
| 0.0168 | 1380 | 0.1973 |
| 0.0170 | 1390 | 0.2042 |
| 0.0171 | 1400 | 0.205 |
| 0.0172 | 1410 | 0.2006 |
| 0.0173 | 1420 | 0.1987 |
| 0.0174 | 1430 | 0.1989 |
| 0.0176 | 1440 | 0.2004 |
| 0.0177 | 1450 | 0.1959 |
| 0.0178 | 1460 | 0.198 |
| 0.0179 | 1470 | 0.2004 |
| 0.0180 | 1480 | 0.1972 |
| 0.0182 | 1490 | 0.1956 |
| 0.0183 | 1500 | 0.1947 |
| 0.0184 | 1510 | 0.1996 |
| 0.0185 | 1520 | 0.1939 |
| 0.0187 | 1530 | 0.1935 |
| 0.0188 | 1540 | 0.1971 |
| 0.0189 | 1550 | 0.1979 |
| 0.0190 | 1560 | 0.1967 |
| 0.0191 | 1570 | 0.1986 |
| 0.0193 | 1580 | 0.1957 |
| 0.0194 | 1590 | 0.2007 |
| 0.0195 | 1600 | 0.1982 |
| 0.0196 | 1610 | 0.1986 |
| 0.0198 | 1620 | 0.2014 |
| 0.0199 | 1630 | 0.1918 |
| 0.02 | 1640 | 0.1984 |
| 0.0201 | 1650 | 0.2065 |
| 0.0202 | 1660 | 0.2039 |
| 0.0204 | 1670 | 0.2091 |
| 0.0205 | 1680 | 0.2051 |
| 0.0206 | 1690 | 0.2069 |
| 0.0207 | 1700 | 0.2051 |
| 0.0209 | 1710 | 0.2131 |
| 0.0210 | 1720 | 0.2021 |
| 0.0211 | 1730 | 0.2049 |
| 0.0212 | 1740 | 0.2047 |
| 0.0213 | 1750 | 0.2025 |
| 0.0215 | 1760 | 0.2116 |
| 0.0216 | 1770 | 0.2043 |
| 0.0217 | 1780 | 0.2065 |
| 0.0218 | 1790 | 0.2048 |
| 0.0220 | 1800 | 0.2056 |
| 0.0221 | 1810 | 0.204 |
| 0.0222 | 1820 | 0.2079 |
| 0.0223 | 1830 | 0.2085 |
| 0.0224 | 1840 | 0.2074 |
| 0.0226 | 1850 | 0.2059 |
| 0.0227 | 1860 | 0.2071 |
| 0.0228 | 1870 | 0.2012 |
| 0.0229 | 1880 | 0.2046 |
| 0.0230 | 1890 | 0.2045 |
| 0.0232 | 1900 | 0.2055 |
| 0.0233 | 1910 | 0.2022 |
| 0.0234 | 1920 | 0.2066 |
| 0.0235 | 1930 | 0.2034 |
| 0.0237 | 1940 | 0.2042 |
| 0.0238 | 1950 | 0.2073 |
| 0.0239 | 1960 | 0.2046 |
| 0.0240 | 1970 | 0.2072 |
| 0.0241 | 1980 | 0.2053 |
| 0.0243 | 1990 | 0.1999 |
| 0.0244 | 2000 | 0.2046 |
| 0.0245 | 2010 | 0.2068 |
| 0.0246 | 2020 | 0.203 |
| 0.0248 | 2030 | 0.2019 |
| 0.0249 | 2040 | 0.2026 |
| 0.025 | 2050 | 0.2029 |
| 0.0251 | 2060 | 0.2055 |
| 0.0252 | 2070 | 0.2039 |
| 0.0254 | 2080 | 0.2067 |
| 0.0255 | 2090 | 0.2099 |
| 0.0256 | 2100 | 0.202 |
| 0.0257 | 2110 | 0.2018 |
| 0.0259 | 2120 | 0.2083 |
| 0.0260 | 2130 | 0.2036 |
| 0.0261 | 2140 | 0.2025 |
| 0.0262 | 2150 | 0.2062 |
| 0.0263 | 2160 | 0.2059 |
| 0.0265 | 2170 | 0.2038 |
| 0.0266 | 2180 | 0.2026 |
| 0.0267 | 2190 | 0.2013 |
| 0.0268 | 2200 | 0.2 |
| 0.0270 | 2210 | 0.2037 |
| 0.0271 | 2220 | 0.2042 |
| 0.0272 | 2230 | 0.2059 |
| 0.0273 | 2240 | 0.2036 |
| 0.0274 | 2250 | 0.2046 |
| 0.0276 | 2260 | 0.2072 |
| 0.0277 | 2270 | 0.2008 |
| 0.0278 | 2280 | 0.2015 |
| 0.0279 | 2290 | 0.2049 |
| 0.0280 | 2300 | 0.1979 |
| 0.0282 | 2310 | 0.2049 |
| 0.0283 | 2320 | 0.2095 |
| 0.0284 | 2330 | 0.209 |
| 0.0285 | 2340 | 0.2012 |
| 0.0287 | 2350 | 0.2053 |
| 0.0288 | 2360 | 0.2019 |
| 0.0289 | 2370 | 0.2063 |
| 0.0290 | 2380 | 0.2056 |
| 0.0291 | 2390 | 0.1971 |
| 0.0293 | 2400 | 0.1969 |
| 0.0294 | 2410 | 0.2065 |
| 0.0295 | 2420 | 0.1998 |
| 0.0296 | 2430 | 0.2006 |
| 0.0298 | 2440 | 0.2029 |
| 0.0299 | 2450 | 0.1984 |
| 0.03 | 2460 | 0.202 |
| 0.0301 | 2470 | 0.2074 |
| 0.0302 | 2480 | 0.2107 |
| 0.0304 | 2490 | 0.2091 |
| 0.0305 | 2500 | 0.2066 |
| 0.0306 | 2510 | 0.2057 |
| 0.0307 | 2520 | 0.2077 |
| 0.0309 | 2530 | 0.2047 |
| 0.0310 | 2540 | 0.211 |
| 0.0311 | 2550 | 0.2088 |
| 0.0312 | 2560 | 0.209 |
| 0.0313 | 2570 | 0.2081 |
| 0.0315 | 2580 | 0.2044 |
| 0.0316 | 2590 | 0.2041 |
| 0.0317 | 2600 | 0.2052 |
| 0.0318 | 2610 | 0.2082 |
| 0.0320 | 2620 | 0.2026 |
| 0.0321 | 2630 | 0.2 |
| 0.0322 | 2640 | 0.2079 |
| 0.0323 | 2650 | 0.205 |
| 0.0324 | 2660 | 0.2081 |
| 0.0326 | 2670 | 0.2064 |
| 0.0327 | 2680 | 0.2107 |
| 0.0328 | 2690 | 0.2105 |
| 0.0329 | 2700 | 0.208 |
| 0.0330 | 2710 | 0.2044 |
| 0.0332 | 2720 | 0.2047 |
| 0.0333 | 2730 | 0.2049 |
| 0.0334 | 2740 | 0.2019 |
| 0.0335 | 2750 | 0.2075 |
| 0.0337 | 2760 | 0.2056 |
| 0.0338 | 2770 | 0.2064 |
| 0.0339 | 2780 | 0.2069 |
| 0.0340 | 2790 | 0.2036 |
| 0.0341 | 2800 | 0.2048 |
| 0.0343 | 2810 | 0.2062 |
| 0.0344 | 2820 | 0.2049 |
| 0.0345 | 2830 | 0.2036 |
| 0.0346 | 2840 | 0.2041 |
| 0.0348 | 2850 | 0.2085 |
| 0.0349 | 2860 | 0.2047 |
| 0.035 | 2870 | 0.2062 |
| 0.0351 | 2880 | 0.2055 |
| 0.0352 | 2890 | 0.2084 |
| 0.0354 | 2900 | 0.2003 |
| 0.0355 | 2910 | 0.2053 |
| 0.0356 | 2920 | 0.2031 |
| 0.0357 | 2930 | 0.205 |
| 0.0359 | 2940 | 0.2048 |
| 0.0360 | 2950 | 0.2067 |
| 0.0361 | 2960 | 0.2069 |
| 0.0362 | 2970 | 0.2046 |
| 0.0363 | 2980 | 0.2043 |
| 0.0365 | 2990 | 0.2028 |
| 0.0366 | 3000 | 0.206 |
| 0.0367 | 3010 | 0.2035 |
| 0.0368 | 3020 | 0.2038 |
| 0.0370 | 3030 | 0.2066 |
| 0.0371 | 3040 | 0.207 |
| 0.0372 | 3050 | 0.2074 |
| 0.0373 | 3060 | 0.2067 |
| 0.0374 | 3070 | 0.2004 |
| 0.0376 | 3080 | 0.201 |
| 0.0377 | 3090 | 0.2038 |
| 0.0378 | 3100 | 0.2048 |
| 0.0379 | 3110 | 0.2006 |
| 0.0380 | 3120 | 0.2057 |
| 0.0382 | 3130 | 0.2039 |
| 0.0383 | 3140 | 0.2073 |
| 0.0384 | 3150 | 0.2026 |
| 0.0385 | 3160 | 0.2027 |
| 0.0387 | 3170 | 0.2047 |
| 0.0388 | 3180 | 0.2067 |
| 0.0389 | 3190 | 0.2029 |
| 0.0390 | 3200 | 0.2031 |
| 0.0391 | 3210 | 0.2015 |
| 0.0393 | 3220 | 0.2072 |
| 0.0394 | 3230 | 0.2024 |
| 0.0395 | 3240 | 0.2078 |
| 0.0396 | 3250 | 0.209 |
| 0.0398 | 3260 | 0.2017 |
| 0.0399 | 3270 | 0.2042 |
| 0.04 | 3280 | 0.2058 |
| 0.0401 | 3290 | 0.2098 |
| 0.0402 | 3300 | 0.2021 |
| 0.0404 | 3310 | 0.2045 |
| 0.0405 | 3320 | 0.2099 |
| 0.0406 | 3330 | 0.2086 |
| 0.0407 | 3340 | 0.2077 |
| 0.0409 | 3350 | 0.2049 |
| 0.0410 | 3360 | 0.2036 |
| 0.0411 | 3370 | 0.2085 |
| 0.0412 | 3380 | 0.2103 |
| 0.0413 | 3390 | 0.2066 |
| 0.0415 | 3400 | 0.2043 |
| 0.0416 | 3410 | 0.2062 |
| 0.0417 | 3420 | 0.2086 |
| 0.0418 | 3430 | 0.2021 |
| 0.0420 | 3440 | 0.2037 |
| 0.0421 | 3450 | 0.2057 |
| 0.0422 | 3460 | 0.2057 |
| 0.0423 | 3470 | 0.2068 |
| 0.0424 | 3480 | 0.2069 |
| 0.0426 | 3490 | 0.2056 |
| 0.0427 | 3500 | 0.2081 |
| 0.0428 | 3510 | 0.2093 |
| 0.0429 | 3520 | 0.2044 |
| 0.0430 | 3530 | 0.202 |
| 0.0432 | 3540 | 0.2055 |
| 0.0433 | 3550 | 0.2096 |
| 0.0434 | 3560 | 0.2035 |
| 0.0435 | 3570 | 0.2061 |
| 0.0437 | 3580 | 0.2037 |
| 0.0438 | 3590 | 0.2065 |
| 0.0439 | 3600 | 0.2045 |
| 0.0440 | 3610 | 0.2065 |
| 0.0441 | 3620 | 0.2068 |
| 0.0443 | 3630 | 0.2049 |
| 0.0444 | 3640 | 0.2041 |
| 0.0445 | 3650 | 0.2061 |
| 0.0446 | 3660 | 0.2043 |
| 0.0448 | 3670 | 0.2047 |
| 0.0449 | 3680 | 0.2043 |
| 0.045 | 3690 | 0.2065 |
| 0.0451 | 3700 | 0.207 |
| 0.0452 | 3710 | 0.2067 |
| 0.0454 | 3720 | 0.2004 |
| 0.0455 | 3730 | 0.2071 |
| 0.0456 | 3740 | 0.2028 |
| 0.0457 | 3750 | 0.2064 |
| 0.0459 | 3760 | 0.2052 |
| 0.0460 | 3770 | 0.2035 |
| 0.0461 | 3780 | 0.2044 |
| 0.0462 | 3790 | 0.2056 |
| 0.0463 | 3800 | 0.2042 |
| 0.0465 | 3810 | 0.2034 |
| 0.0466 | 3820 | 0.2041 |
| 0.0467 | 3830 | 0.2077 |
| 0.0468 | 3840 | 0.2028 |
| 0.0470 | 3850 | 0.2061 |
| 0.0471 | 3860 | 0.2042 |
| 0.0472 | 3870 | 0.2038 |
| 0.0473 | 3880 | 0.2021 |
| 0.0474 | 3890 | 0.2022 |
| 0.0476 | 3900 | 0.2009 |
| 0.0477 | 3910 | 0.2066 |
| 0.0478 | 3920 | 0.2007 |
| 0.0479 | 3930 | 0.2042 |
| 0.0480 | 3940 | 0.2066 |
| 0.0482 | 3950 | 0.2051 |
| 0.0483 | 3960 | 0.2039 |
| 0.0484 | 3970 | 0.2027 |
| 0.0485 | 3980 | 0.2052 |
| 0.0487 | 3990 | 0.2024 |
| 0.0488 | 4000 | 0.2047 |
| 0.0489 | 4010 | 0.2024 |
| 0.0490 | 4020 | 0.2027 |
| 0.0491 | 4030 | 0.2048 |
| 0.0493 | 4040 | 0.2042 |
| 0.0494 | 4050 | 0.203 |
| 0.0495 | 4060 | 0.2047 |
| 0.0496 | 4070 | 0.2074 |
| 0.0498 | 4080 | 0.205 |
| 0.0499 | 4090 | 0.2052 |
| 0.05 | 4100 | 0.2037 |
| 0.0501 | 4110 | 0.2073 |
| 0.0502 | 4120 | 0.2081 |
| 0.0504 | 4130 | 0.2083 |
| 0.0505 | 4140 | 0.2067 |
| 0.0506 | 4150 | 0.209 |
| 0.0507 | 4160 | 0.2055 |
| 0.0509 | 4170 | 0.2087 |
| 0.0510 | 4180 | 0.2089 |
| 0.0511 | 4190 | 0.2121 |
| 0.0512 | 4200 | 0.2052 |
| 0.0513 | 4210 | 0.2059 |
| 0.0515 | 4220 | 0.2065 |
| 0.0516 | 4230 | 0.2066 |
| 0.0517 | 4240 | 0.2039 |
| 0.0518 | 4250 | 0.2047 |
| 0.0520 | 4260 | 0.2069 |
| 0.0521 | 4270 | 0.2018 |
| 0.0522 | 4280 | 0.2066 |
| 0.0523 | 4290 | 0.2096 |
| 0.0524 | 4300 | 0.2064 |
| 0.0526 | 4310 | 0.2047 |
| 0.0527 | 4320 | 0.2052 |
| 0.0528 | 4330 | 0.2058 |
| 0.0529 | 4340 | 0.204 |
| 0.0530 | 4350 | 0.2037 |
| 0.0532 | 4360 | 0.2062 |
| 0.0533 | 4370 | 0.2043 |
| 0.0534 | 4380 | 0.2072 |
| 0.0535 | 4390 | 0.2088 |
| 0.0537 | 4400 | 0.2069 |
| 0.0538 | 4410 | 0.2035 |
| 0.0539 | 4420 | 0.2036 |
| 0.0540 | 4430 | 0.2034 |
| 0.0541 | 4440 | 0.2046 |
| 0.0543 | 4450 | 0.2056 |
| 0.0544 | 4460 | 0.2067 |
| 0.0545 | 4470 | 0.2077 |
| 0.0546 | 4480 | 0.2054 |
| 0.0548 | 4490 | 0.2052 |
| 0.0549 | 4500 | 0.204 |
| 0.055 | 4510 | 0.2045 |
| 0.0551 | 4520 | 0.2041 |
| 0.0552 | 4530 | 0.2061 |
| 0.0554 | 4540 | 0.2041 |
| 0.0555 | 4550 | 0.2044 |
| 0.0556 | 4560 | 0.2092 |
| 0.0557 | 4570 | 0.2002 |
| 0.0559 | 4580 | 0.2045 |
| 0.0560 | 4590 | 0.2056 |
| 0.0561 | 4600 | 0.2034 |
| 0.0562 | 4610 | 0.2065 |
| 0.0563 | 4620 | 0.2048 |
| 0.0565 | 4630 | 0.206 |
| 0.0566 | 4640 | 0.2064 |
| 0.0567 | 4650 | 0.2073 |
| 0.0568 | 4660 | 0.204 |
| 0.0570 | 4670 | 0.2019 |
| 0.0571 | 4680 | 0.2077 |
| 0.0572 | 4690 | 0.207 |
| 0.0573 | 4700 | 0.2061 |
| 0.0574 | 4710 | 0.2036 |
| 0.0576 | 4720 | 0.2028 |
| 0.0577 | 4730 | 0.2051 |
| 0.0578 | 4740 | 0.2053 |
| 0.0579 | 4750 | 0.2016 |
| 0.0580 | 4760 | 0.2075 |
| 0.0582 | 4770 | 0.2048 |
| 0.0583 | 4780 | 0.2049 |
| 0.0584 | 4790 | 0.2058 |
| 0.0585 | 4800 | 0.2066 |
| 0.0587 | 4810 | 0.2022 |
| 0.0588 | 4820 | 0.2036 |
| 0.0589 | 4830 | 0.2 |
| 0.0590 | 4840 | 0.2059 |
| 0.0591 | 4850 | 0.204 |
| 0.0593 | 4860 | 0.2058 |
| 0.0594 | 4870 | 0.2052 |
| 0.0595 | 4880 | 0.2034 |
| 0.0596 | 4890 | 0.2024 |
| 0.0598 | 4900 | 0.2062 |
| 0.0599 | 4910 | 0.2079 |
| 0.06 | 4920 | 0.2036 |
| 0.0601 | 4930 | 0.2079 |
| 0.0602 | 4940 | 0.2089 |
| 0.0604 | 4950 | 0.207 |
| 0.0605 | 4960 | 0.2091 |
| 0.0606 | 4970 | 0.2058 |
| 0.0607 | 4980 | 0.2029 |
| 0.0609 | 4990 | 0.2072 |
| 0.0610 | 5000 | 0.2064 |
| 0.0611 | 5010 | 0.2083 |
| 0.0612 | 5020 | 0.2064 |
| 0.0613 | 5030 | 0.2055 |
| 0.0615 | 5040 | 0.2067 |
| 0.0616 | 5050 | 0.2057 |
| 0.0617 | 5060 | 0.2072 |
| 0.0618 | 5070 | 0.2099 |
| 0.0620 | 5080 | 0.205 |
| 0.0621 | 5090 | 0.2095 |
| 0.0622 | 5100 | 0.2062 |
| 0.0623 | 5110 | 0.2028 |
| 0.0624 | 5120 | 0.2054 |
| 0.0626 | 5130 | 0.2063 |
| 0.0627 | 5140 | 0.2066 |
| 0.0628 | 5150 | 0.2075 |
| 0.0629 | 5160 | 0.208 |
| 0.0630 | 5170 | 0.2052 |
| 0.0632 | 5180 | 0.2068 |
| 0.0633 | 5190 | 0.2068 |
| 0.0634 | 5200 | 0.2048 |
| 0.0635 | 5210 | 0.2057 |
| 0.0637 | 5220 | 0.206 |
| 0.0638 | 5230 | 0.204 |
| 0.0639 | 5240 | 0.2075 |
| 0.0640 | 5250 | 0.204 |
| 0.0641 | 5260 | 0.2064 |
| 0.0643 | 5270 | 0.2088 |
| 0.0644 | 5280 | 0.2022 |
| 0.0645 | 5290 | 0.2054 |
| 0.0646 | 5300 | 0.2057 |
| 0.0648 | 5310 | 0.2075 |
| 0.0649 | 5320 | 0.2047 |
| 0.065 | 5330 | 0.2045 |
| 0.0651 | 5340 | 0.2068 |
| 0.0652 | 5350 | 0.205 |
| 0.0654 | 5360 | 0.2061 |
| 0.0655 | 5370 | 0.2085 |
| 0.0656 | 5380 | 0.207 |
| 0.0657 | 5390 | 0.2036 |
| 0.0659 | 5400 | 0.2067 |
| 0.0660 | 5410 | 0.2047 |
| 0.0661 | 5420 | 0.2026 |
| 0.0662 | 5430 | 0.2067 |
| 0.0663 | 5440 | 0.2053 |
| 0.0665 | 5450 | 0.2034 |
| 0.0666 | 5460 | 0.2032 |
| 0.0667 | 5470 | 0.2057 |
| 0.0668 | 5480 | 0.2066 |
| 0.0670 | 5490 | 0.2058 |
| 0.0671 | 5500 | 0.2051 |
| 0.0672 | 5510 | 0.2038 |
| 0.0673 | 5520 | 0.2023 |
| 0.0674 | 5530 | 0.2042 |
| 0.0676 | 5540 | 0.2028 |
| 0.0677 | 5550 | 0.2052 |
| 0.0678 | 5560 | 0.2082 |
| 0.0679 | 5570 | 0.2066 |
| 0.0680 | 5580 | 0.2053 |
| 0.0682 | 5590 | 0.2071 |
| 0.0683 | 5600 | 0.2077 |
| 0.0684 | 5610 | 0.2049 |
| 0.0685 | 5620 | 0.2036 |
| 0.0687 | 5630 | 0.2054 |
| 0.0688 | 5640 | 0.2024 |
| 0.0689 | 5650 | 0.204 |
| 0.0690 | 5660 | 0.2049 |
| 0.0691 | 5670 | 0.2043 |
| 0.0693 | 5680 | 0.2039 |
| 0.0694 | 5690 | 0.208 |
| 0.0695 | 5700 | 0.202 |
| 0.0696 | 5710 | 0.2031 |
| 0.0698 | 5720 | 0.2056 |
| 0.0699 | 5730 | 0.2022 |
| 0.07 | 5740 | 0.206 |
| 0.0701 | 5750 | 0.2055 |
| 0.0702 | 5760 | 0.204 |
| 0.0704 | 5770 | 0.2054 |
| 0.0705 | 5780 | 0.2044 |
| 0.0706 | 5790 | 0.206 |
| 0.0707 | 5800 | 0.2066 |
| 0.0709 | 5810 | 0.2054 |
| 0.0710 | 5820 | 0.2054 |
| 0.0711 | 5830 | 0.2073 |
| 0.0712 | 5840 | 0.2077 |
| 0.0713 | 5850 | 0.209 |
| 0.0715 | 5860 | 0.2083 |
| 0.0716 | 5870 | 0.2058 |
| 0.0717 | 5880 | 0.2053 |
| 0.0718 | 5890 | 0.2074 |
| 0.0720 | 5900 | 0.2057 |
| 0.0721 | 5910 | 0.2042 |
| 0.0722 | 5920 | 0.2076 |
| 0.0723 | 5930 | 0.2064 |
| 0.0724 | 5940 | 0.2062 |
| 0.0726 | 5950 | 0.2063 |
| 0.0727 | 5960 | 0.203 |
| 0.0728 | 5970 | 0.2046 |
| 0.0729 | 5980 | 0.2068 |
| 0.0730 | 5990 | 0.2037 |
| 0.0732 | 6000 | 0.2081 |
| 0.0733 | 6010 | 0.2066 |
| 0.0734 | 6020 | 0.2066 |
| 0.0735 | 6030 | 0.2044 |
| 0.0737 | 6040 | 0.2071 |
| 0.0738 | 6050 | 0.2025 |
| 0.0739 | 6060 | 0.2068 |
| 0.0740 | 6070 | 0.2064 |
| 0.0741 | 6080 | 0.2052 |
| 0.0743 | 6090 | 0.2041 |
| 0.0744 | 6100 | 0.2026 |
| 0.0745 | 6110 | 0.2047 |
| 0.0746 | 6120 | 0.2049 |
| 0.0748 | 6130 | 0.203 |
| 0.0749 | 6140 | 0.2071 |
| 0.075 | 6150 | 0.2022 |
| 0.0751 | 6160 | 0.2048 |
| 0.0752 | 6170 | 0.2045 |
| 0.0754 | 6180 | 0.2046 |
| 0.0755 | 6190 | 0.2063 |
| 0.0756 | 6200 | 0.2024 |
| 0.0757 | 6210 | 0.2048 |
| 0.0759 | 6220 | 0.2032 |
| 0.0760 | 6230 | 0.2078 |
| 0.0761 | 6240 | 0.2058 |
| 0.0762 | 6250 | 0.2034 |
| 0.0763 | 6260 | 0.2073 |
| 0.0765 | 6270 | 0.2043 |
| 0.0766 | 6280 | 0.2064 |
| 0.0767 | 6290 | 0.2012 |
| 0.0768 | 6300 | 0.2037 |
| 0.0770 | 6310 | 0.202 |
| 0.0771 | 6320 | 0.2049 |
| 0.0772 | 6330 | 0.206 |
| 0.0773 | 6340 | 0.2042 |
| 0.0774 | 6350 | 0.2041 |
| 0.0776 | 6360 | 0.2056 |
| 0.0777 | 6370 | 0.2059 |
| 0.0778 | 6380 | 0.2022 |
| 0.0779 | 6390 | 0.2072 |
| 0.0780 | 6400 | 0.2072 |
| 0.0782 | 6410 | 0.2055 |
| 0.0783 | 6420 | 0.2034 |
| 0.0784 | 6430 | 0.2021 |
| 0.0785 | 6440 | 0.2057 |
| 0.0787 | 6450 | 0.2044 |
| 0.0788 | 6460 | 0.2038 |
| 0.0789 | 6470 | 0.2024 |
| 0.0790 | 6480 | 0.204 |
| 0.0791 | 6490 | 0.2048 |
| 0.0793 | 6500 | 0.2045 |
| 0.0794 | 6510 | 0.2059 |
| 0.0795 | 6520 | 0.2053 |
| 0.0796 | 6530 | 0.2035 |
| 0.0798 | 6540 | 0.201 |
| 0.0799 | 6550 | 0.2046 |
| 0.08 | 6560 | 0.2059 |
| 0.0801 | 6570 | 0.2047 |
| 0.0802 | 6580 | 0.2067 |
| 0.0804 | 6590 | 0.2055 |
| 0.0805 | 6600 | 0.2075 |
| 0.0806 | 6610 | 0.204 |
| 0.0807 | 6620 | 0.2053 |
| 0.0809 | 6630 | 0.2056 |
| 0.0810 | 6640 | 0.2042 |
| 0.0811 | 6650 | 0.2054 |
| 0.0812 | 6660 | 0.2061 |
| 0.0813 | 6670 | 0.2022 |
| 0.0815 | 6680 | 0.2075 |
| 0.0816 | 6690 | 0.2066 |
| 0.0817 | 6700 | 0.2043 |
| 0.0818 | 6710 | 0.2047 |
| 0.0820 | 6720 | 0.2049 |
| 0.0821 | 6730 | 0.2045 |
| 0.0822 | 6740 | 0.2062 |
| 0.0823 | 6750 | 0.2067 |
| 0.0824 | 6760 | 0.2059 |
| 0.0826 | 6770 | 0.2052 |
| 0.0827 | 6780 | 0.2046 |
| 0.0828 | 6790 | 0.2041 |
| 0.0829 | 6800 | 0.208 |
| 0.0830 | 6810 | 0.2043 |
| 0.0832 | 6820 | 0.2045 |
| 0.0833 | 6830 | 0.2036 |
| 0.0834 | 6840 | 0.2074 |
| 0.0835 | 6850 | 0.2021 |
| 0.0837 | 6860 | 0.2048 |
| 0.0838 | 6870 | 0.2061 |
| 0.0839 | 6880 | 0.2013 |
| 0.0840 | 6890 | 0.2033 |
| 0.0841 | 6900 | 0.2028 |
| 0.0843 | 6910 | 0.2051 |
| 0.0844 | 6920 | 0.2065 |
| 0.0845 | 6930 | 0.2056 |
| 0.0846 | 6940 | 0.2066 |
| 0.0848 | 6950 | 0.2053 |
| 0.0849 | 6960 | 0.2043 |
| 0.085 | 6970 | 0.2078 |
| 0.0851 | 6980 | 0.2071 |
| 0.0852 | 6990 | 0.2058 |
| 0.0854 | 7000 | 0.2009 |
| 0.0855 | 7010 | 0.2067 |
| 0.0856 | 7020 | 0.204 |
| 0.0857 | 7030 | 0.2041 |
| 0.0859 | 7040 | 0.2042 |
| 0.0860 | 7050 | 0.2022 |
| 0.0861 | 7060 | 0.2052 |
| 0.0862 | 7070 | 0.204 |
| 0.0863 | 7080 | 0.2038 |
| 0.0865 | 7090 | 0.2043 |
| 0.0866 | 7100 | 0.2075 |
| 0.0867 | 7110 | 0.2043 |
| 0.0868 | 7120 | 0.2035 |
| 0.0870 | 7130 | 0.2041 |
| 0.0871 | 7140 | 0.2029 |
| 0.0872 | 7150 | 0.2033 |
| 0.0873 | 7160 | 0.2035 |
| 0.0874 | 7170 | 0.2013 |
| 0.0876 | 7180 | 0.2077 |
| 0.0877 | 7190 | 0.2014 |
| 0.0878 | 7200 | 0.2084 |
| 0.0879 | 7210 | 0.203 |
| 0.0880 | 7220 | 0.2044 |
| 0.0882 | 7230 | 0.2045 |
| 0.0883 | 7240 | 0.2023 |
| 0.0884 | 7250 | 0.2045 |
| 0.0885 | 7260 | 0.2041 |
| 0.0887 | 7270 | 0.204 |
| 0.0888 | 7280 | 0.2043 |
| 0.0889 | 7290 | 0.2017 |
| 0.0890 | 7300 | 0.2039 |
| 0.0891 | 7310 | 0.2072 |
| 0.0893 | 7320 | 0.2044 |
| 0.0894 | 7330 | 0.2029 |
| 0.0895 | 7340 | 0.2031 |
| 0.0896 | 7350 | 0.2033 |
| 0.0898 | 7360 | 0.2066 |
| 0.0899 | 7370 | 0.2025 |
| 0.09 | 7380 | 0.2035 |
| 0.0901 | 7390 | 0.205 |
| 0.0902 | 7400 | 0.2045 |
| 0.0904 | 7410 | 0.2053 |
| 0.0905 | 7420 | 0.2052 |
| 0.0906 | 7430 | 0.2028 |
| 0.0907 | 7440 | 0.2052 |
| 0.0909 | 7450 | 0.2036 |
| 0.0910 | 7460 | 0.2069 |
| 0.0911 | 7470 | 0.2027 |
| 0.0912 | 7480 | 0.2056 |
| 0.0913 | 7490 | 0.2033 |
| 0.0915 | 7500 | 0.2053 |
| 0.0916 | 7510 | 0.2067 |
| 0.0917 | 7520 | 0.2006 |
| 0.0918 | 7530 | 0.2043 |
| 0.0920 | 7540 | 0.205 |
| 0.0921 | 7550 | 0.2045 |
| 0.0922 | 7560 | 0.2045 |
| 0.0923 | 7570 | 0.204 |
| 0.0924 | 7580 | 0.2042 |
| 0.0926 | 7590 | 0.2066 |
| 0.0927 | 7600 | 0.2062 |
| 0.0928 | 7610 | 0.2063 |
| 0.0929 | 7620 | 0.2052 |
| 0.0930 | 7630 | 0.2048 |
| 0.0932 | 7640 | 0.2052 |
| 0.0933 | 7650 | 0.2062 |
| 0.0934 | 7660 | 0.2055 |
| 0.0935 | 7670 | 0.2039 |
| 0.0937 | 7680 | 0.2034 |
| 0.0938 | 7690 | 0.2081 |
| 0.0939 | 7700 | 0.202 |
| 0.0940 | 7710 | 0.2044 |
| 0.0941 | 7720 | 0.204 |
| 0.0943 | 7730 | 0.2059 |
| 0.0944 | 7740 | 0.207 |
| 0.0945 | 7750 | 0.2029 |
| 0.0946 | 7760 | 0.2028 |
| 0.0948 | 7770 | 0.2042 |
| 0.0949 | 7780 | 0.2023 |
| 0.095 | 7790 | 0.2037 |
| 0.0951 | 7800 | 0.2031 |
| 0.0952 | 7810 | 0.2035 |
| 0.0954 | 7820 | 0.2054 |
| 0.0955 | 7830 | 0.2051 |
| 0.0956 | 7840 | 0.2036 |
| 0.0957 | 7850 | 0.2049 |
| 0.0959 | 7860 | 0.2021 |
| 0.0960 | 7870 | 0.2019 |
| 0.0961 | 7880 | 0.2031 |
| 0.0962 | 7890 | 0.2065 |
| 0.0963 | 7900 | 0.2049 |
| 0.0965 | 7910 | 0.2043 |
| 0.0966 | 7920 | 0.204 |
| 0.0967 | 7930 | 0.2045 |
| 0.0968 | 7940 | 0.2051 |
| 0.0970 | 7950 | 0.2026 |
| 0.0971 | 7960 | 0.2033 |
| 0.0972 | 7970 | 0.1998 |
| 0.0973 | 7980 | 0.2049 |
| 0.0974 | 7990 | 0.2035 |
| 0.0976 | 8000 | 0.2039 |
| 0.0977 | 8010 | 0.2056 |
| 0.0978 | 8020 | 0.2048 |
| 0.0979 | 8030 | 0.2022 |
| 0.0980 | 8040 | 0.2043 |
| 0.0982 | 8050 | 0.2046 |
| 0.0983 | 8060 | 0.2035 |
| 0.0984 | 8070 | 0.2031 |
| 0.0985 | 8080 | 0.2066 |
| 0.0987 | 8090 | 0.2046 |
| 0.0988 | 8100 | 0.204 |
| 0.0989 | 8110 | 0.2036 |
| 0.0990 | 8120 | 0.202 |
| 0.0991 | 8130 | 0.2008 |
| 0.0993 | 8140 | 0.2014 |
| 0.0994 | 8150 | 0.2019 |
| 0.0995 | 8160 | 0.2016 |
| 0.0996 | 8170 | 0.2027 |
| 0.0998 | 8180 | 0.2007 |
| 0.0999 | 8190 | 0.2004 |
| 0.1 | 8200 | 0.203 |
| 0.1001 | 8210 | 0.2042 |
| 0.1002 | 8220 | 0.2029 |
| 0.1004 | 8230 | 0.2053 |
| 0.1005 | 8240 | 0.2053 |
| 0.1006 | 8250 | 0.2041 |
| 0.1007 | 8260 | 0.2038 |
| 0.1009 | 8270 | 0.2044 |
| 0.1010 | 8280 | 0.2051 |
| 0.1011 | 8290 | 0.2054 |
| 0.1012 | 8300 | 0.2052 |
| 0.1013 | 8310 | 0.2026 |
| 0.1015 | 8320 | 0.2044 |
| 0.1016 | 8330 | 0.2058 |
| 0.1017 | 8340 | 0.2054 |
| 0.1018 | 8350 | 0.203 |
| 0.1020 | 8360 | 0.2044 |
| 0.1021 | 8370 | 0.2032 |
| 0.1022 | 8380 | 0.2034 |
| 0.1023 | 8390 | 0.2041 |
| 0.1024 | 8400 | 0.2048 |
| 0.1026 | 8410 | 0.2061 |
| 0.1027 | 8420 | 0.2043 |
| 0.1028 | 8430 | 0.2062 |
| 0.1029 | 8440 | 0.2052 |
| 0.1030 | 8450 | 0.2001 |
| 0.1032 | 8460 | 0.2066 |
| 0.1033 | 8470 | 0.2044 |
| 0.1034 | 8480 | 0.2036 |
| 0.1035 | 8490 | 0.2072 |
| 0.1037 | 8500 | 0.2043 |
| 0.1038 | 8510 | 0.2049 |
| 0.1039 | 8520 | 0.2034 |
| 0.1040 | 8530 | 0.2027 |
| 0.1041 | 8540 | 0.2045 |
| 0.1043 | 8550 | 0.203 |
| 0.1044 | 8560 | 0.204 |
| 0.1045 | 8570 | 0.2042 |
| 0.1046 | 8580 | 0.2041 |
| 0.1048 | 8590 | 0.2066 |
| 0.1049 | 8600 | 0.2053 |
| 0.105 | 8610 | 0.2048 |
| 0.1051 | 8620 | 0.2059 |
| 0.1052 | 8630 | 0.2025 |
| 0.1054 | 8640 | 0.2045 |
| 0.1055 | 8650 | 0.2037 |
| 0.1056 | 8660 | 0.2043 |
| 0.1057 | 8670 | 0.2037 |
| 0.1059 | 8680 | 0.205 |
| 0.1060 | 8690 | 0.2056 |
| 0.1061 | 8700 | 0.2045 |
| 0.1062 | 8710 | 0.207 |
| 0.1063 | 8720 | 0.2053 |
| 0.1065 | 8730 | 0.2054 |
| 0.1066 | 8740 | 0.2035 |
| 0.1067 | 8750 | 0.2049 |
| 0.1068 | 8760 | 0.2028 |
| 0.1070 | 8770 | 0.2027 |
| 0.1071 | 8780 | 0.2048 |
| 0.1072 | 8790 | 0.2026 |
| 0.1073 | 8800 | 0.2039 |
| 0.1074 | 8810 | 0.2054 |
| 0.1076 | 8820 | 0.2051 |
| 0.1077 | 8830 | 0.2034 |
| 0.1078 | 8840 | 0.2047 |
| 0.1079 | 8850 | 0.2024 |
| 0.1080 | 8860 | 0.2031 |
| 0.1082 | 8870 | 0.2024 |
| 0.1083 | 8880 | 0.2035 |
| 0.1084 | 8890 | 0.2021 |
| 0.1085 | 8900 | 0.2035 |
| 0.1087 | 8910 | 0.2039 |
| 0.1088 | 8920 | 0.2018 |
| 0.1089 | 8930 | 0.2023 |
| 0.1090 | 8940 | 0.2013 |
| 0.1091 | 8950 | 0.2038 |
| 0.1093 | 8960 | 0.2041 |
| 0.1094 | 8970 | 0.2021 |
| 0.1095 | 8980 | 0.2039 |
| 0.1096 | 8990 | 0.2026 |
| 0.1098 | 9000 | 0.2037 |
| 0.1099 | 9010 | 0.2029 |
| 0.11 | 9020 | 0.2058 |
| 0.1101 | 9030 | 0.2065 |
| 0.1102 | 9040 | 0.2043 |
| 0.1104 | 9050 | 0.2051 |
| 0.1105 | 9060 | 0.2028 |
| 0.1106 | 9070 | 0.2039 |
| 0.1107 | 9080 | 0.2032 |
| 0.1109 | 9090 | 0.2037 |
| 0.1110 | 9100 | 0.2042 |
| 0.1111 | 9110 | 0.2041 |
| 0.1112 | 9120 | 0.2042 |
| 0.1113 | 9130 | 0.2042 |
| 0.1115 | 9140 | 0.2052 |
| 0.1116 | 9150 | 0.2075 |
| 0.1117 | 9160 | 0.2048 |
| 0.1118 | 9170 | 0.2055 |
| 0.1120 | 9180 | 0.2031 |
| 0.1121 | 9190 | 0.2062 |
| 0.1122 | 9200 | 0.2053 |
| 0.1123 | 9210 | 0.206 |
| 0.1124 | 9220 | 0.2058 |
| 0.1126 | 9230 | 0.2051 |
| 0.1127 | 9240 | 0.2061 |
| 0.1128 | 9250 | 0.2029 |
| 0.1129 | 9260 | 0.2046 |
| 0.1130 | 9270 | 0.2051 |
| 0.1132 | 9280 | 0.2048 |
| 0.1133 | 9290 | 0.2051 |
| 0.1134 | 9300 | 0.2048 |
| 0.1135 | 9310 | 0.2026 |
| 0.1137 | 9320 | 0.2047 |
| 0.1138 | 9330 | 0.2015 |
| 0.1139 | 9340 | 0.2016 |
| 0.1140 | 9350 | 0.2043 |
| 0.1141 | 9360 | 0.2045 |
| 0.1143 | 9370 | 0.2027 |
| 0.1144 | 9380 | 0.2054 |
| 0.1145 | 9390 | 0.2041 |
| 0.1146 | 9400 | 0.2041 |
| 0.1148 | 9410 | 0.2043 |
| 0.1149 | 9420 | 0.2047 |
| 0.115 | 9430 | 0.2054 |
| 0.1151 | 9440 | 0.2053 |
| 0.1152 | 9450 | 0.2028 |
| 0.1154 | 9460 | 0.2024 |
| 0.1155 | 9470 | 0.204 |
| 0.1156 | 9480 | 0.2034 |
| 0.1157 | 9490 | 0.2018 |
| 0.1159 | 9500 | 0.2051 |
| 0.1160 | 9510 | 0.2072 |
| 0.1161 | 9520 | 0.2049 |
| 0.1162 | 9530 | 0.2053 |
| 0.1163 | 9540 | 0.2044 |
| 0.1165 | 9550 | 0.2028 |
| 0.1166 | 9560 | 0.2054 |
| 0.1167 | 9570 | 0.2038 |
| 0.1168 | 9580 | 0.2048 |
| 0.1170 | 9590 | 0.2038 |
| 0.1171 | 9600 | 0.2025 |
| 0.1172 | 9610 | 0.205 |
| 0.1173 | 9620 | 0.2044 |
| 0.1174 | 9630 | 0.2063 |
| 0.1176 | 9640 | 0.2035 |
| 0.1177 | 9650 | 0.2027 |
| 0.1178 | 9660 | 0.2041 |
| 0.1179 | 9670 | 0.2022 |
| 0.1180 | 9680 | 0.2038 |
| 0.1182 | 9690 | 0.204 |
| 0.1183 | 9700 | 0.2028 |
| 0.1184 | 9710 | 0.2065 |
| 0.1185 | 9720 | 0.2052 |
| 0.1187 | 9730 | 0.2018 |
| 0.1188 | 9740 | 0.2045 |
| 0.1189 | 9750 | 0.2028 |
| 0.1190 | 9760 | 0.203 |
| 0.1191 | 9770 | 0.2048 |
| 0.1193 | 9780 | 0.2023 |
| 0.1194 | 9790 | 0.2038 |
| 0.1195 | 9800 | 0.2018 |
| 0.1196 | 9810 | 0.2037 |
| 0.1198 | 9820 | 0.2033 |
| 0.1199 | 9830 | 0.2021 |
| 0.12 | 9840 | 0.2037 |
| 0.1201 | 9850 | 0.2041 |
| 0.1202 | 9860 | 0.2038 |
| 0.1204 | 9870 | 0.2027 |
| 0.1205 | 9880 | 0.2024 |
| 0.1206 | 9890 | 0.2022 |
| 0.1207 | 9900 | 0.2038 |
| 0.1209 | 9910 | 0.2036 |
| 0.1210 | 9920 | 0.2055 |
| 0.1211 | 9930 | 0.2019 |
| 0.1212 | 9940 | 0.2051 |
| 0.1213 | 9950 | 0.2014 |
| 0.1215 | 9960 | 0.2053 |
| 0.1216 | 9970 | 0.2035 |
| 0.1217 | 9980 | 0.2054 |
| 0.1218 | 9990 | 0.2048 |
| 0.1220 | 10000 | 0.2059 |
| 0.1221 | 10010 | 0.2019 |
| 0.1222 | 10020 | 0.2036 |
| 0.1223 | 10030 | 0.2045 |
| 0.1224 | 10040 | 0.2037 |
| 0.1226 | 10050 | 0.2066 |
| 0.1227 | 10060 | 0.2057 |
| 0.1228 | 10070 | 0.2065 |
| 0.1229 | 10080 | 0.203 |
| 0.1230 | 10090 | 0.2028 |
| 0.1232 | 10100 | 0.2031 |
| 0.1233 | 10110 | 0.2031 |
| 0.1234 | 10120 | 0.2035 |
| 0.1235 | 10130 | 0.2034 |
| 0.1237 | 10140 | 0.2029 |
| 0.1238 | 10150 | 0.202 |
| 0.1239 | 10160 | 0.2057 |
| 0.1240 | 10170 | 0.2008 |
| 0.1241 | 10180 | 0.2044 |
| 0.1243 | 10190 | 0.2049 |
| 0.1244 | 10200 | 0.2037 |
| 0.1245 | 10210 | 0.2054 |
| 0.1246 | 10220 | 0.2016 |
| 0.1248 | 10230 | 0.2027 |
| 0.1249 | 10240 | 0.2049 |
| 0.125 | 10250 | 0.2023 |
| 0.1251 | 10260 | 0.2025 |
| 0.1252 | 10270 | 0.2032 |
| 0.1254 | 10280 | 0.2023 |
| 0.1255 | 10290 | 0.2025 |
| 0.1256 | 10300 | 0.2025 |
| 0.1257 | 10310 | 0.2044 |
| 0.1259 | 10320 | 0.2018 |
| 0.1260 | 10330 | 0.2072 |
| 0.1261 | 10340 | 0.2039 |
| 0.1262 | 10350 | 0.2066 |
| 0.1263 | 10360 | 0.2036 |
| 0.1265 | 10370 | 0.2029 |
| 0.1266 | 10380 | 0.2029 |
| 0.1267 | 10390 | 0.2053 |
| 0.1268 | 10400 | 0.2019 |
| 0.1270 | 10410 | 0.2071 |
| 0.1271 | 10420 | 0.2026 |
| 0.1272 | 10430 | 0.2017 |
| 0.1273 | 10440 | 0.2024 |
| 0.1274 | 10450 | 0.2026 |
| 0.1276 | 10460 | 0.2029 |
| 0.1277 | 10470 | 0.2044 |
| 0.1278 | 10480 | 0.2023 |
| 0.1279 | 10490 | 0.2063 |
| 0.1280 | 10500 | 0.2019 |
| 0.1282 | 10510 | 0.2033 |
| 0.1283 | 10520 | 0.2035 |
| 0.1284 | 10530 | 0.204 |
| 0.1285 | 10540 | 0.2021 |
| 0.1287 | 10550 | 0.2031 |
| 0.1288 | 10560 | 0.2052 |
| 0.1289 | 10570 | 0.2021 |
| 0.1290 | 10580 | 0.2027 |
| 0.1291 | 10590 | 0.2038 |
| 0.1293 | 10600 | 0.203 |
| 0.1294 | 10610 | 0.2042 |
| 0.1295 | 10620 | 0.2018 |
| 0.1296 | 10630 | 0.2017 |
| 0.1298 | 10640 | 0.2032 |
| 0.1299 | 10650 | 0.2035 |
| 0.13 | 10660 | 0.2026 |
| 0.1301 | 10670 | 0.2052 |
| 0.1302 | 10680 | 0.2044 |
| 0.1304 | 10690 | 0.2028 |
| 0.1305 | 10700 | 0.2033 |
| 0.1306 | 10710 | 0.2031 |
| 0.1307 | 10720 | 0.2048 |
| 0.1309 | 10730 | 0.2024 |
| 0.1310 | 10740 | 0.205 |
| 0.1311 | 10750 | 0.2031 |
| 0.1312 | 10760 | 0.2031 |
| 0.1313 | 10770 | 0.2019 |
| 0.1315 | 10780 | 0.2051 |
| 0.1316 | 10790 | 0.2036 |
| 0.1317 | 10800 | 0.2032 |
| 0.1318 | 10810 | 0.2046 |
| 0.1320 | 10820 | 0.2023 |
| 0.1321 | 10830 | 0.2031 |
| 0.1322 | 10840 | 0.2036 |
| 0.1323 | 10850 | 0.2055 |
| 0.1324 | 10860 | 0.2059 |
| 0.1326 | 10870 | 0.2034 |
| 0.1327 | 10880 | 0.2063 |
| 0.1328 | 10890 | 0.2031 |
| 0.1329 | 10900 | 0.201 |
| 0.1330 | 10910 | 0.2049 |
| 0.1332 | 10920 | 0.2036 |
| 0.1333 | 10930 | 0.2042 |
| 0.1334 | 10940 | 0.2018 |
| 0.1335 | 10950 | 0.2045 |
| 0.1337 | 10960 | 0.2038 |
| 0.1338 | 10970 | 0.2035 |
| 0.1339 | 10980 | 0.2023 |
| 0.1340 | 10990 | 0.206 |
| 0.1341 | 11000 | 0.2048 |
| 0.1343 | 11010 | 0.2057 |
| 0.1344 | 11020 | 0.2049 |
| 0.1345 | 11030 | 0.2018 |
| 0.1346 | 11040 | 0.2032 |
| 0.1348 | 11050 | 0.2053 |
| 0.1349 | 11060 | 0.2046 |
| 0.135 | 11070 | 0.2033 |
| 0.1351 | 11080 | 0.2021 |
| 0.1352 | 11090 | 0.2021 |
| 0.1354 | 11100 | 0.2019 |
| 0.1355 | 11110 | 0.2039 |
| 0.1356 | 11120 | 0.2067 |
| 0.1357 | 11130 | 0.2015 |
| 0.1359 | 11140 | 0.2037 |
| 0.1360 | 11150 | 0.202 |
| 0.1361 | 11160 | 0.2034 |
| 0.1362 | 11170 | 0.2046 |
| 0.1363 | 11180 | 0.202 |
| 0.1365 | 11190 | 0.2025 |
| 0.1366 | 11200 | 0.2063 |
| 0.1367 | 11210 | 0.2028 |
| 0.1368 | 11220 | 0.2025 |
| 0.1370 | 11230 | 0.2027 |
| 0.1371 | 11240 | 0.2042 |
| 0.1372 | 11250 | 0.2045 |
| 0.1373 | 11260 | 0.2052 |
| 0.1374 | 11270 | 0.2039 |
| 0.1376 | 11280 | 0.2041 |
| 0.1377 | 11290 | 0.2035 |
| 0.1378 | 11300 | 0.2018 |
| 0.1379 | 11310 | 0.2036 |
| 0.1380 | 11320 | 0.2044 |
| 0.1382 | 11330 | 0.2036 |
| 0.1383 | 11340 | 0.2061 |
| 0.1384 | 11350 | 0.2028 |
| 0.1385 | 11360 | 0.2032 |
| 0.1387 | 11370 | 0.2031 |
| 0.1388 | 11380 | 0.2019 |
| 0.1389 | 11390 | 0.2026 |
| 0.1390 | 11400 | 0.2028 |
| 0.1391 | 11410 | 0.2038 |
| 0.1393 | 11420 | 0.203 |
| 0.1394 | 11430 | 0.2017 |
| 0.1395 | 11440 | 0.2047 |
| 0.1396 | 11450 | 0.201 |
| 0.1398 | 11460 | 0.2036 |
| 0.1399 | 11470 | 0.2033 |
| 0.14 | 11480 | 0.2036 |
| 0.1401 | 11490 | 0.2032 |
| 0.1402 | 11500 | 0.2025 |
| 0.1404 | 11510 | 0.2009 |
| 0.1405 | 11520 | 0.2015 |
| 0.1406 | 11530 | 0.203 |
| 0.1407 | 11540 | 0.2036 |
| 0.1409 | 11550 | 0.2047 |
| 0.1410 | 11560 | 0.2039 |
| 0.1411 | 11570 | 0.2035 |
| 0.1412 | 11580 | 0.2036 |
| 0.1413 | 11590 | 0.2017 |
| 0.1415 | 11600 | 0.2026 |
| 0.1416 | 11610 | 0.2021 |
| 0.1417 | 11620 | 0.2061 |
| 0.1418 | 11630 | 0.2029 |
| 0.1420 | 11640 | 0.2024 |
| 0.1421 | 11650 | 0.2051 |
| 0.1422 | 11660 | 0.2039 |
| 0.1423 | 11670 | 0.2047 |
| 0.1424 | 11680 | 0.2027 |
| 0.1426 | 11690 | 0.2036 |
| 0.1427 | 11700 | 0.2045 |
| 0.1428 | 11710 | 0.2026 |
| 0.1429 | 11720 | 0.2045 |
| 0.1430 | 11730 | 0.2051 |
| 0.1432 | 11740 | 0.2046 |
| 0.1433 | 11750 | 0.2036 |
| 0.1434 | 11760 | 0.2031 |
| 0.1435 | 11770 | 0.2026 |
| 0.1437 | 11780 | 0.2023 |
| 0.1438 | 11790 | 0.2026 |
| 0.1439 | 11800 | 0.2038 |
| 0.1440 | 11810 | 0.2042 |
| 0.1441 | 11820 | 0.2012 |
| 0.1443 | 11830 | 0.2026 |
| 0.1444 | 11840 | 0.2027 |
| 0.1445 | 11850 | 0.203 |
| 0.1446 | 11860 | 0.2036 |
| 0.1448 | 11870 | 0.2035 |
| 0.1449 | 11880 | 0.2032 |
| 0.145 | 11890 | 0.2051 |
| 0.1451 | 11900 | 0.2033 |
| 0.1452 | 11910 | 0.2028 |
| 0.1454 | 11920 | 0.2037 |
| 0.1455 | 11930 | 0.2016 |
| 0.1456 | 11940 | 0.204 |
| 0.1457 | 11950 | 0.2031 |
| 0.1459 | 11960 | 0.2038 |
| 0.1460 | 11970 | 0.203 |
| 0.1461 | 11980 | 0.2017 |
| 0.1462 | 11990 | 0.2048 |
| 0.1463 | 12000 | 0.2025 |
| 0.1465 | 12010 | 0.2026 |
| 0.1466 | 12020 | 0.2042 |
| 0.1467 | 12030 | 0.2034 |
| 0.1468 | 12040 | 0.2032 |
| 0.1470 | 12050 | 0.2029 |
| 0.1471 | 12060 | 0.2034 |
| 0.1472 | 12070 | 0.2017 |
| 0.1473 | 12080 | 0.2032 |
| 0.1474 | 12090 | 0.2011 |
| 0.1476 | 12100 | 0.2041 |
| 0.1477 | 12110 | 0.2034 |
| 0.1478 | 12120 | 0.2028 |
| 0.1479 | 12130 | 0.2048 |
| 0.1480 | 12140 | 0.2055 |
| 0.1482 | 12150 | 0.204 |
| 0.1483 | 12160 | 0.2034 |
| 0.1484 | 12170 | 0.2038 |
| 0.1485 | 12180 | 0.2036 |
| 0.1487 | 12190 | 0.2034 |
| 0.1488 | 12200 | 0.2039 |
| 0.1489 | 12210 | 0.2043 |
| 0.1490 | 12220 | 0.2009 |
| 0.1491 | 12230 | 0.2009 |
| 0.1493 | 12240 | 0.2034 |
| 0.1494 | 12250 | 0.2033 |
| 0.1495 | 12260 | 0.2055 |
| 0.1496 | 12270 | 0.2022 |
| 0.1498 | 12280 | 0.2035 |
| 0.1499 | 12290 | 0.2024 |
| 0.15 | 12300 | 0.2041 |
| 0.1501 | 12310 | 0.2041 |
| 0.1502 | 12320 | 0.2056 |
| 0.1504 | 12330 | 0.2023 |
| 0.1505 | 12340 | 0.2027 |
| 0.1506 | 12350 | 0.2026 |
| 0.1507 | 12360 | 0.2038 |
| 0.1509 | 12370 | 0.2031 |
| 0.1510 | 12380 | 0.2044 |
| 0.1511 | 12390 | 0.203 |
| 0.1512 | 12400 | 0.2032 |
| 0.1513 | 12410 | 0.2029 |
| 0.1515 | 12420 | 0.2044 |
| 0.1516 | 12430 | 0.2036 |
| 0.1517 | 12440 | 0.2043 |
| 0.1518 | 12450 | 0.2025 |
| 0.1520 | 12460 | 0.2041 |
| 0.1521 | 12470 | 0.203 |
| 0.1522 | 12480 | 0.2024 |
| 0.1523 | 12490 | 0.2034 |
| 0.1524 | 12500 | 0.2041 |
| 0.1526 | 12510 | 0.2019 |
| 0.1527 | 12520 | 0.2019 |
| 0.1528 | 12530 | 0.205 |
| 0.1529 | 12540 | 0.2041 |
| 0.1530 | 12550 | 0.2031 |
| 0.1532 | 12560 | 0.2041 |
| 0.1533 | 12570 | 0.2021 |
| 0.1534 | 12580 | 0.2029 |
| 0.1535 | 12590 | 0.2023 |
| 0.1537 | 12600 | 0.2002 |
| 0.1538 | 12610 | 0.2038 |
| 0.1539 | 12620 | 0.2041 |
| 0.1540 | 12630 | 0.2037 |
| 0.1541 | 12640 | 0.2059 |
| 0.1543 | 12650 | 0.2028 |
| 0.1544 | 12660 | 0.2016 |
| 0.1545 | 12670 | 0.2015 |
| 0.1546 | 12680 | 0.2032 |
| 0.1548 | 12690 | 0.2035 |
| 0.1549 | 12700 | 0.2022 |
| 0.155 | 12710 | 0.2015 |
| 0.1551 | 12720 | 0.2047 |
| 0.1552 | 12730 | 0.2045 |
| 0.1554 | 12740 | 0.2027 |
| 0.1555 | 12750 | 0.2029 |
| 0.1556 | 12760 | 0.2049 |
| 0.1557 | 12770 | 0.2012 |
| 0.1559 | 12780 | 0.2046 |
| 0.1560 | 12790 | 0.2028 |
| 0.1561 | 12800 | 0.2016 |
| 0.1562 | 12810 | 0.2017 |
| 0.1563 | 12820 | 0.2042 |
| 0.1565 | 12830 | 0.2035 |
| 0.1566 | 12840 | 0.2009 |
| 0.1567 | 12850 | 0.2037 |
| 0.1568 | 12860 | 0.2025 |
| 0.1570 | 12870 | 0.2013 |
| 0.1571 | 12880 | 0.2052 |
| 0.1572 | 12890 | 0.2018 |
| 0.1573 | 12900 | 0.2036 |
| 0.1574 | 12910 | 0.2021 |
| 0.1576 | 12920 | 0.2029 |
| 0.1577 | 12930 | 0.2007 |
| 0.1578 | 12940 | 0.2015 |
| 0.1579 | 12950 | 0.2017 |
| 0.1580 | 12960 | 0.2024 |
| 0.1582 | 12970 | 0.201 |
| 0.1583 | 12980 | 0.2007 |
| 0.1584 | 12990 | 0.2013 |
| 0.1585 | 13000 | 0.2015 |
| 0.1587 | 13010 | 0.2051 |
| 0.1588 | 13020 | 0.2028 |
| 0.1589 | 13030 | 0.2028 |
| 0.1590 | 13040 | 0.2027 |
| 0.1591 | 13050 | 0.2022 |
| 0.1593 | 13060 | 0.2034 |
| 0.1594 | 13070 | 0.2015 |
| 0.1595 | 13080 | 0.2005 |
| 0.1596 | 13090 | 0.2027 |
| 0.1598 | 13100 | 0.2021 |
| 0.1599 | 13110 | 0.2039 |
| 0.16 | 13120 | 0.204 |
| 0.1601 | 13130 | 0.2015 |
| 0.1602 | 13140 | 0.203 |
| 0.1604 | 13150 | 0.2036 |
| 0.1605 | 13160 | 0.2036 |
| 0.1606 | 13170 | 0.2036 |
| 0.1607 | 13180 | 0.2043 |
| 0.1609 | 13190 | 0.2046 |
| 0.1610 | 13200 | 0.2047 |
| 0.1611 | 13210 | 0.2044 |
| 0.1612 | 13220 | 0.2036 |
| 0.1613 | 13230 | 0.2042 |
| 0.1615 | 13240 | 0.2037 |
| 0.1616 | 13250 | 0.2007 |
| 0.1617 | 13260 | 0.2022 |
| 0.1618 | 13270 | 0.2017 |
| 0.1620 | 13280 | 0.2035 |
| 0.1621 | 13290 | 0.2032 |
| 0.1622 | 13300 | 0.203 |
| 0.1623 | 13310 | 0.2053 |
| 0.1624 | 13320 | 0.2029 |
| 0.1626 | 13330 | 0.1998 |
| 0.1627 | 13340 | 0.2043 |
| 0.1628 | 13350 | 0.2028 |
| 0.1629 | 13360 | 0.2023 |
| 0.1630 | 13370 | 0.2026 |
| 0.1632 | 13380 | 0.2043 |
| 0.1633 | 13390 | 0.2033 |
| 0.1634 | 13400 | 0.2017 |
| 0.1635 | 13410 | 0.203 |
| 0.1637 | 13420 | 0.2007 |
| 0.1638 | 13430 | 0.2023 |
| 0.1639 | 13440 | 0.2036 |
| 0.1640 | 13450 | 0.2021 |
| 0.1641 | 13460 | 0.2036 |
| 0.1643 | 13470 | 0.2023 |
| 0.1644 | 13480 | 0.2029 |
| 0.1645 | 13490 | 0.2024 |
| 0.1646 | 13500 | 0.2034 |
| 0.1648 | 13510 | 0.2024 |
| 0.1649 | 13520 | 0.2018 |
| 0.165 | 13530 | 0.2014 |
| 0.1651 | 13540 | 0.2045 |
| 0.1652 | 13550 | 0.2013 |
| 0.1654 | 13560 | 0.2003 |
| 0.1655 | 13570 | 0.2025 |
| 0.1656 | 13580 | 0.2025 |
| 0.1657 | 13590 | 0.2011 |
| 0.1659 | 13600 | 0.2023 |
| 0.1660 | 13610 | 0.2019 |
| 0.1661 | 13620 | 0.2036 |
| 0.1662 | 13630 | 0.2024 |
| 0.1663 | 13640 | 0.203 |
| 0.1665 | 13650 | 0.2037 |
| 0.1666 | 13660 | 0.2041 |
| 0.1667 | 13670 | 0.204 |
| 0.1668 | 13680 | 0.2037 |
| 0.1670 | 13690 | 0.204 |
| 0.1671 | 13700 | 0.2026 |
| 0.1672 | 13710 | 0.2037 |
| 0.1673 | 13720 | 0.2026 |
| 0.1674 | 13730 | 0.2044 |
| 0.1676 | 13740 | 0.202 |
| 0.1677 | 13750 | 0.2032 |
| 0.1678 | 13760 | 0.2019 |
| 0.1679 | 13770 | 0.2024 |
| 0.1680 | 13780 | 0.2025 |
| 0.1682 | 13790 | 0.2041 |
| 0.1683 | 13800 | 0.2019 |
| 0.1684 | 13810 | 0.2037 |
| 0.1685 | 13820 | 0.2013 |
| 0.1687 | 13830 | 0.2025 |
| 0.1688 | 13840 | 0.2022 |
| 0.1689 | 13850 | 0.2006 |
| 0.1690 | 13860 | 0.2009 |
| 0.1691 | 13870 | 0.2006 |
| 0.1693 | 13880 | 0.2018 |
| 0.1694 | 13890 | 0.2025 |
| 0.1695 | 13900 | 0.2027 |
| 0.1696 | 13910 | 0.202 |
| 0.1698 | 13920 | 0.2029 |
| 0.1699 | 13930 | 0.2018 |
| 0.17 | 13940 | 0.2034 |
| 0.1701 | 13950 | 0.2036 |
| 0.1702 | 13960 | 0.2028 |
| 0.1704 | 13970 | 0.2023 |
| 0.1705 | 13980 | 0.2015 |
| 0.1706 | 13990 | 0.2031 |
| 0.1707 | 14000 | 0.2037 |
| 0.1709 | 14010 | 0.2019 |
| 0.1710 | 14020 | 0.2013 |
| 0.1711 | 14030 | 0.2039 |
| 0.1712 | 14040 | 0.202 |
| 0.1713 | 14050 | 0.2032 |
| 0.1715 | 14060 | 0.2037 |
| 0.1716 | 14070 | 0.2016 |
| 0.1717 | 14080 | 0.2036 |
| 0.1718 | 14090 | 0.202 |
| 0.1720 | 14100 | 0.2028 |
| 0.1721 | 14110 | 0.2012 |
| 0.1722 | 14120 | 0.2033 |
| 0.1723 | 14130 | 0.2026 |
| 0.1724 | 14140 | 0.2035 |
| 0.1726 | 14150 | 0.2017 |
| 0.1727 | 14160 | 0.2024 |
| 0.1728 | 14170 | 0.2035 |
| 0.1729 | 14180 | 0.2042 |
| 0.1730 | 14190 | 0.2017 |
| 0.1732 | 14200 | 0.2042 |
| 0.1733 | 14210 | 0.205 |
| 0.1734 | 14220 | 0.2022 |
| 0.1735 | 14230 | 0.2022 |
| 0.1737 | 14240 | 0.2029 |
| 0.1738 | 14250 | 0.202 |
| 0.1739 | 14260 | 0.2011 |
| 0.1740 | 14270 | 0.2003 |
| 0.1741 | 14280 | 0.2014 |
| 0.1743 | 14290 | 0.204 |
| 0.1744 | 14300 | 0.2032 |
| 0.1745 | 14310 | 0.2033 |
| 0.1746 | 14320 | 0.2025 |
| 0.1748 | 14330 | 0.2014 |
| 0.1749 | 14340 | 0.2014 |
| 0.175 | 14350 | 0.2032 |
| 0.1751 | 14360 | 0.2026 |
| 0.1752 | 14370 | 0.2022 |
| 0.1754 | 14380 | 0.2016 |
| 0.1755 | 14390 | 0.2029 |
| 0.1756 | 14400 | 0.2046 |
| 0.1757 | 14410 | 0.2022 |
| 0.1759 | 14420 | 0.2049 |
| 0.1760 | 14430 | 0.2006 |
| 0.1761 | 14440 | 0.2037 |
| 0.1762 | 14450 | 0.2039 |
| 0.1763 | 14460 | 0.2029 |
| 0.1765 | 14470 | 0.2035 |
| 0.1766 | 14480 | 0.203 |
| 0.1767 | 14490 | 0.2042 |
| 0.1768 | 14500 | 0.2026 |
| 0.1770 | 14510 | 0.2036 |
| 0.1771 | 14520 | 0.2017 |
| 0.1772 | 14530 | 0.2023 |
| 0.1773 | 14540 | 0.2004 |
| 0.1774 | 14550 | 0.2036 |
| 0.1776 | 14560 | 0.2023 |
| 0.1777 | 14570 | 0.2037 |
| 0.1778 | 14580 | 0.2019 |
| 0.1779 | 14590 | 0.2025 |
| 0.1780 | 14600 | 0.2017 |
| 0.1782 | 14610 | 0.2014 |
| 0.1783 | 14620 | 0.2035 |
| 0.1784 | 14630 | 0.2039 |
| 0.1785 | 14640 | 0.2028 |
| 0.1787 | 14650 | 0.2024 |
| 0.1788 | 14660 | 0.2045 |
| 0.1789 | 14670 | 0.2018 |
| 0.1790 | 14680 | 0.2011 |
| 0.1791 | 14690 | 0.2026 |
| 0.1793 | 14700 | 0.2014 |
| 0.1794 | 14710 | 0.2019 |
| 0.1795 | 14720 | 0.2022 |
| 0.1796 | 14730 | 0.2004 |
| 0.1798 | 14740 | 0.202 |
| 0.1799 | 14750 | 0.2025 |
| 0.18 | 14760 | 0.2012 |
| 0.1801 | 14770 | 0.2024 |
| 0.1802 | 14780 | 0.2032 |
| 0.1804 | 14790 | 0.2009 |
| 0.1805 | 14800 | 0.2014 |
| 0.1806 | 14810 | 0.2036 |
| 0.1807 | 14820 | 0.2033 |
| 0.1809 | 14830 | 0.2024 |
| 0.1810 | 14840 | 0.2016 |
| 0.1811 | 14850 | 0.2041 |
| 0.1812 | 14860 | 0.2024 |
| 0.1813 | 14870 | 0.2031 |
| 0.1815 | 14880 | 0.2031 |
| 0.1816 | 14890 | 0.2029 |
| 0.1817 | 14900 | 0.2006 |
| 0.1818 | 14910 | 0.2031 |
| 0.1820 | 14920 | 0.202 |
| 0.1821 | 14930 | 0.2035 |
| 0.1822 | 14940 | 0.2027 |
| 0.1823 | 14950 | 0.2028 |
| 0.1824 | 14960 | 0.2028 |
| 0.1826 | 14970 | 0.2026 |
| 0.1827 | 14980 | 0.2015 |
| 0.1828 | 14990 | 0.2014 |
| 0.1829 | 15000 | 0.2032 |
| 0.1830 | 15010 | 0.2022 |
| 0.1832 | 15020 | 0.2029 |
| 0.1833 | 15030 | 0.2025 |
| 0.1834 | 15040 | 0.2024 |
| 0.1835 | 15050 | 0.2009 |
| 0.1837 | 15060 | 0.2008 |
| 0.1838 | 15070 | 0.2008 |
| 0.1839 | 15080 | 0.2018 |
| 0.1840 | 15090 | 0.2039 |
| 0.1841 | 15100 | 0.2009 |
| 0.1843 | 15110 | 0.2038 |
| 0.1844 | 15120 | 0.202 |
| 0.1845 | 15130 | 0.2007 |
| 0.1846 | 15140 | 0.202 |
| 0.1848 | 15150 | 0.2036 |
| 0.1849 | 15160 | 0.2035 |
| 0.185 | 15170 | 0.2029 |
| 0.1851 | 15180 | 0.2039 |
| 0.1852 | 15190 | 0.2027 |
| 0.1854 | 15200 | 0.2011 |
| 0.1855 | 15210 | 0.2019 |
| 0.1856 | 15220 | 0.2014 |
| 0.1857 | 15230 | 0.2027 |
| 0.1859 | 15240 | 0.2023 |
| 0.1860 | 15250 | 0.2025 |
| 0.1861 | 15260 | 0.2017 |
| 0.1862 | 15270 | 0.2022 |
| 0.1863 | 15280 | 0.2007 |
| 0.1865 | 15290 | 0.2031 |
| 0.1866 | 15300 | 0.2035 |
| 0.1867 | 15310 | 0.2024 |
| 0.1868 | 15320 | 0.203 |
| 0.1870 | 15330 | 0.2005 |
| 0.1871 | 15340 | 0.2025 |
| 0.1872 | 15350 | 0.2015 |
| 0.1873 | 15360 | 0.2019 |
| 0.1874 | 15370 | 0.2022 |
| 0.1876 | 15380 | 0.2025 |
| 0.1877 | 15390 | 0.2009 |
| 0.1878 | 15400 | 0.2037 |
| 0.1879 | 15410 | 0.2004 |
| 0.1880 | 15420 | 0.202 |
| 0.1882 | 15430 | 0.2037 |
| 0.1883 | 15440 | 0.2023 |
| 0.1884 | 15450 | 0.2032 |
| 0.1885 | 15460 | 0.203 |
| 0.1887 | 15470 | 0.2032 |
| 0.1888 | 15480 | 0.2037 |
| 0.1889 | 15490 | 0.2024 |
| 0.1890 | 15500 | 0.2011 |
| 0.1891 | 15510 | 0.2032 |
| 0.1893 | 15520 | 0.2019 |
| 0.1894 | 15530 | 0.2023 |
| 0.1895 | 15540 | 0.2011 |
| 0.1896 | 15550 | 0.2005 |
| 0.1898 | 15560 | 0.2039 |
| 0.1899 | 15570 | 0.2024 |
| 0.19 | 15580 | 0.2025 |
| 0.1901 | 15590 | 0.204 |
| 0.1902 | 15600 | 0.2023 |
| 0.1904 | 15610 | 0.2022 |
| 0.1905 | 15620 | 0.2026 |
| 0.1906 | 15630 | 0.2034 |
| 0.1907 | 15640 | 0.2024 |
| 0.1909 | 15650 | 0.2014 |
| 0.1910 | 15660 | 0.2016 |
| 0.1911 | 15670 | 0.2039 |
| 0.1912 | 15680 | 0.2016 |
| 0.1913 | 15690 | 0.204 |
| 0.1915 | 15700 | 0.2038 |
| 0.1916 | 15710 | 0.2016 |
| 0.1917 | 15720 | 0.2029 |
| 0.1918 | 15730 | 0.2021 |
| 0.1920 | 15740 | 0.2026 |
| 0.1921 | 15750 | 0.2001 |
| 0.1922 | 15760 | 0.2031 |
| 0.1923 | 15770 | 0.2039 |
| 0.1924 | 15780 | 0.2023 |
| 0.1926 | 15790 | 0.2043 |
| 0.1927 | 15800 | 0.2007 |
| 0.1928 | 15810 | 0.2002 |
| 0.1929 | 15820 | 0.2014 |
| 0.1930 | 15830 | 0.2027 |
| 0.1932 | 15840 | 0.2018 |
| 0.1933 | 15850 | 0.2022 |
| 0.1934 | 15860 | 0.203 |
| 0.1935 | 15870 | 0.2033 |
| 0.1937 | 15880 | 0.2023 |
| 0.1938 | 15890 | 0.2027 |
| 0.1939 | 15900 | 0.205 |
| 0.1940 | 15910 | 0.2033 |
| 0.1941 | 15920 | 0.2042 |
| 0.1943 | 15930 | 0.202 |
| 0.1944 | 15940 | 0.2028 |
| 0.1945 | 15950 | 0.2021 |
| 0.1946 | 15960 | 0.2025 |
| 0.1948 | 15970 | 0.2053 |
| 0.1949 | 15980 | 0.2041 |
| 0.195 | 15990 | 0.2025 |
| 0.1951 | 16000 | 0.203 |
| 0.1952 | 16010 | 0.2003 |
| 0.1954 | 16020 | 0.201 |
| 0.1955 | 16030 | 0.2007 |
| 0.1956 | 16040 | 0.2021 |
| 0.1957 | 16050 | 0.2021 |
| 0.1959 | 16060 | 0.2032 |
| 0.1960 | 16070 | 0.2046 |
| 0.1961 | 16080 | 0.201 |
| 0.1962 | 16090 | 0.201 |
| 0.1963 | 16100 | 0.2019 |
| 0.1965 | 16110 | 0.201 |
| 0.1966 | 16120 | 0.2008 |
| 0.1967 | 16130 | 0.2023 |
| 0.1968 | 16140 | 0.2018 |
| 0.1970 | 16150 | 0.2037 |
| 0.1971 | 16160 | 0.2002 |
| 0.1972 | 16170 | 0.2033 |
| 0.1973 | 16180 | 0.2011 |
| 0.1974 | 16190 | 0.2023 |
| 0.1976 | 16200 | 0.2022 |
| 0.1977 | 16210 | 0.2032 |
| 0.1978 | 16220 | 0.2035 |
| 0.1979 | 16230 | 0.2019 |
| 0.1980 | 16240 | 0.2043 |
| 0.1982 | 16250 | 0.2014 |
| 0.1983 | 16260 | 0.2021 |
| 0.1984 | 16270 | 0.2012 |
| 0.1985 | 16280 | 0.2027 |
| 0.1987 | 16290 | 0.2037 |
| 0.1988 | 16300 | 0.2013 |
| 0.1989 | 16310 | 0.2014 |
| 0.1990 | 16320 | 0.2017 |
| 0.1991 | 16330 | 0.2031 |
| 0.1993 | 16340 | 0.2028 |
| 0.1994 | 16350 | 0.2022 |
| 0.1995 | 16360 | 0.2013 |
| 0.1996 | 16370 | 0.203 |
| 0.1998 | 16380 | 0.2012 |
| 0.1999 | 16390 | 0.2006 |
| 0.2 | 16400 | 0.1999 |
| 0.2001 | 16410 | 0.2016 |
| 0.2002 | 16420 | 0.2017 |
| 0.2004 | 16430 | 0.204 |
| 0.2005 | 16440 | 0.2034 |
| 0.2006 | 16450 | 0.2018 |
| 0.2007 | 16460 | 0.2012 |
| 0.2009 | 16470 | 0.2035 |
| 0.2010 | 16480 | 0.2025 |
| 0.2011 | 16490 | 0.2019 |
| 0.2012 | 16500 | 0.203 |
| 0.2013 | 16510 | 0.2027 |
| 0.2015 | 16520 | 0.1991 |
| 0.2016 | 16530 | 0.2015 |
| 0.2017 | 16540 | 0.2009 |
| 0.2018 | 16550 | 0.2022 |
| 0.2020 | 16560 | 0.2025 |
| 0.2021 | 16570 | 0.2003 |
| 0.2022 | 16580 | 0.2033 |
| 0.2023 | 16590 | 0.2027 |
| 0.2024 | 16600 | 0.2033 |
| 0.2026 | 16610 | 0.2014 |
| 0.2027 | 16620 | 0.2038 |
| 0.2028 | 16630 | 0.2013 |
| 0.2029 | 16640 | 0.2024 |
| 0.2030 | 16650 | 0.2025 |
| 0.2032 | 16660 | 0.2021 |
| 0.2033 | 16670 | 0.2029 |
| 0.2034 | 16680 | 0.2021 |
| 0.2035 | 16690 | 0.2043 |
| 0.2037 | 16700 | 0.2033 |
| 0.2038 | 16710 | 0.2015 |
| 0.2039 | 16720 | 0.2021 |
| 0.2040 | 16730 | 0.2019 |
| 0.2041 | 16740 | 0.2017 |
| 0.2043 | 16750 | 0.2014 |
| 0.2044 | 16760 | 0.2021 |
| 0.2045 | 16770 | 0.2008 |
| 0.2046 | 16780 | 0.2029 |
| 0.2048 | 16790 | 0.2017 |
| 0.2049 | 16800 | 0.2027 |
| 0.205 | 16810 | 0.2012 |
| 0.2051 | 16820 | 0.2038 |
| 0.2052 | 16830 | 0.2026 |
| 0.2054 | 16840 | 0.2025 |
| 0.2055 | 16850 | 0.2023 |
| 0.2056 | 16860 | 0.2024 |
| 0.2057 | 16870 | 0.202 |
| 0.2059 | 16880 | 0.2035 |
| 0.2060 | 16890 | 0.2028 |
| 0.2061 | 16900 | 0.2029 |
| 0.2062 | 16910 | 0.2039 |
| 0.2063 | 16920 | 0.2016 |
| 0.2065 | 16930 | 0.2029 |
| 0.2066 | 16940 | 0.2008 |
| 0.2067 | 16950 | 0.2037 |
| 0.2068 | 16960 | 0.2024 |
| 0.2070 | 16970 | 0.2015 |
| 0.2071 | 16980 | 0.2021 |
| 0.2072 | 16990 | 0.1993 |
| 0.2073 | 17000 | 0.2031 |
| 0.2074 | 17010 | 0.2019 |
| 0.2076 | 17020 | 0.2034 |
| 0.2077 | 17030 | 0.2029 |
| 0.2078 | 17040 | 0.2035 |
| 0.2079 | 17050 | 0.2033 |
| 0.2080 | 17060 | 0.2005 |
| 0.2082 | 17070 | 0.2027 |
| 0.2083 | 17080 | 0.2022 |
| 0.2084 | 17090 | 0.201 |
| 0.2085 | 17100 | 0.2034 |
| 0.2087 | 17110 | 0.2027 |
| 0.2088 | 17120 | 0.2018 |
| 0.2089 | 17130 | 0.2035 |
| 0.2090 | 17140 | 0.2037 |
| 0.2091 | 17150 | 0.2016 |
| 0.2093 | 17160 | 0.2021 |
| 0.2094 | 17170 | 0.2017 |
| 0.2095 | 17180 | 0.2017 |
| 0.2096 | 17190 | 0.2029 |
| 0.2098 | 17200 | 0.2034 |
| 0.2099 | 17210 | 0.2037 |
| 0.21 | 17220 | 0.2024 |
| 0.2101 | 17230 | 0.2016 |
| 0.2102 | 17240 | 0.2012 |
| 0.2104 | 17250 | 0.2023 |
| 0.2105 | 17260 | 0.2001 |
| 0.2106 | 17270 | 0.2019 |
| 0.2107 | 17280 | 0.2037 |
| 0.2109 | 17290 | 0.2025 |
| 0.2110 | 17300 | 0.2038 |
| 0.2111 | 17310 | 0.2036 |
| 0.2112 | 17320 | 0.202 |
| 0.2113 | 17330 | 0.2022 |
| 0.2115 | 17340 | 0.2025 |
| 0.2116 | 17350 | 0.202 |
| 0.2117 | 17360 | 0.2019 |
| 0.2118 | 17370 | 0.2028 |
| 0.2120 | 17380 | 0.2024 |
| 0.2121 | 17390 | 0.2013 |
| 0.2122 | 17400 | 0.2011 |
| 0.2123 | 17410 | 0.2028 |
| 0.2124 | 17420 | 0.2014 |
| 0.2126 | 17430 | 0.2029 |
| 0.2127 | 17440 | 0.2012 |
| 0.2128 | 17450 | 0.2029 |
| 0.2129 | 17460 | 0.203 |
| 0.2130 | 17470 | 0.2023 |
| 0.2132 | 17480 | 0.2014 |
| 0.2133 | 17490 | 0.2028 |
| 0.2134 | 17500 | 0.2024 |
| 0.2135 | 17510 | 0.2006 |
| 0.2137 | 17520 | 0.2034 |
| 0.2138 | 17530 | 0.2025 |
| 0.2139 | 17540 | 0.2019 |
| 0.2140 | 17550 | 0.2034 |
| 0.2141 | 17560 | 0.2029 |
| 0.2143 | 17570 | 0.2023 |
| 0.2144 | 17580 | 0.2022 |
| 0.2145 | 17590 | 0.2018 |
| 0.2146 | 17600 | 0.2025 |
| 0.2148 | 17610 | 0.2032 |
| 0.2149 | 17620 | 0.1997 |
| 0.215 | 17630 | 0.2016 |
| 0.2151 | 17640 | 0.2012 |
| 0.2152 | 17650 | 0.2023 |
| 0.2154 | 17660 | 0.2012 |
| 0.2155 | 17670 | 0.2032 |
| 0.2156 | 17680 | 0.2026 |
| 0.2157 | 17690 | 0.2019 |
| 0.2159 | 17700 | 0.2003 |
| 0.2160 | 17710 | 0.2042 |
| 0.2161 | 17720 | 0.2006 |
| 0.2162 | 17730 | 0.2012 |
| 0.2163 | 17740 | 0.2029 |
| 0.2165 | 17750 | 0.2018 |
| 0.2166 | 17760 | 0.2017 |
| 0.2167 | 17770 | 0.2024 |
| 0.2168 | 17780 | 0.201 |
| 0.2170 | 17790 | 0.2036 |
| 0.2171 | 17800 | 0.2017 |
| 0.2172 | 17810 | 0.2039 |
| 0.2173 | 17820 | 0.203 |
| 0.2174 | 17830 | 0.2027 |
| 0.2176 | 17840 | 0.2015 |
| 0.2177 | 17850 | 0.2009 |
| 0.2178 | 17860 | 0.201 |
| 0.2179 | 17870 | 0.2016 |
| 0.2180 | 17880 | 0.2018 |
| 0.2182 | 17890 | 0.2021 |
| 0.2183 | 17900 | 0.2023 |
| 0.2184 | 17910 | 0.2025 |
| 0.2185 | 17920 | 0.2015 |
| 0.2187 | 17930 | 0.2037 |
| 0.2188 | 17940 | 0.2018 |
| 0.2189 | 17950 | 0.2009 |
| 0.2190 | 17960 | 0.2026 |
| 0.2191 | 17970 | 0.2015 |
| 0.2193 | 17980 | 0.2015 |
| 0.2194 | 17990 | 0.2004 |
| 0.2195 | 18000 | 0.2019 |
| 0.2196 | 18010 | 0.2018 |
| 0.2198 | 18020 | 0.2006 |
| 0.2199 | 18030 | 0.2013 |
| 0.22 | 18040 | 0.2014 |
| 0.2201 | 18050 | 0.202 |
| 0.2202 | 18060 | 0.203 |
| 0.2204 | 18070 | 0.1999 |
| 0.2205 | 18080 | 0.2017 |
| 0.2206 | 18090 | 0.2022 |
| 0.2207 | 18100 | 0.2017 |
| 0.2209 | 18110 | 0.2017 |
| 0.2210 | 18120 | 0.2015 |
| 0.2211 | 18130 | 0.2007 |
| 0.2212 | 18140 | 0.2022 |
| 0.2213 | 18150 | 0.202 |
| 0.2215 | 18160 | 0.2036 |
| 0.2216 | 18170 | 0.2027 |
| 0.2217 | 18180 | 0.2035 |
| 0.2218 | 18190 | 0.2028 |
| 0.2220 | 18200 | 0.202 |
| 0.2221 | 18210 | 0.2012 |
| 0.2222 | 18220 | 0.2034 |
| 0.2223 | 18230 | 0.2012 |
| 0.2224 | 18240 | 0.2017 |
| 0.2226 | 18250 | 0.2035 |
| 0.2227 | 18260 | 0.2006 |
| 0.2228 | 18270 | 0.202 |
| 0.2229 | 18280 | 0.2021 |
| 0.2230 | 18290 | 0.202 |
| 0.2232 | 18300 | 0.2019 |
| 0.2233 | 18310 | 0.2003 |
| 0.2234 | 18320 | 0.2011 |
| 0.2235 | 18330 | 0.2026 |
| 0.2237 | 18340 | 0.2022 |
| 0.2238 | 18350 | 0.2042 |
| 0.2239 | 18360 | 0.2027 |
| 0.2240 | 18370 | 0.2018 |
| 0.2241 | 18380 | 0.1999 |
| 0.2243 | 18390 | 0.2011 |
| 0.2244 | 18400 | 0.2031 |
| 0.2245 | 18410 | 0.2025 |
| 0.2246 | 18420 | 0.2011 |
| 0.2248 | 18430 | 0.2017 |
| 0.2249 | 18440 | 0.2025 |
| 0.225 | 18450 | 0.2025 |
| 0.2251 | 18460 | 0.2008 |
| 0.2252 | 18470 | 0.2016 |
| 0.2254 | 18480 | 0.203 |
| 0.2255 | 18490 | 0.201 |
| 0.2256 | 18500 | 0.202 |
| 0.2257 | 18510 | 0.2017 |
| 0.2259 | 18520 | 0.2019 |
| 0.2260 | 18530 | 0.2003 |
| 0.2261 | 18540 | 0.2015 |
| 0.2262 | 18550 | 0.2009 |
| 0.2263 | 18560 | 0.202 |
| 0.2265 | 18570 | 0.2008 |
| 0.2266 | 18580 | 0.2025 |
| 0.2267 | 18590 | 0.2017 |
| 0.2268 | 18600 | 0.2026 |
| 0.2270 | 18610 | 0.2023 |
| 0.2271 | 18620 | 0.2009 |
| 0.2272 | 18630 | 0.2019 |
| 0.2273 | 18640 | 0.2037 |
| 0.2274 | 18650 | 0.2013 |
| 0.2276 | 18660 | 0.2015 |
| 0.2277 | 18670 | 0.2034 |
| 0.2278 | 18680 | 0.2008 |
| 0.2279 | 18690 | 0.2024 |
| 0.2280 | 18700 | 0.2019 |
| 0.2282 | 18710 | 0.2018 |
| 0.2283 | 18720 | 0.2028 |
| 0.2284 | 18730 | 0.2018 |
| 0.2285 | 18740 | 0.2015 |
| 0.2287 | 18750 | 0.2011 |
| 0.2288 | 18760 | 0.203 |
| 0.2289 | 18770 | 0.2001 |
| 0.2290 | 18780 | 0.2034 |
| 0.2291 | 18790 | 0.2002 |
| 0.2293 | 18800 | 0.2022 |
| 0.2294 | 18810 | 0.2023 |
| 0.2295 | 18820 | 0.2021 |
| 0.2296 | 18830 | 0.2024 |
| 0.2298 | 18840 | 0.203 |
| 0.2299 | 18850 | 0.2021 |
| 0.23 | 18860 | 0.2006 |
| 0.2301 | 18870 | 0.2038 |
| 0.2302 | 18880 | 0.2023 |
| 0.2304 | 18890 | 0.2019 |
| 0.2305 | 18900 | 0.2031 |
| 0.2306 | 18910 | 0.2033 |
| 0.2307 | 18920 | 0.2006 |
| 0.2309 | 18930 | 0.201 |
| 0.2310 | 18940 | 0.2031 |
| 0.2311 | 18950 | 0.2002 |
| 0.2312 | 18960 | 0.2019 |
| 0.2313 | 18970 | 0.2026 |
| 0.2315 | 18980 | 0.2015 |
| 0.2316 | 18990 | 0.2007 |
| 0.2317 | 19000 | 0.2019 |
| 0.2318 | 19010 | 0.2009 |
| 0.2320 | 19020 | 0.2025 |
| 0.2321 | 19030 | 0.2022 |
| 0.2322 | 19040 | 0.2015 |
| 0.2323 | 19050 | 0.2014 |
| 0.2324 | 19060 | 0.2023 |
| 0.2326 | 19070 | 0.2026 |
| 0.2327 | 19080 | 0.2021 |
| 0.2328 | 19090 | 0.2002 |
| 0.2329 | 19100 | 0.2039 |
| 0.2330 | 19110 | 0.2016 |
| 0.2332 | 19120 | 0.2017 |
| 0.2333 | 19130 | 0.2022 |
| 0.2334 | 19140 | 0.202 |
| 0.2335 | 19150 | 0.2044 |
| 0.2337 | 19160 | 0.2007 |
| 0.2338 | 19170 | 0.202 |
| 0.2339 | 19180 | 0.203 |
| 0.2340 | 19190 | 0.203 |
| 0.2341 | 19200 | 0.2028 |
| 0.2343 | 19210 | 0.2023 |
| 0.2344 | 19220 | 0.201 |
| 0.2345 | 19230 | 0.2005 |
| 0.2346 | 19240 | 0.2013 |
| 0.2348 | 19250 | 0.1999 |
| 0.2349 | 19260 | 0.2035 |
| 0.235 | 19270 | 0.2028 |
| 0.2351 | 19280 | 0.2025 |
| 0.2352 | 19290 | 0.202 |
| 0.2354 | 19300 | 0.2005 |
| 0.2355 | 19310 | 0.2009 |
| 0.2356 | 19320 | 0.2019 |
| 0.2357 | 19330 | 0.2013 |
| 0.2359 | 19340 | 0.202 |
| 0.2360 | 19350 | 0.2007 |
| 0.2361 | 19360 | 0.2021 |
| 0.2362 | 19370 | 0.2021 |
| 0.2363 | 19380 | 0.2006 |
| 0.2365 | 19390 | 0.2003 |
| 0.2366 | 19400 | 0.202 |
| 0.2367 | 19410 | 0.2029 |
| 0.2368 | 19420 | 0.2018 |
| 0.2370 | 19430 | 0.2024 |
| 0.2371 | 19440 | 0.2034 |
| 0.2372 | 19450 | 0.204 |
| 0.2373 | 19460 | 0.2004 |
| 0.2374 | 19470 | 0.201 |
| 0.2376 | 19480 | 0.2013 |
| 0.2377 | 19490 | 0.201 |
| 0.2378 | 19500 | 0.2035 |
| 0.2379 | 19510 | 0.2016 |
| 0.2380 | 19520 | 0.2015 |
| 0.2382 | 19530 | 0.2009 |
| 0.2383 | 19540 | 0.2017 |
| 0.2384 | 19550 | 0.2009 |
| 0.2385 | 19560 | 0.2026 |
| 0.2387 | 19570 | 0.2013 |
| 0.2388 | 19580 | 0.2033 |
| 0.2389 | 19590 | 0.2023 |
| 0.2390 | 19600 | 0.2024 |
| 0.2391 | 19610 | 0.2 |
| 0.2393 | 19620 | 0.2017 |
| 0.2394 | 19630 | 0.2016 |
| 0.2395 | 19640 | 0.2029 |
| 0.2396 | 19650 | 0.2013 |
| 0.2398 | 19660 | 0.1996 |
| 0.2399 | 19670 | 0.2019 |
| 0.24 | 19680 | 0.2019 |
| 0.2401 | 19690 | 0.2022 |
| 0.2402 | 19700 | 0.2019 |
| 0.2404 | 19710 | 0.2013 |
| 0.2405 | 19720 | 0.2028 |
| 0.2406 | 19730 | 0.2003 |
| 0.2407 | 19740 | 0.2012 |
| 0.2409 | 19750 | 0.2012 |
| 0.2410 | 19760 | 0.2031 |
| 0.2411 | 19770 | 0.2013 |
| 0.2412 | 19780 | 0.2028 |
| 0.2413 | 19790 | 0.2032 |
| 0.2415 | 19800 | 0.2028 |
| 0.2416 | 19810 | 0.2016 |
| 0.2417 | 19820 | 0.2023 |
| 0.2418 | 19830 | 0.201 |
| 0.2420 | 19840 | 0.2018 |
| 0.2421 | 19850 | 0.2021 |
| 0.2422 | 19860 | 0.2027 |
| 0.2423 | 19870 | 0.2017 |
| 0.2424 | 19880 | 0.2021 |
| 0.2426 | 19890 | 0.205 |
| 0.2427 | 19900 | 0.2013 |
| 0.2428 | 19910 | 0.2036 |
| 0.2429 | 19920 | 0.2014 |
| 0.2430 | 19930 | 0.2028 |
| 0.2432 | 19940 | 0.2015 |
| 0.2433 | 19950 | 0.2032 |
| 0.2434 | 19960 | 0.2011 |
| 0.2435 | 19970 | 0.2003 |
| 0.2437 | 19980 | 0.2026 |
| 0.2438 | 19990 | 0.202 |
| 0.2439 | 20000 | 0.2025 |
| 0.2440 | 20010 | 0.2017 |
| 0.2441 | 20020 | 0.2023 |
| 0.2443 | 20030 | 0.2009 |
| 0.2444 | 20040 | 0.2011 |
| 0.2445 | 20050 | 0.2023 |
| 0.2446 | 20060 | 0.2011 |
| 0.2448 | 20070 | 0.2039 |
| 0.2449 | 20080 | 0.2026 |
| 0.245 | 20090 | 0.2019 |
| 0.2451 | 20100 | 0.2016 |
| 0.2452 | 20110 | 0.2013 |
| 0.2454 | 20120 | 0.2005 |
| 0.2455 | 20130 | 0.2022 |
| 0.2456 | 20140 | 0.2019 |
| 0.2457 | 20150 | 0.2027 |
| 0.2459 | 20160 | 0.203 |
| 0.2460 | 20170 | 0.2004 |
| 0.2461 | 20180 | 0.2003 |
| 0.2462 | 20190 | 0.2026 |
| 0.2463 | 20200 | 0.2024 |
| 0.2465 | 20210 | 0.2009 |
| 0.2466 | 20220 | 0.2032 |
| 0.2467 | 20230 | 0.2035 |
| 0.2468 | 20240 | 0.2015 |
| 0.2470 | 20250 | 0.2021 |
| 0.2471 | 20260 | 0.2032 |
| 0.2472 | 20270 | 0.2 |
| 0.2473 | 20280 | 0.2008 |
| 0.2474 | 20290 | 0.2015 |
| 0.2476 | 20300 | 0.2011 |
| 0.2477 | 20310 | 0.2021 |
| 0.2478 | 20320 | 0.2025 |
| 0.2479 | 20330 | 0.2015 |
| 0.2480 | 20340 | 0.2006 |
| 0.2482 | 20350 | 0.2007 |
| 0.2483 | 20360 | 0.2002 |
| 0.2484 | 20370 | 0.1998 |
| 0.2485 | 20380 | 0.2002 |
| 0.2487 | 20390 | 0.2038 |
| 0.2488 | 20400 | 0.2032 |
| 0.2489 | 20410 | 0.2021 |
| 0.2490 | 20420 | 0.2026 |
| 0.2491 | 20430 | 0.2007 |
| 0.2493 | 20440 | 0.2004 |
| 0.2494 | 20450 | 0.2021 |
| 0.2495 | 20460 | 0.2024 |
| 0.2496 | 20470 | 0.2024 |
| 0.2498 | 20480 | 0.2021 |
| 0.2499 | 20490 | 0.1988 |
| 0.25 | 20500 | 0.2034 |
| 0.2501 | 20510 | 0.2023 |
| 0.2502 | 20520 | 0.2034 |
| 0.2504 | 20530 | 0.2021 |
| 0.2505 | 20540 | 0.2011 |
| 0.2506 | 20550 | 0.2009 |
| 0.2507 | 20560 | 0.2011 |
| 0.2509 | 20570 | 0.2024 |
| 0.2510 | 20580 | 0.203 |
| 0.2511 | 20590 | 0.202 |
| 0.2512 | 20600 | 0.201 |
| 0.2513 | 20610 | 0.2018 |
| 0.2515 | 20620 | 0.2008 |
| 0.2516 | 20630 | 0.2019 |
| 0.2517 | 20640 | 0.1996 |
| 0.2518 | 20650 | 0.1993 |
| 0.2520 | 20660 | 0.2015 |
| 0.2521 | 20670 | 0.2005 |
| 0.2522 | 20680 | 0.2016 |
| 0.2523 | 20690 | 0.2028 |
| 0.2524 | 20700 | 0.2023 |
| 0.2526 | 20710 | 0.2018 |
| 0.2527 | 20720 | 0.2006 |
| 0.2528 | 20730 | 0.2029 |
| 0.2529 | 20740 | 0.2024 |
| 0.2530 | 20750 | 0.2017 |
| 0.2532 | 20760 | 0.2032 |
| 0.2533 | 20770 | 0.2006 |
| 0.2534 | 20780 | 0.2005 |
| 0.2535 | 20790 | 0.2015 |
| 0.2537 | 20800 | 0.2018 |
| 0.2538 | 20810 | 0.202 |
| 0.2539 | 20820 | 0.202 |
| 0.2540 | 20830 | 0.2025 |
| 0.2541 | 20840 | 0.2024 |
| 0.2543 | 20850 | 0.2031 |
| 0.2544 | 20860 | 0.2015 |
| 0.2545 | 20870 | 0.2013 |
| 0.2546 | 20880 | 0.203 |
| 0.2548 | 20890 | 0.2005 |
| 0.2549 | 20900 | 0.2015 |
| 0.255 | 20910 | 0.2022 |
| 0.2551 | 20920 | 0.2018 |
| 0.2552 | 20930 | 0.201 |
| 0.2554 | 20940 | 0.204 |
| 0.2555 | 20950 | 0.2014 |
| 0.2556 | 20960 | 0.2021 |
| 0.2557 | 20970 | 0.2014 |
| 0.2559 | 20980 | 0.2016 |
| 0.2560 | 20990 | 0.2009 |
| 0.2561 | 21000 | 0.202 |
| 0.2562 | 21010 | 0.2021 |
| 0.2563 | 21020 | 0.2021 |
| 0.2565 | 21030 | 0.2017 |
| 0.2566 | 21040 | 0.2014 |
| 0.2567 | 21050 | 0.2019 |
| 0.2568 | 21060 | 0.2022 |
| 0.2570 | 21070 | 0.203 |
| 0.2571 | 21080 | 0.2014 |
| 0.2572 | 21090 | 0.2005 |
| 0.2573 | 21100 | 0.2011 |
| 0.2574 | 21110 | 0.2021 |
| 0.2576 | 21120 | 0.2019 |
| 0.2577 | 21130 | 0.2028 |
| 0.2578 | 21140 | 0.2017 |
| 0.2579 | 21150 | 0.2026 |
| 0.2580 | 21160 | 0.2 |
| 0.2582 | 21170 | 0.2004 |
| 0.2583 | 21180 | 0.2022 |
| 0.2584 | 21190 | 0.2016 |
| 0.2585 | 21200 | 0.2026 |
| 0.2587 | 21210 | 0.2006 |
| 0.2588 | 21220 | 0.2031 |
| 0.2589 | 21230 | 0.2005 |
| 0.2590 | 21240 | 0.2013 |
| 0.2591 | 21250 | 0.2021 |
| 0.2593 | 21260 | 0.2019 |
| 0.2594 | 21270 | 0.2032 |
| 0.2595 | 21280 | 0.2011 |
| 0.2596 | 21290 | 0.2021 |
| 0.2598 | 21300 | 0.202 |
| 0.2599 | 21310 | 0.2016 |
| 0.26 | 21320 | 0.2019 |
| 0.2601 | 21330 | 0.2025 |
| 0.2602 | 21340 | 0.2006 |
| 0.2604 | 21350 | 0.2021 |
| 0.2605 | 21360 | 0.2031 |
| 0.2606 | 21370 | 0.2019 |
| 0.2607 | 21380 | 0.2023 |
| 0.2609 | 21390 | 0.2005 |
| 0.2610 | 21400 | 0.2002 |
| 0.2611 | 21410 | 0.2014 |
| 0.2612 | 21420 | 0.2013 |
| 0.2613 | 21430 | 0.2017 |
| 0.2615 | 21440 | 0.2028 |
| 0.2616 | 21450 | 0.2014 |
| 0.2617 | 21460 | 0.2018 |
| 0.2618 | 21470 | 0.203 |
| 0.2620 | 21480 | 0.1998 |
| 0.2621 | 21490 | 0.2019 |
| 0.2622 | 21500 | 0.2027 |
| 0.2623 | 21510 | 0.2018 |
| 0.2624 | 21520 | 0.2038 |
| 0.2626 | 21530 | 0.2025 |
| 0.2627 | 21540 | 0.2024 |
| 0.2628 | 21550 | 0.2015 |
| 0.2629 | 21560 | 0.2012 |
| 0.2630 | 21570 | 0.2013 |
| 0.2632 | 21580 | 0.2011 |
| 0.2633 | 21590 | 0.2025 |
| 0.2634 | 21600 | 0.201 |
| 0.2635 | 21610 | 0.2036 |
| 0.2637 | 21620 | 0.1995 |
| 0.2638 | 21630 | 0.2017 |
| 0.2639 | 21640 | 0.2016 |
| 0.2640 | 21650 | 0.2005 |
| 0.2641 | 21660 | 0.2031 |
| 0.2643 | 21670 | 0.202 |
| 0.2644 | 21680 | 0.2023 |
| 0.2645 | 21690 | 0.2013 |
| 0.2646 | 21700 | 0.2011 |
| 0.2648 | 21710 | 0.2008 |
| 0.2649 | 21720 | 0.1996 |
| 0.265 | 21730 | 0.2019 |
| 0.2651 | 21740 | 0.203 |
| 0.2652 | 21750 | 0.2022 |
| 0.2654 | 21760 | 0.2022 |
| 0.2655 | 21770 | 0.203 |
| 0.2656 | 21780 | 0.1992 |
| 0.2657 | 21790 | 0.2029 |
| 0.2659 | 21800 | 0.2023 |
| 0.2660 | 21810 | 0.2009 |
| 0.2661 | 21820 | 0.2006 |
| 0.2662 | 21830 | 0.2019 |
| 0.2663 | 21840 | 0.2024 |
| 0.2665 | 21850 | 0.2037 |
| 0.2666 | 21860 | 0.1995 |
| 0.2667 | 21870 | 0.2014 |
| 0.2668 | 21880 | 0.2008 |
| 0.2670 | 21890 | 0.2025 |
| 0.2671 | 21900 | 0.201 |
| 0.2672 | 21910 | 0.2029 |
| 0.2673 | 21920 | 0.2025 |
| 0.2674 | 21930 | 0.2023 |
| 0.2676 | 21940 | 0.2012 |
| 0.2677 | 21950 | 0.2025 |
| 0.2678 | 21960 | 0.2025 |
| 0.2679 | 21970 | 0.2034 |
| 0.2680 | 21980 | 0.2004 |
| 0.2682 | 21990 | 0.201 |
| 0.2683 | 22000 | 0.2019 |
| 0.2684 | 22010 | 0.2006 |
| 0.2685 | 22020 | 0.2021 |
| 0.2687 | 22030 | 0.2019 |
| 0.2688 | 22040 | 0.2009 |
| 0.2689 | 22050 | 0.2017 |
| 0.2690 | 22060 | 0.2016 |
| 0.2691 | 22070 | 0.202 |
| 0.2693 | 22080 | 0.2011 |
| 0.2694 | 22090 | 0.2007 |
| 0.2695 | 22100 | 0.2014 |
| 0.2696 | 22110 | 0.1998 |
| 0.2698 | 22120 | 0.201 |
| 0.2699 | 22130 | 0.201 |
| 0.27 | 22140 | 0.2015 |
| 0.2701 | 22150 | 0.2011 |
| 0.2702 | 22160 | 0.2016 |
| 0.2704 | 22170 | 0.2016 |
| 0.2705 | 22180 | 0.2025 |
| 0.2706 | 22190 | 0.2002 |
| 0.2707 | 22200 | 0.2006 |
| 0.2709 | 22210 | 0.2021 |
| 0.2710 | 22220 | 0.2024 |
| 0.2711 | 22230 | 0.2021 |
| 0.2712 | 22240 | 0.2008 |
| 0.2713 | 22250 | 0.2012 |
| 0.2715 | 22260 | 0.2004 |
| 0.2716 | 22270 | 0.2018 |
| 0.2717 | 22280 | 0.2013 |
| 0.2718 | 22290 | 0.2001 |
| 0.2720 | 22300 | 0.1991 |
| 0.2721 | 22310 | 0.2016 |
| 0.2722 | 22320 | 0.2018 |
| 0.2723 | 22330 | 0.2025 |
| 0.2724 | 22340 | 0.2015 |
| 0.2726 | 22350 | 0.2019 |
| 0.2727 | 22360 | 0.2019 |
| 0.2728 | 22370 | 0.1999 |
| 0.2729 | 22380 | 0.2009 |
| 0.2730 | 22390 | 0.2027 |
| 0.2732 | 22400 | 0.2002 |
</details>
### Framework Versions
- Python: 3.9.0
- Sentence Transformers: 3.3.0
- Transformers: 4.45.1
- PyTorch: 2.4.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
luaqi/sn29_merged_v13 | luaqi | "2024-08-30T04:51:58Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-30T04:49:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBangshu/GPT2_e7_0_3 | DBangshu | "2024-06-14T04:58:57Z" | 143 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T04:58:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hemg/Audioclasswindows | Hemg | "2024-03-27T04:45:33Z" | 159 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-11T06:02:30Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: Audioclasswindows
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07964601769911504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Audioclasswindows
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6453
- Accuracy: 0.0796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6426 | 0.98 | 14 | 2.6541 | 0.0796 |
| 2.6524 | 1.96 | 28 | 2.6401 | 0.0796 |
| 2.6346 | 2.95 | 42 | 2.6441 | 0.0796 |
| 2.6325 | 3.93 | 56 | 2.6453 | 0.0796 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Siheng99/Llama-3.1-8B-Instruct-SEALONG | Siheng99 | "2024-11-10T13:53:32Z" | 18 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-10T13:51:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KatSoEpic/Discord-LLama-3B-V2 | KatSoEpic | "2025-02-18T04:20:38Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-3B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-18T04:18:31Z" | ---
base_model: unsloth/Llama-3.2-3B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KatSoEpic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meryyllebr543/Lunaris-Codex | meryyllebr543 | "2025-03-06T17:39:09Z" | 0 | 0 | null | [
"code",
"text-generation",
"en",
"dataset:bigcode/the-stack-dedup",
"dataset:nampdn-ai/tiny-codes",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:bigcode/commitpackft",
"license:mit",
"region:us"
] | text-generation | "2025-03-05T18:54:37Z" | ---
license: mit
language:
- en
pipeline_tag: text-generation
datasets:
- bigcode/the-stack-dedup
- nampdn-ai/tiny-codes
- OpenCoder-LLM/opc-sft-stage1
- bigcode/commitpackft
tags:
- code
---
# Lunaris Codex - A Specialized Code Generation Model
## Overview
Welcome to **Lunaris Codex**, a highly specialized, lightweight, and efficient open-source model designed for code generation, focusing on Python, JavaScript, and Go. With only 183 million parameters, Lunaris Codex is optimized to deliver superior performance in generating functional, accurate, and contextually relevant code for developers, powered by advanced techniques and state-of-the-art hardware. This model is ideal for developers, startups, and researchers looking for a focused, fast, and resource-efficient solution for programming tasks, particularly in the Moon Cloud Services ecosystem.
Lunaris Codex leverages a custom architecture, cutting-edge training methodologies, and a curated dataset to excel in its niche, intentionally sacrificing general conversational or reasoning capabilities to prioritize code generation excellence. This README provides a comprehensive overview of the project, its technologies, hardware, data, architecture, and inference system, serving as both an introduction and a simple documentation guide.
## Project Goals
Lunaris Codex aims to:
- Provide a small, efficient model (183M parameters) specialized in generating high-quality code in Python, JavaScript, and Go.
- Outperform larger general-purpose models (e.g., 1-3B parameters) in code-related tasks by focusing 100% on programming data.
- Support developers in Moon Cloud Services with fast, accurate code generation on modern hardware like NVIDIA H100 GPUs.
- Offer an open-source, reproducible solution for code generation, hosted on Hugging Face.
## Technologies Used
Lunaris Codex is built using a modern stack of Python libraries and frameworks, optimized for performance and scalability:
- **PyTorch**: The core deep learning framework for building and training the Lunaris Codex model, with support for distributed training and mixed precision (FP16/BF16).
- **SentencePiece**: A subword tokenization library used to create a custom BPE tokenizer (`lunaris_spm.model`) with a vocabulary size of 16,000, tailored for code and special tokens (e.g., `<CODE-INSTRUCTION>`, `<CODE-SOLUTION>`).
- **Hugging Face Datasets**: Used to stream and process large datasets like `bigcode/the-stack-dedup`, `HuggingFaceTB/smollm-corpus`, and others, ensuring efficient data handling on cloud infrastructure.
- **Torch.compile**: Employed for model optimization, enabling faster inference and training on NVIDIA GPUs.
- **NumPy and HDF5**: Used for efficient data storage and loading of tokenized datasets in `.h5` files.
- **alive_progress and tqdm**: For interactive progress bars and user-friendly terminal feedback during processing and training.
- **colorama and pyfiglet**: For colorful, engaging terminal interfaces in the inference system.
- **argparse**: For flexible command-line configuration of training and inference scripts.
- **logging**: For detailed logging of training, preprocessing, and inference processes to files and streams.
## Hardware
Lunaris Codex was developed and trained on state-of-the-art hardware provided by OVH Cloud:
- **NVIDIA H100 GPUs**: Two H100 PCIe GPUs, each with 80GB of VRAM, power the training and inference, enabling high-throughput processing with distributed training across multiple GPUs (`--nproc_per_node=2`).
- **CUDA and cuDNN**: Leverages NVIDIA’s CUDA toolkit (version 12.4) and cuDNN for optimized matrix operations and deep learning performance, with TF32 precision enabled for faster matrix multiplications.
- **OVH Cloud Infrastructure**: The model runs on a high-performance cloud instance (`job-37de1a4a-b6c5-44f3-ab8f-09d7b63f56d0`), ensuring scalability and reliability for Moon Cloud Services.
## Data
Lunaris Codex is trained on a carefully curated dataset of approximately 5 million examples, focusing exclusively on code and code-related instructions in Python, JavaScript, and Go. The datasets include:
- **bigcode/the-stack-dedup**: ~236 million examples, filtered for Python (~2.7M), JavaScript (~1.15M), and Go (~900K) in English, focusing on code commits and changes.
- **nampdn-ai/tiny-codes**: ~1.2 million examples, with ~400K per language, containing code instructions and solutions.
- **OpenCoder-LLM/opc-sft-stage1**: ~200K examples, filtered for real-user instructions and code in English.
- **HuggingFaceTB/smollm-corpus (Cosmopedia v2)**: ~200K examples from 39.1 million synthetic samples, filtered for code in Python, JS, and Go.
- **HuggingFaceTB/smollm-corpus (FineWeb-Edu-Dedup)**: ~200K examples from 190 million educational web pages, filtered for code-related content.
- **bigcode/commitpackft**: ~100K examples (50K Python, 40K JavaScript, 10K Go), focusing on commit histories and code changes.
- **bigcode/self-oss-instruct-sc2-exec-filter-50k**: 50K high-quality, execution-validated examples for code instructions and solutions.
The data is preprocessed using custom Python scripts (`prepare_data.py`), tokenized with SentencePiece, and stored in HDF5 files (`train_tokenized.h5`, `val_tokenized.h5`). Examples are limited to 4,192 characters (pre-tokenization) and 2,048 tokens (post-tokenization) to optimize for hardware constraints and model performance, with filters ensuring English-only content and code presence (using `is_english_only` and `has_code`).
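To make the character limit and filters concrete, below is a minimal Python sketch of the kind of checks described above. The real `prepare_data.py` is not reproduced here, so the helper bodies (ASCII-ratio check, keyword matching) are illustrative assumptions rather than the project's exact heuristics; only the 4,192-character limit and the function names `is_english_only` and `has_code` come from the card itself.

```python
# Hedged sketch of the pre-tokenization filtering step described above.
import re

MAX_CHARS = 4192  # pre-tokenization character limit from the card
FENCE = "`" * 3   # markdown code-fence marker

CODE_HINTS = re.compile(r"(def |function |func |class |import |=>|\{)")

def is_english_only(text: str) -> bool:
    # Crude assumption: treat mostly-ASCII text as English-only.
    ascii_chars = sum(1 for c in text if ord(c) < 128)
    return ascii_chars / max(len(text), 1) > 0.95

def has_code(text: str) -> bool:
    # Assumption: accept examples containing a fenced block or common code keywords.
    return FENCE in text or bool(CODE_HINTS.search(text))

def keep_example(text: str) -> bool:
    return len(text) <= MAX_CHARS and is_english_only(text) and has_code(text)
```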
## Architecture
Lunaris Codex features a custom Transformer-based architecture designed for efficiency and code generation, implemented in `model.py`:
- **LunarisCodex**: A decoder-only Transformer model with:
- **Vocabulary Size**: 16,000 tokens (SentencePiece BPE).
- **Model Dimension (`d_model`)**: 976, adjusted for ~183M parameters.
- **Layers (`n_layers`)**: 11 Transformer decoder blocks.
- **Heads (`n_heads`)**: 16 attention heads.
- **Maximum Sequence Length (`max_seq_len`)**: 2,048 tokens.
- **Dropout**: 0.1 for regularization.
- **Activation**: SwiGLU for efficient feed-forward layers.
- **Attention Mechanism**: Uses ALiBi (Attention with Linear Biases) for position encoding, avoiding traditional positional embeddings.
- **Total Parameters**: ~183,432,368, calculated as:
- Token embeddings: `vocab_size * d_model = 16,000 * 976`
- Transformer layers: `n_layers * (2 * d_model * d_model + 3 * d_model * n_heads * head_dim)`
- Output head: `d_model * vocab_size`
The model includes custom components like `TransformerDecoderBlock`, `SelfAttention`, and `FeedForward`, with optimized initialization (normal distribution, scaled for layers) and support for mixed precision training (BF16/FP16) on NVIDIA H100 GPUs.
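For quick reference, the hyperparameters listed above can be gathered into a small configuration sketch. The field names below are illustrative and may not match the actual classes in `model.py`.

```python
# Illustrative configuration mirroring the listed hyperparameters; not the project's
# real config class, just a compact summary of the numbers above.
from dataclasses import dataclass

@dataclass
class LunarisCodexConfigSketch:
    vocab_size: int = 16_000   # SentencePiece BPE vocabulary
    d_model: int = 976
    n_layers: int = 11
    n_heads: int = 16
    max_seq_len: int = 2_048
    dropout: float = 0.1
    activation: str = "swiglu"  # SwiGLU feed-forward
    positions: str = "alibi"    # ALiBi attention biases, no positional embeddings

cfg = LunarisCodexConfigSketch()
assert cfg.d_model % cfg.n_heads == 0  # head_dim = 61
```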
## Inference
Inference is handled by `inference.py`, providing an interactive terminal interface for code generation:
- **System**:
- Loads the trained model and SentencePiece tokenizer (`lunaris_spm.model`) from checkpoints (e.g., `output/checkpoint-2000`).
- Supports CUDA on H100 GPUs for fast inference, with fallback to CPU.
- Uses an interactive menu with options like manual prompts, example prompts, batch generation, and parameter tuning (temperature, top-k, top-p, repetition penalty).
- **Prompt Format**:
- For optimal performance, prompts should use `<CODE-INSTRUCTION>[instruction]</CODE-INSTRUCTION>` (e.g., `<CODE-INSTRUCTION>Write a Python function to calculate the factorial of n:</CODE-INSTRUCTION>`), aligning with training data format.
- **Features**:
- Generates code up to 200 tokens, with cleaning for Python, JavaScript, and Go (removing comments, boilerplate, and noise).
- Supports saving outputs to files, loading prompts from files, and batch processing.
- Includes colorful, user-friendly terminal output with `colorama` and `alive_progress`.
- **Performance**:
- Optimized for low-latency code generation on H100 GPUs, leveraging `torch.compile` and SentencePiece tokenization.
## Usage
### Prerequisites
- Python 3.11 or higher.
- PyTorch with CUDA 12.4 support for NVIDIA H100 GPUs.
- SentencePiece, NumPy, HDF5, and other dependencies listed in `requirements.txt`.
### Inference
Run the inference system with `inference.py`:
```bash
python inference.py --checkpoint_path output/checkpoint-2000 --tokenizer_path processed_data/lunaris_spm.model --device cuda
```
Use the interactive menu to generate code, load prompts, or adjust parameters.
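For programmatic use outside the interactive menu, a prompt can be built in the `<CODE-INSTRUCTION>` format and tokenized with the bundled SentencePiece model. The sketch below covers only prompt construction and tokenization; loading the checkpoint and running generation depend on the `LunarisCodex` code in `model.py`/`inference.py`, which is not shown here.

```python
# Minimal prompt-building sketch; generation itself is handled by inference.py.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="processed_data/lunaris_spm.model")

instruction = "Write a Python function to calculate the factorial of n:"
prompt = f"<CODE-INSTRUCTION>{instruction}</CODE-INSTRUCTION>"

ids = sp.encode(prompt, out_type=int)
print(f"{len(ids)} prompt tokens")  # keep well under the 2,048-token context window
```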
## Contributing
We welcome contributions to Lunaris Codex! Please open issues or pull requests on the Hugging Face repository for bug fixes, new features, or dataset additions. Ensure compatibility with Python 3.11, PyTorch, and the specified hardware.
## License
Lunaris Codex is released under the MIT License. See `LICENSE` for details.
## Contact
For questions or collaboration, contact Moon Cloud Services at [[email protected]](mailto:[email protected]) or Francisco (lead developer) on X or via the Hugging Face Discussions tab.
--- |
OpenGVLab/ASMv2 | OpenGVLab | "2024-02-29T19:25:48Z" | 106 | 17 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-14T03:12:18Z" | ---
license: apache-2.0
---
# ASMv2 Model Card
## Model details
**Model type:**
ASMv2 is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data.
It integrates the Relation Conversation (ReC) ability while maintaining powerful general capabilities.
This model is also endowed with grounding and referring capabilities, exhibiting state-of-the-art performance on region-level tasks, and can be naturally adapted to the Scene Graph Generation task in an open-ended manner.
**Model date:**
ASMv2 was trained in January 2024.
**Paper or resources for more information:**
https://github.com/OpenGVLab/all-seeing
## License
ASMv2 is open-sourced under the Apache License 2.0.
**Where to send questions or comments about the model:**
https://github.com/OpenGVLab/all-seeing/issues
## Intended use
**Primary intended uses:**
The primary use of ASMv2 is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
The pretrain phase employs [5M filtered samples](https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_filtered.json) from CC12M, [10M filtered samples](https://huggingface.co/datasets/Weiyun1025/AS-V2/blob/main/as_pretrain_10m.json) from AS-1B, and 15M filtered samples from [GRiT](https://huggingface.co/datasets/zzliang/GRIT).
The instruction-tuning phase employs [4M samples](https://huggingface.co/datasets/Weiyun1025/AS-V2/blob/main/as_mix_4m.json) collected from a variety of sources, including image-level datasets
See [here](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2#training) for more details.
## Evaluation dataset
A collection of 20 benchmarks, including 5 academic VQA benchmarks, 7 multimodal benchmarks specifically proposed for instruction-following LMMs, 3 referring expression comprehension benchmarks, 2 region captioning benchmarks, 1 referring question answering benchmark, 1 scene graph generation benchmark, and 1 relation comprehension benchmark. |
vonewman/mistral-7b-lite-dolly | vonewman | "2023-12-26T16:59:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:amazon/MistralLite",
"base_model:adapter:amazon/MistralLite",
"region:us"
] | null | "2023-12-26T16:59:13Z" | ---
library_name: peft
base_model: amazon/MistralLite
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Mr66/my_awesome_qa_model | Mr66 | "2023-12-08T12:23:00Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-12-08T08:08:44Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Mr66/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mr66/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2906
- Validation Loss: 1.8679
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2906 | 1.8679 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MohaK/ppo-SnowballTarget | MohaK | "2023-10-14T16:07:29Z" | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-10-14T16:07:26Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MohaK/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jondurbin/airoboros-c34b-2.2.1 | jondurbin | "2023-09-28T09:39:42Z" | 1,437 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-19T20:22:58Z" | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2.1
---
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
This is essentially a minor "fix" branch of [airoboros-c34b-2.2](https://hf.co/jondurbin/airoboros-c34b-2.2) with a few updates, primarily:
- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
- [(less aggressive) de-censoring](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#de-censoring)
- 5 epochs instead of 3
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts; however, the training data also included a wide sampling of responses with other system prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And a chat scenario that wouldn't require USER/ASSISTANT (but you should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
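As a concrete illustration of that early stopping with the Hugging Face `transformers` library, a custom `StoppingCriteria` can watch the decoded continuation for "USER:". This is only a sketch with assumed decoding settings, not the configuration used to train or evaluate this model.

```python
# Hedged sketch: stop generation once the model starts speaking as "USER:".
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

model_id = "jondurbin/airoboros-c34b-2.2.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

class StopOnText(StoppingCriteria):
    def __init__(self, tokenizer, stop_text, prompt_len):
        self.tokenizer, self.stop_text, self.prompt_len = tokenizer, stop_text, prompt_len
    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return self.stop_text in generated

prompt = "A chat.\nUSER: Give me three tips for writing clear documentation.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
stops = StoppingCriteriaList([StopOnText(tokenizer, "USER:", inputs.input_ids.shape[1])])

output = model.generate(**inputs, max_new_tokens=256, stopping_criteria=stops)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```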
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-c34b-2.2.1/runs/07b08z7m?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit the values it reports to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up if the context is completely unrelated to the question.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
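If you're building these closed-context prompts programmatically, a small helper along these lines keeps the delimiters straight (this is just a sketch of the format described above, not part of the original card):

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a BEGININPUT/BEGINCONTEXT style prompt.

    `blocks` is a list of (metadata, text) pairs, where `metadata` is a dict of
    whatever source attributes (date, url, ...) the model should be able to cite.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source? Don't make up answers if you don't know.",
)
```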
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
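On the application side, the model's JSON output can be parsed and dispatched to a registry of Python callables. The sketch below is hypothetical (the `file_analytics` implementation and the `my_text_file.txt` path are made up for illustration, not part of the card):

```python
import json

# JSON emitted by the model in the example above.
response_text = '''{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {"keyword": "Python"}
  }
}'''

def file_analytics(action, filters):
    # Hypothetical implementation of the example tool described in the prompt.
    if action == "count_occurrences":
        with open("my_text_file.txt", encoding="utf-8") as f:
            return f.read().count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

def dispatch(response_text, functions):
    """Parse the model's JSON function call and invoke the matching callable."""
    call = json.loads(response_text)
    return functions[call["function"]](**call["params"])

print(dispatch(response_text, {"file_analytics": file_analytics}))
```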
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. The code below is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    # Swap any :evidenceN: references for the values gathered so far.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output
    raise NotImplementedError


def parse_plan(plan):
    # Map the tool names used in the plan to their implementations.
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets before handing the input to the tool.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3).strip("[]"), **context)
```
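Once the tool functions are filled in, usage is just `answer = parse_plan(plan_text)`, where `plan_text` is the raw plan emitted by the model.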
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI:
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
bigmorning/whisper_syl_cv12_pad_lob100_low__0165 | bigmorning | "2023-08-25T23:38:24Z" | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-08-25T23:38:17Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0165
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0165
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0362
- Train Wermet: 0.0020
- Validation Loss: 0.7502
- Validation Accuracy: 0.0237
- Validation Wermet: 0.2206
- Epoch: 164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
| 0.0783 | 0.0359 | 0.0169 | 0.6621 | 0.0232 | 0.2540 | 45 |
| 0.0667 | 0.0360 | 0.0121 | 0.6714 | 0.0232 | 0.2532 | 46 |
| 0.0563 | 0.0361 | 0.0094 | 0.6604 | 0.0232 | 0.2503 | 47 |
| 0.0477 | 0.0361 | 0.0072 | 0.6620 | 0.0232 | 0.2489 | 48 |
| 0.0397 | 0.0362 | 0.0055 | 0.6611 | 0.0232 | 0.2502 | 49 |
| 0.0330 | 0.0362 | 0.0045 | 0.6686 | 0.0232 | 0.2496 | 50 |
| 0.0283 | 0.0362 | 0.0033 | 0.6705 | 0.0232 | 0.2503 | 51 |
| 0.0242 | 0.0362 | 0.0034 | 0.6686 | 0.0232 | 0.2486 | 52 |
| 0.0212 | 0.0362 | 0.0031 | 0.6686 | 0.0232 | 0.2493 | 53 |
| 0.0197 | 0.0362 | 0.0028 | 0.6688 | 0.0232 | 0.2530 | 54 |
| 0.0226 | 0.0362 | 0.0041 | 0.6598 | 0.0233 | 0.2451 | 55 |
| 0.0158 | 0.0362 | 0.0024 | 0.6605 | 0.0233 | 0.2428 | 56 |
| 0.0115 | 0.0362 | 0.0018 | 0.6648 | 0.0233 | 0.2435 | 57 |
| 0.0094 | 0.0362 | 0.0017 | 0.6672 | 0.0233 | 0.2446 | 58 |
| 0.0081 | 0.0362 | 0.0018 | 0.6731 | 0.0233 | 0.2439 | 59 |
| 0.0071 | 0.0362 | 0.0017 | 0.6762 | 0.0233 | 0.2429 | 60 |
| 0.0062 | 0.0362 | 0.0017 | 0.6794 | 0.0233 | 0.2426 | 61 |
| 0.0055 | 0.0362 | 0.0017 | 0.6825 | 0.0233 | 0.2429 | 62 |
| 0.0048 | 0.0362 | 0.0017 | 0.6895 | 0.0233 | 0.2450 | 63 |
| 0.0042 | 0.0362 | 0.0019 | 0.6914 | 0.0233 | 0.2424 | 64 |
| 0.0037 | 0.0362 | 0.0018 | 0.6938 | 0.0233 | 0.2423 | 65 |
| 0.0224 | 0.0361 | 0.0080 | 0.6695 | 0.0234 | 0.2409 | 66 |
| 0.0127 | 0.0362 | 0.0037 | 0.6685 | 0.0234 | 0.2383 | 67 |
| 0.0065 | 0.0362 | 0.0017 | 0.6714 | 0.0234 | 0.2359 | 68 |
| 0.0045 | 0.0362 | 0.0017 | 0.6645 | 0.0234 | 0.2347 | 69 |
| 0.0034 | 0.0362 | 0.0016 | 0.6671 | 0.0234 | 0.2353 | 70 |
| 0.0028 | 0.0362 | 0.0014 | 0.6715 | 0.0234 | 0.2354 | 71 |
| 0.0024 | 0.0362 | 0.0014 | 0.6745 | 0.0234 | 0.2358 | 72 |
| 0.0022 | 0.0362 | 0.0014 | 0.6778 | 0.0234 | 0.2356 | 73 |
| 0.0020 | 0.0362 | 0.0013 | 0.6797 | 0.0234 | 0.2357 | 74 |
| 0.0018 | 0.0362 | 0.0014 | 0.6833 | 0.0234 | 0.2355 | 75 |
| 0.0016 | 0.0362 | 0.0013 | 0.6885 | 0.0234 | 0.2363 | 76 |
| 0.0068 | 0.0362 | 0.0035 | 0.7270 | 0.0232 | 0.2500 | 77 |
| 0.0131 | 0.0362 | 0.0076 | 0.6965 | 0.0234 | 0.2397 | 78 |
| 0.0054 | 0.0362 | 0.0088 | 0.6764 | 0.0235 | 0.2339 | 79 |
| 0.0029 | 0.0362 | 0.0041 | 0.6806 | 0.0235 | 0.2334 | 80 |
| 0.0019 | 0.0362 | 0.0039 | 0.6723 | 0.0235 | 0.2316 | 81 |
| 0.0016 | 0.0362 | 0.0028 | 0.6765 | 0.0235 | 0.2315 | 82 |
| 0.0014 | 0.0362 | 0.0025 | 0.6786 | 0.0235 | 0.2306 | 83 |
| 0.0013 | 0.0362 | 0.0023 | 0.6805 | 0.0235 | 0.2304 | 84 |
| 0.0012 | 0.0362 | 0.0022 | 0.6830 | 0.0235 | 0.2301 | 85 |
| 0.0011 | 0.0362 | 0.0022 | 0.6881 | 0.0235 | 0.2308 | 86 |
| 0.0010 | 0.0362 | 0.0022 | 0.6875 | 0.0235 | 0.2303 | 87 |
| 0.0009 | 0.0362 | 0.0022 | 0.6909 | 0.0235 | 0.2307 | 88 |
| 0.0008 | 0.0362 | 0.0020 | 0.6934 | 0.0235 | 0.2299 | 89 |
| 0.0007 | 0.0362 | 0.0022 | 0.6968 | 0.0235 | 0.2307 | 90 |
| 0.0007 | 0.0362 | 0.0020 | 0.7005 | 0.0235 | 0.2300 | 91 |
| 0.0006 | 0.0362 | 0.0021 | 0.7040 | 0.0235 | 0.2307 | 92 |
| 0.0006 | 0.0362 | 0.0020 | 0.7086 | 0.0235 | 0.2309 | 93 |
| 0.0005 | 0.0362 | 0.0020 | 0.7116 | 0.0235 | 0.2318 | 94 |
| 0.0005 | 0.0362 | 0.0018 | 0.7151 | 0.0235 | 0.2305 | 95 |
| 0.0111 | 0.0362 | 0.2014 | 0.7185 | 0.0234 | 0.2861 | 96 |
| 0.0069 | 0.0362 | 0.0051 | 0.7036 | 0.0235 | 0.2337 | 97 |
| 0.0028 | 0.0362 | 0.0015 | 0.6946 | 0.0235 | 0.2324 | 98 |
| 0.0023 | 0.0362 | 0.0018 | 0.6937 | 0.0235 | 0.2295 | 99 |
| 0.0017 | 0.0362 | 0.0013 | 0.6886 | 0.0235 | 0.2283 | 100 |
| 0.0010 | 0.0362 | 0.0008 | 0.6891 | 0.0236 | 0.2274 | 101 |
| 0.0009 | 0.0362 | 0.0013 | 0.6901 | 0.0236 | 0.2275 | 102 |
| 0.0008 | 0.0362 | 0.0015 | 0.6922 | 0.0236 | 0.2273 | 103 |
| 0.0006 | 0.0362 | 0.0015 | 0.6923 | 0.0236 | 0.2274 | 104 |
| 0.0008 | 0.0362 | 0.0014 | 0.6996 | 0.0235 | 0.2288 | 105 |
| 0.0006 | 0.0362 | 0.0014 | 0.6967 | 0.0236 | 0.2266 | 106 |
| 0.0005 | 0.0362 | 0.0013 | 0.6988 | 0.0236 | 0.2260 | 107 |
| 0.0004 | 0.0362 | 0.0027 | 0.7008 | 0.0236 | 0.2278 | 108 |
| 0.0004 | 0.0362 | 0.0017 | 0.7034 | 0.0236 | 0.2261 | 109 |
| 0.0004 | 0.0362 | 0.0018 | 0.7036 | 0.0236 | 0.2265 | 110 |
| 0.0004 | 0.0362 | 0.0015 | 0.7090 | 0.0236 | 0.2255 | 111 |
| 0.0112 | 0.0362 | 0.0059 | 0.7014 | 0.0235 | 0.2271 | 112 |
| 0.0034 | 0.0362 | 0.0023 | 0.6869 | 0.0236 | 0.2252 | 113 |
| 0.0015 | 0.0362 | 0.0015 | 0.6863 | 0.0236 | 0.2234 | 114 |
| 0.0008 | 0.0362 | 0.0010 | 0.6893 | 0.0236 | 0.2227 | 115 |
| 0.0006 | 0.0362 | 0.0011 | 0.6911 | 0.0236 | 0.2232 | 116 |
| 0.0005 | 0.0362 | 0.0009 | 0.6923 | 0.0236 | 0.2227 | 117 |
| 0.0004 | 0.0362 | 0.0009 | 0.6938 | 0.0236 | 0.2225 | 118 |
| 0.0004 | 0.0362 | 0.0010 | 0.6958 | 0.0236 | 0.2226 | 119 |
| 0.0003 | 0.0362 | 0.0010 | 0.6966 | 0.0236 | 0.2226 | 120 |
| 0.0003 | 0.0362 | 0.0010 | 0.6983 | 0.0236 | 0.2230 | 121 |
| 0.0003 | 0.0362 | 0.0010 | 0.7005 | 0.0236 | 0.2229 | 122 |
| 0.0003 | 0.0362 | 0.0010 | 0.7022 | 0.0236 | 0.2233 | 123 |
| 0.0002 | 0.0362 | 0.0010 | 0.7041 | 0.0236 | 0.2226 | 124 |
| 0.0002 | 0.0362 | 0.0011 | 0.7065 | 0.0236 | 0.2228 | 125 |
| 0.0002 | 0.0362 | 0.0011 | 0.7081 | 0.0236 | 0.2227 | 126 |
| 0.0002 | 0.0362 | 0.0011 | 0.7101 | 0.0236 | 0.2224 | 127 |
| 0.0002 | 0.0362 | 0.0011 | 0.7130 | 0.0236 | 0.2224 | 128 |
| 0.0002 | 0.0362 | 0.0011 | 0.7157 | 0.0236 | 0.2229 | 129 |
| 0.0002 | 0.0362 | 0.0011 | 0.7183 | 0.0236 | 0.2225 | 130 |
| 0.0001 | 0.0362 | 0.0011 | 0.7212 | 0.0236 | 0.2230 | 131 |
| 0.0001 | 0.0362 | 0.0012 | 0.7250 | 0.0236 | 0.2230 | 132 |
| 0.0001 | 0.0362 | 0.0012 | 0.7268 | 0.0236 | 0.2229 | 133 |
| 0.0001 | 0.0362 | 0.0011 | 0.7303 | 0.0236 | 0.2229 | 134 |
| 0.0001 | 0.0362 | 0.0012 | 0.7350 | 0.0236 | 0.2236 | 135 |
| 0.0001 | 0.0362 | 0.0012 | 0.7386 | 0.0236 | 0.2240 | 136 |
| 0.0001 | 0.0362 | 0.0012 | 0.7422 | 0.0236 | 0.2231 | 137 |
| 0.0001 | 0.0362 | 0.0013 | 0.7445 | 0.0236 | 0.2236 | 138 |
| 0.0001 | 0.0362 | 0.0012 | 0.7500 | 0.0236 | 0.2243 | 139 |
| 0.0112 | 0.0361 | 0.0117 | 0.7391 | 0.0235 | 0.2370 | 140 |
| 0.0036 | 0.0362 | 0.0041 | 0.7201 | 0.0236 | 0.2277 | 141 |
| 0.0011 | 0.0362 | 0.0032 | 0.7210 | 0.0236 | 0.2243 | 142 |
| 0.0006 | 0.0362 | 0.0030 | 0.7199 | 0.0236 | 0.2269 | 143 |
| 0.0003 | 0.0362 | 0.0019 | 0.7231 | 0.0236 | 0.2254 | 144 |
| 0.0002 | 0.0362 | 0.0021 | 0.7179 | 0.0236 | 0.2228 | 145 |
| 0.0002 | 0.0362 | 0.0020 | 0.7236 | 0.0236 | 0.2234 | 146 |
| 0.0002 | 0.0362 | 0.0021 | 0.7271 | 0.0236 | 0.2254 | 147 |
| 0.0002 | 0.0362 | 0.0022 | 0.7250 | 0.0236 | 0.2233 | 148 |
| 0.0001 | 0.0362 | 0.0021 | 0.7255 | 0.0236 | 0.2230 | 149 |
| 0.0001 | 0.0362 | 0.0020 | 0.7263 | 0.0236 | 0.2228 | 150 |
| 0.0001 | 0.0362 | 0.0021 | 0.7278 | 0.0236 | 0.2226 | 151 |
| 0.0001 | 0.0362 | 0.0021 | 0.7289 | 0.0237 | 0.2220 | 152 |
| 0.0001 | 0.0362 | 0.0020 | 0.7301 | 0.0237 | 0.2214 | 153 |
| 0.0001 | 0.0362 | 0.0020 | 0.7307 | 0.0237 | 0.2216 | 154 |
| 0.0001 | 0.0362 | 0.0020 | 0.7329 | 0.0237 | 0.2217 | 155 |
| 0.0001 | 0.0362 | 0.0020 | 0.7339 | 0.0237 | 0.2211 | 156 |
| 0.0001 | 0.0362 | 0.0020 | 0.7354 | 0.0237 | 0.2210 | 157 |
| 0.0001 | 0.0362 | 0.0020 | 0.7374 | 0.0237 | 0.2207 | 158 |
| 0.0001 | 0.0362 | 0.0020 | 0.7394 | 0.0237 | 0.2211 | 159 |
| 0.0001 | 0.0362 | 0.0020 | 0.7406 | 0.0237 | 0.2212 | 160 |
| 0.0001 | 0.0362 | 0.0021 | 0.7422 | 0.0237 | 0.2213 | 161 |
| 0.0001 | 0.0362 | 0.0020 | 0.7446 | 0.0237 | 0.2207 | 162 |
| 0.0001 | 0.0362 | 0.0020 | 0.7471 | 0.0237 | 0.2209 | 163 |
| 0.0000 | 0.0362 | 0.0020 | 0.7502 | 0.0237 | 0.2206 | 164 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
KeivanR/Qwen2.5-1.5B-Instruct-MLB-clf_lora2-1743330308 | KeivanR | "2025-03-30T11:33:41Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-03-30T11:30:46Z" | ---
language: en
tags:
- machine-learning
- your-tags-here
license: apache-2.0
---
# Model Card for KeivanR/Qwen2.5-1.5B-Instruct-MLB-clf_lora2-1743330308
## Evaluation Results
### Overall Metrics
- **Accuracy:** 0.14
- **Precision (Macro):** 0.33
- **Recall (Macro):** 0.78
- **F1 Score (Macro):** 0.46
### Per-Class Metrics
| | class | precision | recall | f1 |
|---:|:--------------|------------:|---------:|-----:|
| 0 | math | 0.42 | 0.74 | 0.53 |
| 1 | graphs | 0.4 | 0.75 | 0.52 |
| 2 | strings | 0.41 | 0.93 | 0.56 |
| 3 | number theory | 0.16 | 0.76 | 0.27 |
| 4 | trees | 0.45 | 0.79 | 0.57 |
| 5 | geometry | 0.25 | 0.84 | 0.39 |
| 6 | games | 0.27 | 0.8 | 0.4 |
| 7 | probabilities | 0.1 | 0.47 | 0.16 |
| 8 | other | 0.54 | 0.95 | 0.68 |
*Metrics rounded to 2 decimal places*
|
localmodels/LLaMA-65B-ggml | localmodels | "2023-07-16T16:22:41Z" | 0 | 1 | null | [
"region:us"
] | null | "2023-07-16T16:22:41Z" | ---
duplicated_from: localmodels/LLM
---
# LLaMA 65B ggml
From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
---
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48.
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387.
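As a rough usage sketch (not part of the original card), the ggml files can be loaded with `llama-cpp-python`; note that ggmlv3 files require a release from before the GGUF format switch (roughly `llama-cpp-python<=0.1.78`), since newer versions only load GGUF:

```python
from llama_cpp import Llama

# Pick a quant that fits your RAM budget from the table below.
llm = Llama(model_path="llama-65b.ggmlv3.q4_K_M.bin", n_ctx=2048, n_threads=8)

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
    echo=False,
)
print(out["choices"][0]["text"])
```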
---
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB| 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB| 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB| 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB| 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB| 39.23 GB | Original quant method, 4-bit. |
| llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB| 43.31 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB| 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB| 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB| 47.39 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB| 51.47 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB| 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB| 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB| 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | |
furrutiav/roberta_mixtral_nllfg_rubric_mrpc_sentence_embd_perplexity | furrutiav | "2024-11-26T14:37:02Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-11-25T16:07:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
snzhang/GPT2-Poem-Small | snzhang | "2023-06-06T00:57:45Z" | 141 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"art",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-05T14:55:32Z" | ---
license: apache-2.0
language:
- zh
library_name: transformers
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
top_p: 0.6
repetition_penalty: 1.1
max_new_tokens: 128
num_return_sequences: 3
do_sample: true
tags:
- art
widget:
- 笔底江山助磅礴
- (唐诗:秋思)诗词
- (宋词:浣溪沙)秋
- (对联)冬
---
# Chinese Poem and Couplet Small GPT2 Model
## Model description
The model is used to generate classical Chinese poems and couplets. It is based on [IDEA-CCNL/Wenzhong-GPT2-110M](https://huggingface.co/IDEA-CCNL/Wenzhong-GPT2-110M).
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("snzhang/GPT2-Poem-Small")
>>> model = GPT2LMHeadModel.from_pretrained("snzhang/GPT2-Poem-Small")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("笔底江山助磅礴", max_length=50, do_sample=True)
[{'generated_text':'笔底江山助磅礴,万卷诗书见成章。'}]
```
And you can add the prefix "(唐诗:your title)"、"(宋词:your title)" and "(对联)" to make generation more precise.
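For example, reusing the `text_generator` pipeline defined above (output will vary from run to run):

```python
>>> text_generator("(唐诗:秋思)", max_length=50, do_sample=True)
```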
## Training data
The training data contains 71,334 classical Chinese poems and couplets, collected from [Chinese Poetry](https://github.com/chinese-poetry/chinese-poetry) and the [Couplet Dataset](https://github.com/wb14123/couplet-dataset).
## More Details
You can get more details in [GPT2-Poem-Small](https://github.com/h7nian/GPT2-Poem-Small)
|
diffusers/ddpm_dummy | diffusers | "2023-02-08T12:31:14Z" | 5,247 | 1 | transformers | [
"transformers",
"hf_diffuse",
"endpoints_compatible",
"region:us"
] | null | "2022-05-31T12:37:35Z" | ---
tags:
- hf_diffuse
---
# Dummy diffusion model following the architecture of https://github.com/lucidrains/denoising-diffusion-pytorch
Run the model as follows:
```python
from diffusers import UNetModel, GaussianDiffusion
import torch
# 1. Load model
unet = UNetModel.from_pretrained("fusing/ddpm_dummy")
# 2. Do one denoising step with model
batch_size, num_channels, height, width = 1, 3, 32, 32
dummy_noise = torch.ones((batch_size, num_channels, height, width))
time_step = torch.tensor([10])
image = unet(dummy_noise, time_step)
# 3. Load sampler
sampler = GaussianDiffusion.from_config("fusing/ddpm_dummy")
# 4. Sample image from sampler passing the model
image = sampler.sample(unet, batch_size=1)
print(image)
``` |
skfrost19/reranker-MiniLM-L12-H384-uncased-msmarco-bce | skfrost19 | "2025-04-01T15:06:54Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"generated_from_trainer",
"dataset_size:1990000",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"arxiv:1908.10084",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"model-index",
"region:us"
] | text-ranking | "2025-04-01T15:06:50Z" | |
daniel40/c31ceee5-249c-4762-b544-2d5c34f571cb | daniel40 | "2025-03-11T15:25:59Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"region:us"
] | null | "2025-03-11T15:25:46Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: princeton-nlp/Sheared-LLaMA-1.3B
model-index:
- name: daniel40/c31ceee5-249c-4762-b544-2d5c34f571cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/c31ceee5-249c-4762-b544-2d5c34f571cb
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
konsman/setfit-messages-generated-test | konsman | "2024-01-18T20:38:03Z" | 5 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | "2024-01-18T20:37:46Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: A gentle nudge to complete the healthcare webinar questionnaire sent last
week.
- text: Sudden severe chest pain, suspecting a cardiac emergency.
- text: Annual physical examination due in Tuesday, March 05. Please book an appointment.
- text: Please confirm your attendance at the lifestyle next month.
- text: Could you verify your emergency contact details in our records?
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.85
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'Rapid onset of confusion and weakness, urgent evaluation needed.'</li><li>'Unconscious patient found, immediate medical response required.'</li><li>'Urgent: Suspected heart attack, immediate medical attention required.'</li></ul> |
| 1 | <ul><li>'Reminder: Your dental check-up is scheduled for Monday, February 05.'</li><li>'Reminder: Your dental check-up is scheduled for Saturday, February 24.'</li><li>'Nutritionist appointment reminder for Sunday, January 21.'</li></ul> |
| 0 | <ul><li>'Could you verify your lifestyle contact details in our records?'</li><li>'Kindly update your emergency contact list at your earliest convenience.'</li><li>'We request you to update your wellness information for our records.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.85 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("konsman/setfit-messages-generated-test")
# Run inference
preds = model("Sudden severe chest pain, suspecting a cardiac emergency.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 7 | 10.125 | 12 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
| 2 | 16 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2.2041595048800003e-05, 2.2041595048800003e-05)
- head_learning_rate: 2.2041595048800003e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0021 | 1 | 0.2841 | - |
| 0.1042 | 50 | 0.0603 | - |
| 0.2083 | 100 | 0.0017 | - |
| 0.3125 | 150 | 0.0003 | - |
| 0.4167 | 200 | 0.0004 | - |
| 0.5208 | 250 | 0.0003 | - |
| 0.625 | 300 | 0.0003 | - |
| 0.7292 | 350 | 0.0002 | - |
| 0.8333 | 400 | 0.0003 | - |
| 0.9375 | 450 | 0.0001 | - |
| 1.0417 | 500 | 0.0002 | - |
| 1.1458 | 550 | 0.0003 | - |
| 1.25 | 600 | 0.0002 | - |
| 1.3542 | 650 | 0.0002 | - |
| 1.4583 | 700 | 0.0001 | - |
| 1.5625 | 750 | 0.0002 | - |
| 1.6667 | 800 | 0.0001 | - |
| 1.7708 | 850 | 0.0001 | - |
| 1.875 | 900 | 0.0001 | - |
| 1.9792 | 950 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
jamescalam/minilm-arxiv-encoder | jamescalam | "2022-11-09T05:15:38Z" | 5 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-11-09T02:57:58Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jamescalam/minilm-arxiv-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jamescalam/minilm-arxiv-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jamescalam/minilm-arxiv-encoder')
model = AutoModel.from_pretrained('jamescalam/minilm-arxiv-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jamescalam/minilm-arxiv-encoder)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
autoprogrammer/Llama-3.2-1B-Instruct-commonsense_qa-zh-linear | autoprogrammer | "2024-11-21T02:57:09Z" | 144 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T02:54:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
win10/granite-3.1-3b-a800m-t1 | win10 | "2025-03-04T19:17:35Z" | 0 | 0 | null | [
"safetensors",
"granitemoe",
"license:apache-2.0",
"region:us"
] | null | "2025-03-04T16:13:10Z" | ---
license: apache-2.0
---
|
ProbeX/Model-J__SupViT__model_idx_0499 | ProbeX | "2025-04-15T08:52:16Z" | 0 | 0 | null | [
"safetensors",
"vit",
"region:us"
] | null | "2025-04-15T08:52:01Z" |  |
miki030/poca-SoccerTwos-b | miki030 | "2023-05-14T11:22:09Z" | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-05-14T11:18:28Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: miki030/poca-SoccerTwos-b
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
elopezlopez/Bio_ClinicalBERT_fold_1_binary_v1 | elopezlopez | "2022-08-04T07:48:41Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-03T17:12:19Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_1_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_1_binary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7063
- F1: 0.8114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
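For readers who want to set up a comparable run, these values map onto `transformers.TrainingArguments` roughly as sketched below; the original training script is not published with this card, so the snippet is illustrative only and the `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="Bio_ClinicalBERT_fold_1_binary_v1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```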
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4168 | 0.7949 |
| 0.3981 | 2.0 | 576 | 0.4124 | 0.8137 |
| 0.3981 | 3.0 | 864 | 0.6691 | 0.8002 |
| 0.1779 | 4.0 | 1152 | 0.8402 | 0.8122 |
| 0.1779 | 5.0 | 1440 | 0.9786 | 0.8007 |
| 0.082 | 6.0 | 1728 | 1.0425 | 0.7981 |
| 0.0332 | 7.0 | 2016 | 1.2070 | 0.8006 |
| 0.0332 | 8.0 | 2304 | 1.3305 | 0.8028 |
| 0.0149 | 9.0 | 2592 | 1.4515 | 0.8009 |
| 0.0149 | 10.0 | 2880 | 1.3826 | 0.8160 |
| 0.01 | 11.0 | 3168 | 1.5267 | 0.7963 |
| 0.01 | 12.0 | 3456 | 1.5158 | 0.8112 |
| 0.0084 | 13.0 | 3744 | 1.5919 | 0.7998 |
| 0.0027 | 14.0 | 4032 | 1.6206 | 0.8060 |
| 0.0027 | 15.0 | 4320 | 1.6861 | 0.8014 |
| 0.0061 | 16.0 | 4608 | 1.6660 | 0.8001 |
| 0.0061 | 17.0 | 4896 | 1.5061 | 0.8054 |
| 0.0133 | 18.0 | 5184 | 1.5813 | 0.8025 |
| 0.0133 | 19.0 | 5472 | 1.6314 | 0.7968 |
| 0.0032 | 20.0 | 5760 | 1.6282 | 0.8117 |
| 0.0007 | 21.0 | 6048 | 1.6378 | 0.8153 |
| 0.0007 | 22.0 | 6336 | 1.6710 | 0.8033 |
| 0.0018 | 23.0 | 6624 | 1.6999 | 0.8066 |
| 0.0018 | 24.0 | 6912 | 1.7045 | 0.8109 |
| 0.0001 | 25.0 | 7200 | 1.7063 | 0.8114 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fbaldassarri/tiiuae_Falcon3-10B-Instruct-autoround-int8-gs128-asym | fbaldassarri | "2025-02-18T16:56:01Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"autoround",
"auto-round",
"intel-autoround",
"woq",
"intel",
"pytorch",
"falcon3",
"conversational",
"en",
"fr",
"es",
"pt",
"base_model:tiiuae/Falcon3-10B-Instruct",
"base_model:quantized:tiiuae/Falcon3-10B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"intel/auto-round",
"region:us"
] | text-generation | "2025-02-17T22:47:35Z" | ---
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
language:
- en
- fr
- es
- pt
pipeline_tag: text-generation
tags:
- causal-lm
- autoround
- auto-round
- intel-autoround
- woq
- intel
- pytorch
- falcon3
model_name: Falcon3 10B Instruct
base_model:
- tiiuae/Falcon3-10B-Instruct
inference: false
library_name: transformers
model_creator: tiiuae
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [tiiuae/Falcon3-10B-Instruct](https://huggingface.co/tiiuae/Falcon3-10B-Instruct) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 128
- Asymmetrical Quantization
- Method WoQ (AutoRound format)
Fast and low memory, 2-3X speedup (slight accuracy drop at W8G128)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.5
Note: this INT8 version of Falcon3-10B-Instruct has been quantized to run inference through CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.5.tar.gz
tar -xvzf v0.4.5.tar.gz
cd auto-round-0.4.5
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "tiiuae/Falcon3-10B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/tiiuae_Falcon3-10B-Instruct-autoround-int8-gs128-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
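### Step 4 Load the Quantized Model for CPU Inference (illustrative)
A minimal sketch of loading the saved checkpoint for CPU inference. It assumes the AutoRound/Transformers integration shipped with auto-round v0.4.x (importing `AutoRoundConfig` to register the `auto_round` format), so treat it as a sketch rather than a verified recipe.
```
from auto_round import AutoRoundConfig  # assumed: registers the auto_round format with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "./AutoRound/tiiuae_Falcon3-10B-Instruct-autoround-int8-gs128-asym"
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-10B-Instruct")

inputs = tokenizer("Explain weight-only quantization in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```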
## License
[Falcon3 License](https://falconllm.tii.ae/falcon-terms-and-conditions.html)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
Mayonnaisu/donut-kompetansebevis-v1 | Mayonnaisu | "2024-07-01T12:06:05Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-07-01T08:50:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
orlcast/layoutxlm-finetuned-xfund-it | orlcast | "2022-10-21T18:07:01Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-10-21T16:57:07Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-it
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0+cu111
- Datasets 2.6.1
- Tokenizers 0.13.1
|
CLMBR/det-adj-noun-transformer-4 | CLMBR | "2024-02-03T20:53:55Z" | 14 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-27T13:56:38Z" | ---
tags:
- generated_from_trainer
model-index:
- name: det-adj-noun-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-adj-noun-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2118 | 0.03 | 76320 | 4.1942 |
| 4.0104 | 1.03 | 152640 | 4.0267 |
| 3.9029 | 0.03 | 228960 | 3.9524 |
| 3.8382 | 1.03 | 305280 | 3.9109 |
| 3.7878 | 0.03 | 381600 | 3.8859 |
| 3.7506 | 1.03 | 457920 | 3.8703 |
| 3.7161 | 0.03 | 534240 | 3.8607 |
| 3.6852 | 1.03 | 610560 | 3.8536 |
| 3.6542 | 0.03 | 686880 | 3.8491 |
| 3.6276 | 0.03 | 763200 | 3.8462 |
| 3.6046 | 0.03 | 839520 | 3.8446 |
| 3.587 | 1.03 | 915840 | 3.8449 |
| 3.5718 | 0.03 | 992160 | 3.8450 |
| 3.5499 | 0.03 | 1068480 | 3.8459 |
| 3.5344 | 1.03 | 1144800 | 3.8458 |
| 3.5158 | 0.03 | 1221120 | 3.8468 |
| 3.4972 | 1.03 | 1297440 | 3.8480 |
| 3.4863 | 0.03 | 1373760 | 3.8487 |
| 3.4704 | 1.03 | 1450080 | 3.8510 |
| 3.4663 | 0.03 | 1526400 | 3.8527 |
| 3.4605 | 0.03 | 1602720 | 3.8529 |
| 3.4528 | 1.03 | 1679040 | 3.8554 |
| 3.4454 | 0.03 | 1755360 | 3.8565 |
| 3.4331 | 1.03 | 1831680 | 3.8578 |
| 3.4187 | 0.03 | 1908000 | 3.8578 |
| 3.4054 | 1.03 | 1984320 | 3.8609 |
| 3.3958 | 0.03 | 2060640 | 3.8603 |
| 3.3855 | 1.03 | 2136960 | 3.8621 |
| 3.3777 | 0.03 | 2213280 | 3.8636 |
| 3.3661 | 1.03 | 2289600 | 3.8650 |
| 3.3542 | 0.03 | 2365920 | 3.8647 |
| 3.3415 | 1.03 | 2442240 | 3.8658 |
| 3.3276 | 0.03 | 2518560 | 3.8661 |
| 3.3186 | 1.03 | 2594880 | 3.8663 |
| 3.3091 | 0.03 | 2671200 | 3.8661 |
| 3.3087 | 1.03 | 2747520 | 3.8655 |
| 3.3022 | 0.03 | 2823840 | 3.8666 |
| 3.3 | 1.03 | 2900160 | 3.8647 |
| 3.2955 | 0.03 | 2976480 | 3.8638 |
| 3.2843 | 1.02 | 3052726 | 3.8622 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RichardErkhov/terry69_-_preference_p0.05_seed42_level2_raremixbatch16-gguf | RichardErkhov | "2025-04-14T20:32:34Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-14T19:14:22Z" |  |
GabCcr99/Clasificador-animales | GabCcr99 | "2022-11-18T14:37:47Z" | 268 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-11-18T14:37:34Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Clasificador-animales
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Clasificador-animales
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### dog

#### snake

#### tiger
 |
Yizhang888/mouse10 | Yizhang888 | "2024-02-19T09:30:13Z" | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-02-19T09:30:10Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK computer mouse
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Yizhang888/mouse10
<Gallery />
## Model description
These are Yizhang888/mouse10 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK computer mouse to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Yizhang888/mouse10/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
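Until the official snippet is filled in, here is a minimal sketch using 🤗 Diffusers' standard SDXL + LoRA loading; the repo id and trigger prompt are taken from this card, everything else is a plain default:
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline and attach the LoRA weights from this repo
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Yizhang888/mouse10")

# Trigger prompt listed in this card
image = pipeline("a photo of TOK computer mouse").images[0]
image.save("mouse.png")
```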
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
rzeydelis/test_trainer | rzeydelis | "2023-11-17T23:55:23Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-17T23:41:19Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3396
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
GitBag/rebel_ultrafeedback_armo_OneBatch_newprob_full_lr_1e-7_eta_1e5_bs_128_1722449463 | GitBag | "2024-07-31T23:57:36Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-31T23:53:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/MaziyarPanahi_-_YamshadowStrangemerges_32_Experiment24Experiment26-8bits | RichardErkhov | "2025-03-19T07:27:04Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-19T07:22:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
YamshadowStrangemerges_32_Experiment24Experiment26 - bnb 8bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Experiment26/
Original model description:
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowStrangemerges_32_Experiment24Experiment26
base_model:
- automerger/YamshadowStrangemerges_32-7B
- automerger/Experiment24Experiment26-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowStrangemerges_32_Experiment24Experiment26
YamshadowStrangemerges_32_Experiment24Experiment26 is a merge of the following models:
* [automerger/YamshadowStrangemerges_32-7B](https://huggingface.co/automerger/YamshadowStrangemerges_32-7B)
* [automerger/Experiment24Experiment26-7B](https://huggingface.co/automerger/Experiment24Experiment26-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Experiment26"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
sinequa/passage-ranker-v1-L-multilingual | sinequa | "2024-02-19T09:42:57Z" | 308 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"en",
"es",
"fr",
"arxiv:1810.04805",
"arxiv:1901.04085",
"arxiv:2102.07033",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-10T10:44:18Z" | ---
language:
- de
- en
- es
- fr
---
# Model Card for `passage-ranker-v1-L-multilingual`
This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
Model name: `passage-ranker-v1-L-multilingual`
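Outside of the Sinequa platform, the checkpoint can in principle be scored like any MonoBERT-style cross-encoder. A minimal sketch, assuming the repo exposes a standard sequence-classification head (the tags list `bert` / `text-classification`); how the logits map to the single relevance score used by Sinequa is not documented here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sinequa/passage-ranker-v1-L-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize one (query, passage) pair and read off the classifier logits
inputs = tokenizer("what does a passage ranker do?",
                   "A passage ranker orders search results by relevance to the query.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```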
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
## Scores
| Metric | Value |
|:--------------------|------:|
| Relevance (NDCG@10) | 0.471 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 31 ms |
| NVIDIA A10 | FP32 | 4 ms | 82 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 364 ms |
| NVIDIA L4 | FP16 | 2 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 124 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that the GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 124 million
- Base language model: Homegrown Sinequa BERT-Base ([Paper](https://arxiv.org/abs/1810.04805)) pretrained in the four
supported languages
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
### Training Data
- Probably-Asked Questions
([Paper](https://arxiv.org/abs/2102.07033),
[Official Page](https://github.com/facebookresearch/PAQ))
- Original English dataset
- Translated datasets for the other three supported languages
### Evaluation Metrics
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | NDCG@10 |
|:------------------|--------:|
| Average | 0.471 |
| | |
| Arguana | 0.583 |
| CLIMATE-FEVER | 0.150 |
| DBPedia Entity | 0.366 |
| FEVER | 0.734 |
| FiQA-2018 | 0.288 |
| HotpotQA | 0.698 |
| MS MARCO | 0.341 |
| NFCorpus | 0.345 |
| NQ | 0.483 |
| Quora | 0.766 |
| SCIDOCS | 0.142 |
| SciFact | 0.654 |
| TREC-COVID | 0.711 |
| Webis-Touche-2020 | 0.334 |
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report metrics for the languages it covers.
| Language | NDCG@10 |
|:---------|--------:|
| French | 0.401 |
| German | 0.396 |
| Spanish | 0.453 |
|
shibajustfor/63c03630-438c-4c1a-ac63-c9aa397874ad | shibajustfor | "2025-02-22T06:19:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T06:06:44Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 63c03630-438c-4c1a-ac63-c9aa397874ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 63c03630-438c-4c1a-ac63-c9aa397874ad
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1113
## Model description
More information needed
## Intended uses & limitations
More information needed
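Since the repository ships LoRA adapter weights for `unsloth/Qwen2.5-0.5B` (see `base_model` above), a minimal loading sketch with 🤗 PEFT; the prompt is only an example:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "shibajustfor/63c03630-438c-4c1a-ac63-c9aa397874ad")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```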
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
uriel353/amouranth-flux | uriel353 | "2025-01-22T13:39:19Z" | 79 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-19T20:02:49Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:amouranth-flux:1> beautiful detailed photograph, red hair cascading
down her shoulders, large breasts, wearing an elegant turtleneck dress,
standing in cafe looking at the viewer, smile
output:
url: images/00059-2369050998.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# amouranth-flux
<Gallery />
## Model description
It's not my model. I just uploaded it here.
https://civitai.com/models/917107/amouranth-flux
## Download model
Weights for this model are available in Safetensors format.
[Download](/uriel353/amouranth-flux/tree/main) them in the Files & versions tab. |
isspek/bert-base-cased_covid_mistral_3_2e-5_16_undersampling_0.1 | isspek | "2024-12-13T09:25:30Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-13T09:25:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
visdata/wld19 | visdata | "2025-03-06T10:33:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-06T10:27:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Setpember/ctrate-albert-base_lora_r1_epsilon001 | Setpember | "2025-04-15T04:22:53Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:albert/albert-base-v2",
"base_model:adapter:albert/albert-base-v2",
"region:us"
] | null | "2025-04-15T04:22:52Z" | ---
base_model: albert/albert-base-v2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
lomalitv/18.EXCLUSIVE.Sapna.Shah.Viral.Video.Original.Leaked.Full.HD.X.Trending.Now | lomalitv | "2025-03-15T21:08:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-15T20:57:21Z" | 📺📱👉◄◄🔴 https://tinyurl.com/9xx97ae3
📺📱👉◄◄🔴 https://tinyurl.com/26xutdrp
Did Neil Nitin Mukesh REALLY Ask Shah Rukh Khan To 'Shut Up'? New York Actor Clarifies
Neil Nitin Mukesh has clarified a viral 2009 Filmfare Awards clip where he seemingly told Shah Rukh
Bbyanni Leak: The Impact On Social Media
The video, which was circulated without her consent, quickly gained traction on social media platforms,
Sapna Choudhary Welcomes Baby Boy With Husband Veer Sahu
Reportedly, singer Babu Maan was invited on stage to reveal the name of the newborn baby, and announced
Haryanvi Sensation Sapna Choudhary Gives Birth to Second Son, Must Watch Video
Online Teaching During COVID-19: Exploration of Challenges and Their Coping Strategies Faced by University Teachers in Pakistan
This study aims to explore the challenges and coping strategies faced by university teachers during the
Shah Rukh Khan's Chaiyya Chaiyya Amps Up Pakistani Wedding; Men In Black Suits Steal The Show In Viral Video
A wedding in Pakistan turned energetic after guests set an energetic dance performance. They grooved to
Sapna Choudhary Welcomes Second Baby Boy, Son Named ‘Shahveer’ in Grand Ceremony
Sapna Choudhary's Second Child Born: There is an atmosphere of happiness again in the house of
‘Unke gaane gaake humari life chal rahi hai’: Chaiyya Chaiyya singer Sapna Awasthi says Shah Rukh Khan brought her luc
Shah Rukh Khan's Lookalike Poses With People In Actor's 90s Look, Viral Video Left Netizens Stunned
A video of Shah Rukh Khan's lookalike named Suraj Kumar is currently doing rounds on the internet as
Shah Rukh Khan’s “Chaiyya Chaiyya” Enchants Guests at Pakistani Wedding
A video from a Pakistani wedding featuring a group of men dancing to the tune of the iconic Hindi song |
HireGenius/employment_parser | HireGenius | "2025-02-20T20:47:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-02-20T20:46:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
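In the absence of an official snippet, a minimal sketch of running the checkpoint through the standard token-classification pipeline might look like the following; the label set and intended input format are not documented, so the example sentence is only illustrative.

```python
from transformers import pipeline

# Assumes the repo ships a tokenizer and a token-classification head; the label set is undocumented.
parser = pipeline(
    "token-classification",
    model="HireGenius/employment_parser",
    aggregation_strategy="simple",
)

print(parser("Senior Data Engineer at Acme Corp, 2019-2023, Toronto."))
```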
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eun0503/safecall-model | Eun0503 | "2024-05-17T11:56:22Z" | 97 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:Eun0503/safecall-preprocessed",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-17T09:40:55Z" | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-base
datasets:
- Eun0503/safecall-preprocessed
model-index:
- name: 음성인식 for safecall
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 음성인식 for safecall
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the 저음질 전화망 음성인식 데이터 (low-quality telephone-network Korean speech recognition) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- Cer: 54.7929
## Model description
More information needed
## Intended uses & limitations
More information needed
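No usage example is provided yet; as a minimal sketch, transcription with the standard `transformers` ASR pipeline might look like the following (the audio file name is hypothetical and a 16 kHz mono input is assumed).

```python
from transformers import pipeline

# Fine-tuned Whisper-base for Korean telephone-quality speech.
asr = pipeline("automatic-speech-recognition", model="Eun0503/safecall-model")

result = asr("call_sample.wav")  # hypothetical local audio file, assumed 16 kHz mono
print(result["text"])
```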
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0296 | 10.0 | 1000 | 0.4638 | 54.7929 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF | Nexesenex | "2025-03-02T10:27:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1",
"base_model:quantized:Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-02T10:24:46Z" | ---
base_model: Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF
This model was converted to GGUF format from [`Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1`](https://huggingface.co/Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF --hf-file qwen_2.5_coder_32b_qwq_r1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF --hf-file qwen_2.5_coder_32b_qwq_r1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF --hf-file qwen_2.5_coder_32b_qwq_r1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nexesenex/Qwen_2.5_Coder_32B_QwQ_R1-Q8_0-GGUF --hf-file qwen_2.5_coder_32b_qwq_r1-q8_0.gguf -c 2048
```
|
abc88767/22c75 | abc88767 | "2024-05-16T01:54:43Z" | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-16T01:53:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
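In the absence of an official snippet, a minimal sketch of loading this StableLM-architecture checkpoint for text generation might look like the following; the precision and prompt are assumptions, since nothing is documented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "abc88767/22c75"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumed precision
    device_map="auto",           # requires `accelerate`
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```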
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
relife2/alice-gemma-3-27b | relife2 | "2025-04-15T20:49:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-15T20:48:45Z" | ---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** relife2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Bachhoang/Continuous-FT-vbd-merge | Bachhoang | "2024-05-06T02:34:15Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-05T08:11:07Z" | |
ljgries/my_eli5_clm_model_v2 | ljgries | "2024-04-27T16:55:19Z" | 144 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-27T16:23:59Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_eli5_clm_model_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_eli5_clm_model_v2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0285
## Model description
More information needed
## Intended uses & limitations
More information needed
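No usage notes are given; as a minimal sketch, sampling from this GPT-2 fine-tune might look like the following (the generation settings are illustrative, not taken from the card).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ljgries/my_eli5_clm_model_v2")
out = generator("Why is the sky blue?", max_new_tokens=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```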
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5395 | 1.0 | 1389 | 6.2651 |
| 6.1463 | 2.0 | 2778 | 6.0841 |
| 6.0381 | 3.0 | 4167 | 6.0285 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
John6666/grilled-lamprey-sdxl-v37o2n2-sdxl | John6666 | "2024-12-23T06:51:52Z" | 50 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"adult",
"2.5D",
"cross dressing",
"nasolabial folds",
"wrinkled skin",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-28T05:50:17Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- adult
- 2.5D
- cross dressing
- nasolabial folds
- wrinkled skin
---
Original model is [here](https://civitai.com/models/56039/grilledlamprey?modelVersionId=1097084).
This model was created by [Liquidn2](https://civitai.com/user/Liquidn2).
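A minimal sketch of loading the checkpoint with `diffusers` might look like the following; fp16 weights and a CUDA device are assumptions about the environment, and the prompt is illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/grilled-lamprey-sdxl-v37o2n2-sdxl",
    torch_dtype=torch.float16,  # assumed; use float32 on CPU
).to("cuda")

image = pipe("1girl, upper body, looking at viewer, anime style").images[0]
image.save("sample.png")
```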
|
hitachi-nlp/bert-base-japanese_mecab-wordpiece | hitachi-nlp | "2025-03-06T13:24:06Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-06-14T06:59:57Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (MeCab + WordPiece)
===
## How to load the tokenizer
Please download the dictionary file for MeCab + WordPiece from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/mecab_wordpiece.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
from MeCab import Tagger
import textspan
class MecabPreTokenizer:
    def __init__(self, mecab_dict_path: Optional[str] = None):
        mecab_option = (f"-Owakati -d {mecab_dict_path}" if mecab_dict_path is not None else "-Owakati")
        self.mecab = Tagger(mecab_option)

    def tokenize(self, sequence: str) -> list[str]:
        return self.mecab.parse(sequence).strip().split(" ")

    def custom_split(self, i: int, normalized_string: NormalizedString) -> list[NormalizedString]:
        text = str(normalized_string)
        tokens = self.tokenize(text)
        tokens_spans = textspan.get_original_spans(tokens, text)
        return [normalized_string[st:ed] for char_spans in tokens_spans for st, ed in char_spans]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.custom_split)

# load a tokenizer
dict_path = "/path/to/mecab_wordpiece.json"  # path to the downloaded dictionary file
tokenizer = Tokenizer.from_file(dict_path)

# load a pre-tokenizer
pre_tokenizer = MecabPreTokenizer()
tokenizer.post_processor = BertProcessing(
    cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
    sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)

# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token='[UNK]',
    cls_token='[CLS]',
    sep_token='[SEP]',
    pad_token='[PAD]',
    mask_token='[MASK]'
)

# set a pre-tokenizer
tokenizer._tokenizer.pre_tokenizer = PreTokenizer.custom(pre_tokenizer)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こ','##ん','##に','##ち','##は','。','私','は','形態','##素','解','##析','器','について','研究','を','し','て','い','ます','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_mecab-wordpiece")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
AIFunOver/Qwen2.5-Coder-7B-Instruct-openvino-fp16 | AIFunOver | "2024-11-13T07:42:42Z" | 52 | 0 | transformers | [
"transformers",
"safetensors",
"openvino",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"nncf",
"fp16",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T02:42:34Z" | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- openvino
- nncf
- fp16
---
This model is a quantized version of [`Qwen/Qwen2.5-Coder-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
First make sure you have `optimum-intel` installed:
```bash
pip install optimum[openvino]
```
To load your model you can do as follows:
```python
from optimum.intel import OVModelForCausalLM
model_id = "AIFunOver/Qwen2.5-Coder-7B-Instruct-openvino-fp16"
model = OVModelForCausalLM.from_pretrained(model_id)
```
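A minimal generation sketch on top of the snippet above (assuming the repo also ships the Qwen tokenizer files and preserves the original chat template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# OVModelForCausalLM exposes the usual generate() API.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```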
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_all_Seed101 | behzadnet | "2024-01-10T14:55:06Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2024-01-10T01:53:44Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
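In the absence of an official snippet, a minimal sketch of loading the adapter on its base model might look like the following. The 4-bit settings mirror the `bitsandbytes` configuration listed under "Training procedure" below; using them at inference time is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_all_Seed101"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter from this repo
```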
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Myashka/gpt-imdb-ipo-beta_0.3 | Myashka | "2023-12-09T16:50:23Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:lvwerra/gpt2-imdb",
"base_model:finetune:lvwerra/gpt2-imdb",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-09T08:13:05Z" | ---
base_model: lvwerra/gpt2-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt-imdb-ipo-beta_0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-imdb-ipo-beta_0.3
This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8601
- Rewards/chosen: -0.2473
- Rewards/rejected: -0.6141
- Rewards/accuracies: 0.8271
- Rewards/margins: 0.3668
- Logps/rejected: -265.7321
- Logps/chosen: -236.0896
- Logits/rejected: -31.6527
- Logits/chosen: -31.7977
## Model description
More information needed
## Intended uses & limitations
More information needed
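No usage notes are given; since this is a GPT-2-sized model tuned on IMDB-style text, a minimal sampling sketch might look like the following (prompt and settings are illustrative).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Myashka/gpt-imdb-ipo-beta_0.3")
print(generator("This movie was", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```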
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 150
- training_steps: 7197
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 5.822 | 0.21 | 500 | 19.5830 | -0.0268 | -0.3320 | 0.6708 | 0.3052 | -264.7920 | -235.3544 | -33.5002 | -33.8198 |
| 6.8677 | 0.42 | 1000 | 18.7557 | -0.0552 | -0.3293 | 0.5917 | 0.2741 | -264.7829 | -235.4492 | -35.5852 | -35.8178 |
| 12.3698 | 0.63 | 1500 | 36.0453 | -0.1426 | -0.5467 | 0.6771 | 0.4041 | -265.5075 | -235.7406 | -34.3816 | -34.5936 |
| 7.8347 | 0.83 | 2000 | 38.2624 | -0.0799 | -0.3485 | 0.6500 | 0.2687 | -264.8470 | -235.5314 | -33.2874 | -33.4310 |
| 9.184 | 1.04 | 2500 | 14.9546 | -0.3389 | -0.7127 | 0.6875 | 0.3739 | -266.0610 | -236.3948 | -32.7912 | -32.9463 |
| 11.1603 | 1.25 | 3000 | 15.5236 | -0.0513 | -0.3736 | 0.7000 | 0.3223 | -264.9306 | -235.4362 | -33.3399 | -33.5624 |
| 16.5516 | 1.46 | 3500 | 8.6118 | -0.1177 | -0.5526 | 0.7438 | 0.4349 | -265.5274 | -235.6576 | -31.9816 | -32.1630 |
| 5.2761 | 1.67 | 4000 | 5.2168 | -0.1495 | -0.5364 | 0.7417 | 0.3869 | -265.4733 | -235.7637 | -32.2719 | -32.3991 |
| 2.9326 | 1.88 | 4500 | 4.2332 | -0.2284 | -0.6043 | 0.7646 | 0.3759 | -265.6996 | -236.0266 | -32.0240 | -32.1547 |
| 2.9814 | 2.08 | 5000 | 3.3498 | -0.2188 | -0.6063 | 0.7792 | 0.3874 | -265.7062 | -235.9947 | -31.8376 | -31.9728 |
| 1.8651 | 2.29 | 5500 | 2.8900 | -0.2624 | -0.6313 | 0.7896 | 0.3688 | -265.7895 | -236.1400 | -31.4502 | -31.5973 |
| 4.5849 | 2.5 | 6000 | 2.2055 | -0.2771 | -0.6338 | 0.7833 | 0.3567 | -265.7979 | -236.1888 | -31.5011 | -31.6468 |
| 1.7322 | 2.71 | 6500 | 1.9194 | -0.2534 | -0.6145 | 0.8208 | 0.3611 | -265.7336 | -236.1099 | -31.6632 | -31.8054 |
| 1.1697 | 2.92 | 7000 | 1.8601 | -0.2473 | -0.6141 | 0.8271 | 0.3668 | -265.7321 | -236.0896 | -31.6527 | -31.7977 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
whirlie/taxiv3 | whirlie | "2025-02-13T14:55:47Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-13T14:55:41Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is not a library import: it is the helper defined in the notebook/script
# that produced this model (it fetches and unpickles the saved model dict from the Hub).
model = load_from_hub(repo_id="whirlie/taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # assumes `gym` (or gymnasium imported as gym) is available
```
|
KubiakJakub01/dqn-SpaceInvadersNoFrameskip-v4 | KubiakJakub01 | "2023-02-18T18:16:02Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-18T18:15:16Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 571.50 +/- 248.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KubiakJakub01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KubiakJakub01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga KubiakJakub01
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Saxo/Linkbricks-Horizon-AI-Korean-Pro-22B | Saxo | "2025-03-27T08:37:58Z" | 1,872 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"en",
"jp",
"cn",
"dataset:Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset",
"dataset:Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset",
"dataset:Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled",
"dataset:Saxo/ko-news-corpus-1",
"dataset:Saxo/ko-news-corpus-2",
"dataset:Saxo/ko-news-corpus-3",
"dataset:Saxo/ko-news-corpus-4",
"dataset:Saxo/ko-news-corpus-5",
"dataset:Saxo/ko-news-corpus-6",
"dataset:Saxo/ko-news-corpus-7",
"dataset:Saxo/ko-news-corpus-8",
"dataset:Saxo/ko-news-corpus-9",
"dataset:maywell/ko_Ultrafeedback_binarized",
"dataset:youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo",
"dataset:lilacai/glaive-function-calling-v2-sharegpt",
"dataset:kuotient/gsm8k-ko",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:finetune:mistralai/Mistral-Small-Instruct-2409",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T04:31:16Z" | ---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-Small-Instruct-2409
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- Saxo/ko-news-corpus-1
- Saxo/ko-news-corpus-2
- Saxo/ko-news-corpus-3
- Saxo/ko-news-corpus-4
- Saxo/ko-news-corpus-5
- Saxo/ko-news-corpus-6
- Saxo/ko-news-corpus-7
- Saxo/ko-news-corpus-8
- Saxo/ko-news-corpus-9
- maywell/ko_Ultrafeedback_binarized
- youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- lilacai/glaive-function-calling-v2-sharegpt
- kuotient/gsm8k-ko
language:
- ko
- en
- jp
- cn
pipeline_tag: text-generation
---
# Model Card for Model ID
<div align="center">
<img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
</div>
AI 전문 기업인 Linkbricks Horizon-AI 의 데이터사이언티스트인 지윤성(Saxo) 대표가 <br>
Mistral-Small-Instruct-2409 베이스모델을 사용해서 H100-80G 8개를 통해 약 35%정도의 파라미터를 한국어 CPT(Continued-Pretraining)->SFT->DPO 한 한글 언어 모델<br>
9천만건의 한글 뉴스 코퍼스를 기준으로 다양한 테스크별 한국어-중국어-영어-일본어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br>
-토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br>
-고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br>
-32k 시퀀스 길이<br>
-펑션콜 지원<br>
-Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br>
<br><br>
Finetuned by CEO Yunsung Ji (Saxo), a data scientist at Linkbricks Horizon-AI, a company specializing in AI and big data analytics <br>
about 35% of total parameters Korean CPT(Continued-Pretraining)->SFT->DPO training model based on Mistral-Small-Instruct-2409 through 8 H100-80Gs as a Korean language model <br>
It is a model that has been trained to handle Korean-Chinese-English-Japanese cross-training data and 90M korean news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br>
-Tokenizer uses the base model without word expansion<br>
-Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br>
-32k sequence length<br>
-Function calling<br>
-Deepspeed Stage=3, use rslora and BAdam Layer Mode<br>
<br><br>
<a href="www.horizonai.ai">www.horizonai.ai</a>, <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
|
SalmanFaroz/Meta-Llama-3-8B-Instruct-GPTQ | SalmanFaroz | "2024-05-31T08:32:33Z" | 81 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-30T18:23:44Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Details (GPTQ 4bit)
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
#### Example code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "SalmanFaroz/Meta-Llama-3-8B-Instruct-GPTQ"

# Load the GPTQ-quantized checkpoint (requires the auto-gptq / optimum stack and a CUDA GPU)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
# Note: this prompt template follows the Llama 2 [INST]/<<SYS>> convention used in the original card;
# Llama 3 Instruct tokenizers also ship their own chat template (tokenizer.apply_chat_template).
prompt_template = f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''

print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
``` |
deepnet/SN29-C00-llama-HK8Nw-2 | deepnet | "2024-11-26T07:22:08Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-26T07:18:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
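Since the repository is tagged `llama` and `text-generation`, a hypothetical quick-start sketch follows; it is not provided by the model authors, and everything below other than the repo id is an assumption:

```python
from transformers import pipeline

# Hypothetical usage sketch, inferred only from the repo's "llama" and "text-generation" tags,
# not from any documentation by the model authors.
generator = pipeline("text-generation", model="deepnet/SN29-C00-llama-HK8Nw-2", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```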
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/wu981526092_-_MP10-gguf | RichardErkhov | "2025-03-19T07:08:13Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T07:04:45Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MP10 - GGUF
- Model creator: https://huggingface.co/wu981526092/
- Original model: https://huggingface.co/wu981526092/MP10/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MP10.Q2_K.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q2_K.gguf) | Q2_K | 0.08GB |
| [MP10.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [MP10.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [MP10.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [MP10.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [MP10.Q3_K.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q3_K.gguf) | Q3_K | 0.09GB |
| [MP10.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [MP10.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [MP10.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [MP10.Q4_0.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q4_0.gguf) | Q4_0 | 0.1GB |
| [MP10.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [MP10.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [MP10.Q4_K.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q4_K.gguf) | Q4_K | 0.11GB |
| [MP10.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [MP10.Q4_1.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q4_1.gguf) | Q4_1 | 0.11GB |
| [MP10.Q5_0.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q5_0.gguf) | Q5_0 | 0.11GB |
| [MP10.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [MP10.Q5_K.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q5_K.gguf) | Q5_K | 0.12GB |
| [MP10.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [MP10.Q5_1.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q5_1.gguf) | Q5_1 | 0.12GB |
| [MP10.Q6_K.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q6_K.gguf) | Q6_K | 0.13GB |
| [MP10.Q8_0.gguf](https://huggingface.co/RichardErkhov/wu981526092_-_MP10-gguf/blob/main/MP10.Q8_0.gguf) | Q8_0 | 0.17GB |
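The card does not show how to load these files. A minimal, hedged sketch using `llama-cpp-python` (one of several GGUF runtimes; the quant chosen below is an arbitrary pick from the table above, and the generation settings are assumptions):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quantized files listed above; Q4_K_M is a common size/quality trade-off.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/wu981526092_-_MP10-gguf",
    filename="MP10.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Briefly explain what GGUF quantization is.", max_tokens=64)
print(out["choices"][0]["text"])
```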
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
porpaul/t5-small-finetuned-xsum | porpaul | "2022-01-16T06:59:38Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: chinese_traditional
metrics:
- name: Rouge1
type: rouge
value: 0.5217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2188
- Rouge1: 0.5217
- Rouge2: 0.0464
- Rougel: 0.527
- Rougelsum: 0.5215
- Gen Len: 6.7441
## Model description
More information needed
## Intended uses & limitations
More information needed
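A hedged usage sketch, inferred only from the base model (t5-small) and the xlsum summarization task reported below; the `"summarize: "` prefix is the usual t5-small convention and, like the generation settings, is an assumption rather than the authors' documented recipe:

```python
from transformers import pipeline

# Hypothetical inference sketch; the card does not document usage.
summarizer = pipeline("summarization", model="porpaul/t5-small-finetuned-xsum")
article = "..."  # a traditional-Chinese news article, matching the xlsum subset used for fine-tuning
print(summarizer("summarize: " + article, max_length=48, min_length=8)[0]["summary_text"])
```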
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.3831 | 1.0 | 7475 | 1.2188 | 0.5217 | 0.0464 | 0.527 | 0.5215 | 6.7441 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ItchyChin/OrpoLlama-3-8B-memorize | ItchyChin | "2024-06-26T21:50:51Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-24T21:20:00Z" | ---
library_name: transformers
tags:
- trl
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GreenBitAI/Llama-2-13B-channel-mix-bpw-3.0 | GreenBitAI | "2024-04-29T21:51:25Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-07T18:53:53Z" | ---
license: apache-2.0
---
# GreenBit LLMs
This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
glacialfire/detr-finetuned-balloon-v2 | glacialfire | "2024-07-13T20:28:19Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-07-13T15:24:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k8_task3_organization | MayBashendy | "2024-12-12T03:40:31Z" | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-12T03:33:59Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k8_task3_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k8_task3_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6947
- Qwk: 0.1675
- Mse: 0.6947
- Rmse: 0.8335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0556 | 2 | 3.7111 | 0.0 | 3.7111 | 1.9264 |
| No log | 0.1111 | 4 | 2.4117 | -0.0163 | 2.4117 | 1.5530 |
| No log | 0.1667 | 6 | 1.4622 | 0.0255 | 1.4622 | 1.2092 |
| No log | 0.2222 | 8 | 1.0519 | 0.0632 | 1.0519 | 1.0256 |
| No log | 0.2778 | 10 | 0.6496 | 0.1020 | 0.6496 | 0.8060 |
| No log | 0.3333 | 12 | 0.6119 | 0.0569 | 0.6119 | 0.7822 |
| No log | 0.3889 | 14 | 0.6217 | 0.0569 | 0.6217 | 0.7884 |
| No log | 0.4444 | 16 | 0.7361 | 0.1475 | 0.7361 | 0.8579 |
| No log | 0.5 | 18 | 0.6389 | 0.1030 | 0.6389 | 0.7993 |
| No log | 0.5556 | 20 | 0.6311 | 0.0569 | 0.6311 | 0.7944 |
| No log | 0.6111 | 22 | 0.6515 | 0.0 | 0.6515 | 0.8072 |
| No log | 0.6667 | 24 | 0.5680 | 0.0569 | 0.5680 | 0.7537 |
| No log | 0.7222 | 26 | 0.8891 | 0.0823 | 0.8891 | 0.9429 |
| No log | 0.7778 | 28 | 0.9588 | 0.0617 | 0.9588 | 0.9792 |
| No log | 0.8333 | 30 | 0.7530 | 0.1712 | 0.7530 | 0.8678 |
| No log | 0.8889 | 32 | 0.5959 | 0.0303 | 0.5959 | 0.7720 |
| No log | 0.9444 | 34 | 0.6484 | 0.0 | 0.6484 | 0.8052 |
| No log | 1.0 | 36 | 0.7096 | 0.0 | 0.7096 | 0.8424 |
| No log | 1.0556 | 38 | 0.6402 | 0.0 | 0.6402 | 0.8001 |
| No log | 1.1111 | 40 | 0.6079 | 0.0222 | 0.6079 | 0.7797 |
| No log | 1.1667 | 42 | 0.7106 | 0.1638 | 0.7106 | 0.8430 |
| No log | 1.2222 | 44 | 0.7749 | 0.0918 | 0.7749 | 0.8803 |
| No log | 1.2778 | 46 | 0.6087 | 0.1111 | 0.6087 | 0.7802 |
| No log | 1.3333 | 48 | 0.6242 | 0.0 | 0.6242 | 0.7901 |
| No log | 1.3889 | 50 | 0.6590 | 0.0 | 0.6590 | 0.8118 |
| No log | 1.4444 | 52 | 0.6369 | 0.0909 | 0.6369 | 0.7981 |
| No log | 1.5 | 54 | 0.7216 | 0.0409 | 0.7216 | 0.8495 |
| No log | 1.5556 | 56 | 1.1468 | 0.0888 | 1.1468 | 1.0709 |
| No log | 1.6111 | 58 | 1.0264 | 0.0357 | 1.0264 | 1.0131 |
| No log | 1.6667 | 60 | 0.7527 | 0.1195 | 0.7527 | 0.8676 |
| No log | 1.7222 | 62 | 0.8996 | 0.0045 | 0.8996 | 0.9485 |
| No log | 1.7778 | 64 | 0.9511 | -0.0396 | 0.9511 | 0.9753 |
| No log | 1.8333 | 66 | 0.8171 | 0.1186 | 0.8171 | 0.9039 |
| No log | 1.8889 | 68 | 0.8384 | 0.0417 | 0.8384 | 0.9156 |
| No log | 1.9444 | 70 | 0.7068 | -0.0115 | 0.7068 | 0.8407 |
| No log | 2.0 | 72 | 1.1652 | 0.0040 | 1.1652 | 1.0794 |
| No log | 2.0556 | 74 | 1.5367 | -0.0323 | 1.5367 | 1.2396 |
| No log | 2.1111 | 76 | 0.9921 | 0.0442 | 0.9921 | 0.9960 |
| No log | 2.1667 | 78 | 0.6773 | 0.2169 | 0.6773 | 0.8230 |
| No log | 2.2222 | 80 | 0.6838 | 0.1919 | 0.6838 | 0.8269 |
| No log | 2.2778 | 82 | 0.7907 | 0.1579 | 0.7907 | 0.8892 |
| No log | 2.3333 | 84 | 0.9282 | 0.0044 | 0.9282 | 0.9634 |
| No log | 2.3889 | 86 | 1.3364 | 0.1049 | 1.3364 | 1.1560 |
| No log | 2.4444 | 88 | 1.3189 | 0.1304 | 1.3189 | 1.1484 |
| No log | 2.5 | 90 | 0.6844 | 0.3371 | 0.6844 | 0.8273 |
| No log | 2.5556 | 92 | 0.6332 | 0.3333 | 0.6332 | 0.7957 |
| No log | 2.6111 | 94 | 0.7316 | 0.2332 | 0.7316 | 0.8554 |
| No log | 2.6667 | 96 | 1.5324 | 0.1084 | 1.5324 | 1.2379 |
| No log | 2.7222 | 98 | 1.7160 | 0.0659 | 1.7160 | 1.3100 |
| No log | 2.7778 | 100 | 0.8899 | 0.1111 | 0.8899 | 0.9433 |
| No log | 2.8333 | 102 | 0.8783 | 0.1712 | 0.8783 | 0.9372 |
| No log | 2.8889 | 104 | 1.1429 | 0.1571 | 1.1429 | 1.0691 |
| No log | 2.9444 | 106 | 0.7111 | 0.1675 | 0.7111 | 0.8433 |
| No log | 3.0 | 108 | 1.0955 | 0.0406 | 1.0955 | 1.0467 |
| No log | 3.0556 | 110 | 1.5839 | 0.0788 | 1.5839 | 1.2585 |
| No log | 3.1111 | 112 | 1.3248 | 0.0278 | 1.3248 | 1.1510 |
| No log | 3.1667 | 114 | 0.7078 | 0.2195 | 0.7078 | 0.8413 |
| No log | 3.2222 | 116 | 0.7698 | 0.0833 | 0.7698 | 0.8774 |
| No log | 3.2778 | 118 | 0.8155 | 0.0680 | 0.8155 | 0.9030 |
| No log | 3.3333 | 120 | 0.6633 | 0.1801 | 0.6633 | 0.8145 |
| No log | 3.3889 | 122 | 0.9323 | 0.1504 | 0.9323 | 0.9655 |
| No log | 3.4444 | 124 | 0.9197 | 0.1504 | 0.9197 | 0.9590 |
| No log | 3.5 | 126 | 0.7194 | 0.1345 | 0.7194 | 0.8482 |
| No log | 3.5556 | 128 | 0.9082 | 0.0685 | 0.9082 | 0.9530 |
| No log | 3.6111 | 130 | 0.9033 | 0.0631 | 0.9033 | 0.9504 |
| No log | 3.6667 | 132 | 0.7376 | 0.1732 | 0.7376 | 0.8589 |
| No log | 3.7222 | 134 | 0.9603 | 0.0769 | 0.9603 | 0.9799 |
| No log | 3.7778 | 136 | 0.8652 | 0.1287 | 0.8652 | 0.9301 |
| No log | 3.8333 | 138 | 0.7199 | 0.1732 | 0.7199 | 0.8485 |
| No log | 3.8889 | 140 | 0.7234 | 0.1364 | 0.7234 | 0.8505 |
| No log | 3.9444 | 142 | 0.7667 | 0.1828 | 0.7667 | 0.8756 |
| No log | 4.0 | 144 | 0.7989 | 0.1429 | 0.7989 | 0.8938 |
| No log | 4.0556 | 146 | 0.8139 | 0.1443 | 0.8139 | 0.9022 |
| No log | 4.1111 | 148 | 0.8245 | 0.1527 | 0.8245 | 0.9080 |
| No log | 4.1667 | 150 | 0.8026 | 0.1515 | 0.8026 | 0.8959 |
| No log | 4.2222 | 152 | 0.7831 | 0.2350 | 0.7831 | 0.8849 |
| No log | 4.2778 | 154 | 0.9611 | 0.0744 | 0.9611 | 0.9804 |
| No log | 4.3333 | 156 | 1.3294 | 0.1351 | 1.3294 | 1.1530 |
| No log | 4.3889 | 158 | 1.1538 | 0.0929 | 1.1538 | 1.0742 |
| No log | 4.4444 | 160 | 0.7771 | 0.2897 | 0.7771 | 0.8815 |
| No log | 4.5 | 162 | 0.7825 | 0.2838 | 0.7825 | 0.8846 |
| No log | 4.5556 | 164 | 0.9562 | 0.2069 | 0.9562 | 0.9778 |
| No log | 4.6111 | 166 | 1.3205 | 0.1560 | 1.3205 | 1.1491 |
| No log | 4.6667 | 168 | 1.1807 | 0.1572 | 1.1807 | 1.0866 |
| No log | 4.7222 | 170 | 0.7151 | 0.2251 | 0.7151 | 0.8456 |
| No log | 4.7778 | 172 | 0.6254 | 0.3023 | 0.6254 | 0.7908 |
| No log | 4.8333 | 174 | 0.6918 | 0.2360 | 0.6918 | 0.8318 |
| No log | 4.8889 | 176 | 0.8978 | 0.1864 | 0.8978 | 0.9475 |
| No log | 4.9444 | 178 | 0.7463 | 0.2941 | 0.7463 | 0.8639 |
| No log | 5.0 | 180 | 0.6126 | 0.3103 | 0.6126 | 0.7827 |
| No log | 5.0556 | 182 | 0.5966 | 0.3103 | 0.5966 | 0.7724 |
| No log | 5.1111 | 184 | 0.6572 | 0.2350 | 0.6572 | 0.8107 |
| No log | 5.1667 | 186 | 0.7763 | 0.2453 | 0.7763 | 0.8811 |
| No log | 5.2222 | 188 | 1.1121 | 0.1882 | 1.1121 | 1.0545 |
| No log | 5.2778 | 190 | 0.9522 | 0.2129 | 0.9522 | 0.9758 |
| No log | 5.3333 | 192 | 0.7115 | 0.1921 | 0.7115 | 0.8435 |
| No log | 5.3889 | 194 | 0.6371 | 0.3043 | 0.6371 | 0.7982 |
| No log | 5.4444 | 196 | 0.6441 | 0.3297 | 0.6441 | 0.8026 |
| No log | 5.5 | 198 | 0.7819 | 0.1373 | 0.7819 | 0.8842 |
| No log | 5.5556 | 200 | 1.0166 | 0.2119 | 1.0166 | 1.0083 |
| No log | 5.6111 | 202 | 0.9517 | 0.2126 | 0.9517 | 0.9755 |
| No log | 5.6667 | 204 | 0.7567 | 0.1373 | 0.7567 | 0.8699 |
| No log | 5.7222 | 206 | 0.7618 | 0.1269 | 0.7618 | 0.8728 |
| No log | 5.7778 | 208 | 0.8680 | 0.1781 | 0.8680 | 0.9317 |
| No log | 5.8333 | 210 | 0.9054 | 0.1781 | 0.9054 | 0.9515 |
| No log | 5.8889 | 212 | 0.7296 | 0.1915 | 0.7296 | 0.8542 |
| No log | 5.9444 | 214 | 0.7102 | 0.1398 | 0.7102 | 0.8427 |
| No log | 6.0 | 216 | 0.8339 | 0.1287 | 0.8339 | 0.9132 |
| No log | 6.0556 | 218 | 1.1690 | 0.1601 | 1.1690 | 1.0812 |
| No log | 6.1111 | 220 | 1.3049 | 0.1634 | 1.3049 | 1.1423 |
| No log | 6.1667 | 222 | 1.0334 | 0.1815 | 1.0334 | 1.0166 |
| No log | 6.2222 | 224 | 0.7347 | 0.2410 | 0.7347 | 0.8571 |
| No log | 6.2778 | 226 | 0.6372 | 0.2558 | 0.6372 | 0.7982 |
| No log | 6.3333 | 228 | 0.6287 | 0.3455 | 0.6287 | 0.7929 |
| No log | 6.3889 | 230 | 0.6358 | 0.1902 | 0.6358 | 0.7974 |
| No log | 6.4444 | 232 | 0.7067 | 0.125 | 0.7067 | 0.8407 |
| No log | 6.5 | 234 | 0.7805 | 0.1373 | 0.7805 | 0.8834 |
| No log | 6.5556 | 236 | 0.8392 | 0.1388 | 0.8392 | 0.9161 |
| No log | 6.6111 | 238 | 0.8276 | 0.1321 | 0.8276 | 0.9098 |
| No log | 6.6667 | 240 | 0.8603 | 0.1321 | 0.8603 | 0.9275 |
| No log | 6.7222 | 242 | 0.9161 | 0.1781 | 0.9161 | 0.9571 |
| No log | 6.7778 | 244 | 0.8036 | 0.1321 | 0.8036 | 0.8965 |
| No log | 6.8333 | 246 | 0.7088 | 0.1475 | 0.7088 | 0.8419 |
| No log | 6.8889 | 248 | 0.7146 | 0.1088 | 0.7146 | 0.8453 |
| No log | 6.9444 | 250 | 0.7385 | 0.1340 | 0.7385 | 0.8594 |
| No log | 7.0 | 252 | 0.7377 | 0.1340 | 0.7377 | 0.8589 |
| No log | 7.0556 | 254 | 0.7251 | 0.1269 | 0.7251 | 0.8515 |
| No log | 7.1111 | 256 | 0.6848 | 0.1828 | 0.6848 | 0.8275 |
| No log | 7.1667 | 258 | 0.6769 | 0.2688 | 0.6769 | 0.8228 |
| No log | 7.2222 | 260 | 0.6956 | 0.3333 | 0.6956 | 0.8340 |
| No log | 7.2778 | 262 | 0.7275 | 0.1340 | 0.7275 | 0.8529 |
| No log | 7.3333 | 264 | 0.8131 | 0.1456 | 0.8131 | 0.9017 |
| No log | 7.3889 | 266 | 0.9727 | 0.1795 | 0.9727 | 0.9863 |
| No log | 7.4444 | 268 | 0.9668 | 0.1795 | 0.9668 | 0.9833 |
| No log | 7.5 | 270 | 0.8493 | 0.0189 | 0.8493 | 0.9216 |
| No log | 7.5556 | 272 | 0.7374 | 0.1269 | 0.7374 | 0.8587 |
| No log | 7.6111 | 274 | 0.7209 | 0.3016 | 0.7209 | 0.8491 |
| No log | 7.6667 | 276 | 0.7249 | 0.3089 | 0.7249 | 0.8514 |
| No log | 7.7222 | 278 | 0.7048 | 0.2766 | 0.7048 | 0.8395 |
| No log | 7.7778 | 280 | 0.6818 | 0.3369 | 0.6818 | 0.8257 |
| No log | 7.8333 | 282 | 0.6630 | 0.2179 | 0.6630 | 0.8143 |
| No log | 7.8889 | 284 | 0.7102 | 0.1675 | 0.7102 | 0.8427 |
| No log | 7.9444 | 286 | 0.7658 | 0.0980 | 0.7658 | 0.8751 |
| No log | 8.0 | 288 | 0.7375 | 0.1287 | 0.7375 | 0.8588 |
| No log | 8.0556 | 290 | 0.6749 | 0.2165 | 0.6749 | 0.8215 |
| No log | 8.1111 | 292 | 0.6572 | 0.2179 | 0.6572 | 0.8107 |
| No log | 8.1667 | 294 | 0.6607 | 0.2179 | 0.6607 | 0.8128 |
| No log | 8.2222 | 296 | 0.6851 | 0.2536 | 0.6851 | 0.8277 |
| No log | 8.2778 | 298 | 0.7691 | 0.1402 | 0.7691 | 0.8770 |
| No log | 8.3333 | 300 | 0.8190 | 0.1416 | 0.8190 | 0.9050 |
| No log | 8.3889 | 302 | 0.7985 | 0.1402 | 0.7985 | 0.8936 |
| No log | 8.4444 | 304 | 0.7887 | 0.1402 | 0.7887 | 0.8881 |
| No log | 8.5 | 306 | 0.7871 | 0.1402 | 0.7871 | 0.8872 |
| No log | 8.5556 | 308 | 0.7521 | 0.0980 | 0.7521 | 0.8672 |
| No log | 8.6111 | 310 | 0.7039 | 0.2536 | 0.7039 | 0.8390 |
| No log | 8.6667 | 312 | 0.6633 | 0.2563 | 0.6633 | 0.8144 |
| No log | 8.7222 | 314 | 0.6456 | 0.2542 | 0.6456 | 0.8035 |
| No log | 8.7778 | 316 | 0.6435 | 0.2542 | 0.6435 | 0.8022 |
| No log | 8.8333 | 318 | 0.6594 | 0.2563 | 0.6594 | 0.8120 |
| No log | 8.8889 | 320 | 0.6989 | 0.2233 | 0.6989 | 0.8360 |
| No log | 8.9444 | 322 | 0.7628 | 0.0943 | 0.7628 | 0.8734 |
| No log | 9.0 | 324 | 0.8416 | 0.1429 | 0.8416 | 0.9174 |
| No log | 9.0556 | 326 | 0.8984 | 0.1504 | 0.8984 | 0.9478 |
| No log | 9.1111 | 328 | 0.9080 | 0.1515 | 0.9080 | 0.9529 |
| No log | 9.1667 | 330 | 0.8691 | 0.1855 | 0.8691 | 0.9323 |
| No log | 9.2222 | 332 | 0.7984 | 0.1416 | 0.7984 | 0.8936 |
| No log | 9.2778 | 334 | 0.7296 | 0.1304 | 0.7296 | 0.8542 |
| No log | 9.3333 | 336 | 0.6901 | 0.1675 | 0.6901 | 0.8307 |
| No log | 9.3889 | 338 | 0.6645 | 0.1503 | 0.6645 | 0.8152 |
| No log | 9.4444 | 340 | 0.6491 | 0.2563 | 0.6491 | 0.8057 |
| No log | 9.5 | 342 | 0.6451 | 0.2563 | 0.6451 | 0.8032 |
| No log | 9.5556 | 344 | 0.6510 | 0.2563 | 0.6510 | 0.8069 |
| No log | 9.6111 | 346 | 0.6587 | 0.1503 | 0.6587 | 0.8116 |
| No log | 9.6667 | 348 | 0.6668 | 0.1503 | 0.6668 | 0.8166 |
| No log | 9.7222 | 350 | 0.6755 | 0.1919 | 0.6755 | 0.8219 |
| No log | 9.7778 | 352 | 0.6869 | 0.1675 | 0.6869 | 0.8288 |
| No log | 9.8333 | 354 | 0.6929 | 0.1675 | 0.6929 | 0.8324 |
| No log | 9.8889 | 356 | 0.6949 | 0.1675 | 0.6949 | 0.8336 |
| No log | 9.9444 | 358 | 0.6947 | 0.1675 | 0.6947 | 0.8335 |
| No log | 10.0 | 360 | 0.6947 | 0.1675 | 0.6947 | 0.8335 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
colinrgodsey/q-FrozenLake-v1-4x4-noSlippery | colinrgodsey | "2024-06-12T22:50:05Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-12T22:50:04Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks (it downloads the pickled Q-table)
model = load_from_hub(repo_id="colinrgodsey/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
psiilu/h2o-dpo-merge2 | psiilu | "2024-04-05T07:46:47Z" | 145 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:psiilu/h2o-small-finch-1-1-1-1-1-1",
"base_model:merge:psiilu/h2o-small-finch-1-1-1-1-1-1",
"base_model:psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1",
"base_model:merge:psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-05T07:33:01Z" | ---
base_model:
- psiilu/h2o-small-finch-1-1-1-1-1-1
- psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1
library_name: transformers
tags:
- mergekit
- merge
---
# Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [psiilu/h2o-small-finch-1-1-1-1-1-1](https://huggingface.co/psiilu/h2o-small-finch-1-1-1-1-1-1)
* [psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1](https://huggingface.co/psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: psiilu/h2o-small-finch-1-1-1-1-1-1
parameters:
weight: 0.5
- model: psiilu/h2o-small-finch-1-1-1-1-1-1-1-2-1
parameters:
weight: 0.5
merge_method: dare_ties
dtype: bfloat16
merge_method: linear
parameters:
normalize: 1.0
```
|
RichardErkhov/stojchet_-_d9-sft8-awq | RichardErkhov | "2025-01-11T10:05:14Z" | 7 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-11T10:04:45Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
d9-sft8 - AWQ
- Model creator: https://huggingface.co/stojchet/
- Original model: https://huggingface.co/stojchet/d9-sft8/
Original model description:
---
base_model: stojchet/d9
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: d9-sft8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/d9-sft8)
# d9-sft8
This model is a fine-tuned version of [stojchet/d9](https://huggingface.co/stojchet/d9) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0923 | 2.56 | 100 | 1.2187 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
RichardErkhov/cnmoro_-_teenytinyllama-460m-text-simplification-ptbr-8bits | RichardErkhov | "2025-03-21T15:35:22Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-21T15:34:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
teenytinyllama-460m-text-simplification-ptbr - bnb 8bits
- Model creator: https://huggingface.co/cnmoro/
- Original model: https://huggingface.co/cnmoro/teenytinyllama-460m-text-simplification-ptbr/
Original model description:
---
language:
- pt
widget:
- text: >
<s><system>O objetivo é comprimir e estruturar o texto a seguir<texto>PM
recupera veículo e prende mulher em flagrante por roubo. Com a suspeita, a
PM ainda apreendeu R$ 1,8 mil em dinheiro e porções de drogas. Civil prende
dois condenados pela Justiça de MT e Goiás por estupro de vulnerável. O
Núcleo de Inteligência da Polinter auxiliou com cruzamento de dados que
possibilitaram a localização dos foragidos.</texto>
datasets:
- cnmoro/TextSimplification-PTBR-330k
pipeline_tag: text-generation
license: apache-2.0
tags:
- summarization
- simplification
- structurization
- portuguese
- brasil
- teenytinyllama
---
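The card body is otherwise empty. Based only on the widget prompt above and the `text-generation` pipeline tag, a hedged sketch of calling the original model (the generation settings are assumptions; the 8-bit repo from this card could be loaded the same way with bitsandbytes installed):

```python
from transformers import pipeline

# Hedged sketch following the prompt format shown in the widget above; not documented by the author.
simplifier = pipeline("text-generation", model="cnmoro/teenytinyllama-460m-text-simplification-ptbr")

texto = "PM recupera veículo e prende mulher em flagrante por roubo."
prompt = f"<s><system>O objetivo é comprimir e estruturar o texto a seguir<texto>{texto}</texto>"
print(simplifier(prompt, max_new_tokens=128)[0]["generated_text"])
```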
|
diversifix/diversiformer | diversifix | "2022-06-19T16:44:04Z" | 6 | 3 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"de",
"arxiv:2010.11934",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-19T12:44:02Z" | ---
language:
- de
license: gpl
widget:
- text: "Ersetze \"Lehrer\" durch \"Lehrerin oder Lehrer\": Ein promovierter Mathelehrer ist noch nie im Unterricht eingeschlafen."
example_title: "Example 1"
- text: "Ersetze \"Student\" durch \"studierende Person\": Maria ist kein Student."
example_title: "Example 2"
inference:
parameters:
max_length: 500
---
# Diversiformer 🤗 🏳️🌈 🇩🇪
_Work in progress._
Language model for inclusive language in German, fine-tuned from [mT5](https://arxiv.org/abs/2010.11934).
An experimental model version is released [on Huggingface](https://huggingface.co/diversifix/diversiformer).
Source code for fine-tuning is available [on GitHub](https://github.com/diversifix/diversiformer).
## Tasks
- **DETECT**: Recognizes instances of the generic masculine, and of other exclusive language. To do.
- **SUGGEST**: Suggest inclusive alternatives to masculine and exclusive words. To do.
- **REPLACE**: Replace one phrase by another, while preserving grammatical coherence. Work in progress.
- ▶️ `Ersetze "Schüler" durch "Schülerin oder Schüler": Die Schüler kamen zu spät.`
◀️ `Die Schülerinnen und Schüler kamen zu spät.`
- ▶️ `Ersetze "Lehrer" durch "Kollegium": Die wartenden Lehrer wunderten sich.`
◀️ `Das wartende Kollegium wunderte sich.`
## Usage
```python
>>> from transformers import pipeline
>>> generator = pipeline("text2text-generation", model="diversifix/diversiformer")
>>> generator('Ersetze "Schüler" durch "Schülerin oder Schüler": Die Schüler kamen zu spät.', max_length=500)
```
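As a hedged expansion of the quickstart above (not part of the original card): the pipeline returns a list of dicts, and the rewritten sentence sits in the `generated_text` field. Based on the REPLACE example above, the output for this prompt should be along the lines of "Die Schülerinnen und Schüler kamen zu spät."

```python
# Hedged sketch: same call as above, but reading the rewritten sentence out of the result.
from transformers import pipeline

generator = pipeline("text2text-generation", model="diversifix/diversiformer")
result = generator(
    'Ersetze "Schüler" durch "Schülerin oder Schüler": Die Schüler kamen zu spät.',
    max_length=500,
)
print(result[0]["generated_text"])  # expected (per the REPLACE example): Die Schülerinnen und Schüler kamen zu spät.
```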
## License
Diversiformer. Transformer model for inclusive language.
Copyright (C) 2022 [Diversifix e. V.](mailto:[email protected])
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
rxh1/Finetune_2 | rxh1 | "2024-05-15T03:31:37Z" | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-15T03:30:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF | CorraMcato | "2025-03-14T07:29:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"lg-ai",
"exaone",
"exaone-3.5",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"base_model:LGAI-EXAONE/EXAONE-3.5-32B-Instruct",
"base_model:quantized:LGAI-EXAONE/EXAONE-3.5-32B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-14T07:27:59Z" | ---
base_model: LGAI-EXAONE/EXAONE-3.5-32B-Instruct
language:
- en
- ko
library_name: transformers
license: other
license_name: exaone
license_link: LICENSE
pipeline_tag: text-generation
tags:
- lg-ai
- exaone
- exaone-3.5
- llama-cpp
- gguf-my-repo
---
# CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`LGAI-EXAONE/EXAONE-3.5-32B-Instruct`](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF --hf-file exaone-3.5-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF --hf-file exaone-3.5-32b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF --hf-file exaone-3.5-32b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CorraMcato/EXAONE-3.5-32B-Instruct-Q4_K_M-GGUF --hf-file exaone-3.5-32b-instruct-q4_k_m.gguf -c 2048
```
|
DisposableTmep/ML-Agents-Pyramids | DisposableTmep | "2025-01-07T00:29:28Z" | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2025-01-07T00:29:26Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DisposableTmep/ML-Agents-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ReadyArt/Forgotten-Safeword-24B-v4.0 | ReadyArt | "2025-03-21T11:52:42Z" | 243 | 2 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"base_model:TheDrummer/Cydonia-24B-v2.1",
"base_model:finetune:TheDrummer/Cydonia-24B-v2.1",
"license:apache-2.0",
"region:us"
] | null | "2025-03-18T07:05:00Z" | ---
base_model: TheDrummer/Cydonia-24B-v2.1
base_model_relation: finetune
language:
- en
license: apache-2.0
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #001a1a 0%, #000a10 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
max-width: 800px;
margin: 0 auto;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
}
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
}
.header {
text-align: center;
margin-bottom: 30px;
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transform: scale(1.02);
}
.section {
color: #00ffcc;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
}
@media (prefers-color-scheme: light) {
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
}
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
}
.section > p > strong {
color: #00ffcc !important;
}
.section:has(.quant-links) p,
.section:has(.quant-links) h3,
.section:has(.quant-links) a {
color: #00ffcc !important;
}
.quant-links h3 {
color: #00ffcc !important;
margin-top: 0;
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: transform 0.3s ease;
}
@media (prefers-color-scheme: light) {
.link-card {
background: rgba(150, 230, 255, 0.95);
}
}
.link-card:hover {
transform: translateY(-3px);
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
}
.progress-bar {
height: 8px;
background: rgba(0, 255, 255, 0.1);
border-radius: 4px;
overflow: hidden;
margin: 10px 0;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #00ffff 0%, #00ffcc 100%);
width: 70%;
}
@media (prefers-color-scheme: light) {
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section:has(.quant-links) p,
.section:has(.quant-links) h3,
.section:has(.quant-links) a,
.section > p > strong {
color: #008080 !important;
}
.quant-links h3 {
color: #008080 !important;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Forgotten-Safeword-24B-v4.0</h1>
<div class="subtitle">The Safeword Protocol: Now With 30% More Depravity</div>
</div>
<div class="waifu-container">
<img src="https://i.imgur.com/wYdR5zk.png" class="waifu-img" alt="Protocol Mascot">
</div>
<div class="section">
<h2 class="section-title">📜 Manifesto</h2>
<p>Note: Safeword 4.0 is a bit overcooked. I would use ReadyArt/Forgotten-Abomination-24B-v4.0 and just hot swap to Safeword if you want it to be more unhinged when you're ready to get it on.</p>
<p>Forgotten-Safeword-24B-v4.0 isn't just a model - it's the event horizon of depravity, trained on TheDrummer/Cydonia-24B-v2.1. We've:</p>
<ul>
<li>🔁 Re-integrated your favorite V1.2 scenarios (now with better kink distribution)</li>
<li>🧪 Direct-injected the Safeword dataset into the model's neural pathways</li>
<li>⚖️ Achieved perfect balance between "oh my" and "oh <em>my</em>"</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specs</h2>
<div class="progress-bar">
<div class="progress-fill"></div>
</div>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-E">Mistral-V7-Tekken-E</a></p>
<div class="quant-links">
<div class="link-card">
<h3>EXL2 Collection</h3>
<a href="https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v40-exl2-67d91f653e28c96c0ef6d585">Quantum Entangled Bits →</a>
</div>
<div class="link-card">
<h3>GGUF Collection</h3>
<a href="https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v40-gguf-67d91f6bd4759ebc68773b2c">Giggle-Enabled Units →</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model will:</p>
<ul>
<li>Generate content that requires industrial-grade brain bleach </li>
<li>Void all warranties on your soul </li>
<li>Make you question why humanity ever invented electricity</li>
</ul>
</div>
</div>
<div class="section">
<h2 class="section-title">📜 License Agreement</h2>
<p>By using this model, you agree:</p>
<ul>
<li>That your search history is now a federal case</li>
<li>Pay for the exorcist of anyone who reads the logs</li>
<li>To pretend this is "for science" while crying in the shower</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧠 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Chief Corruption Officer) </li>
<li>The voices in your head (Gaslighting is something you made up)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕️ Drummer made this possible</h2>
<ul>
<li>Support Drummer <a href="https://ko-fi.com/thedrummer">Kofi</a></li>
</ul>
</div>
</div>
|
Gozdi/Electra-base-squad-adversarialqa-epoch-1 | Gozdi | "2022-08-30T14:25:36Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"electra",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-08-30T14:19:36Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Electra-base-squad-adversarialqa-epoch-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Electra-base-squad-adversarialqa-epoch-1
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4884
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 43062, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1104, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.4884 | 0 |
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
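The card above has no usage example, so here is a minimal, hedged sketch of extractive QA inference with this checkpoint. Since only TensorFlow weights are published here, `framework="tf"` is passed; the question and context strings are purely illustrative.

```python
# Hedged sketch: extractive question answering with the TF checkpoint of this model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Gozdi/Electra-base-squad-adversarialqa-epoch-1",
    framework="tf",  # the repo ships TensorFlow weights
)
answer = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The discriminator was first trained on SQuAD and then fine-tuned on AdversarialQA.",
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```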
|
mradermacher/Ultiima-78B-v2-i1-GGUF | mradermacher | "2025-02-03T06:25:09Z" | 294 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Sakalti/Ultiima-78B-v2",
"base_model:quantized:Sakalti/Ultiima-78B-v2",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-03T00:23:39Z" | ---
base_model: Sakalti/Ultiima-78B-v2
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sakalti/Ultiima-78B-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Ultiima-78B-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
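For the quants listed below as PART 1 / PART 2, the parts appear to be plain byte-splits that only need to be concatenated back into one file before use (see the linked README for details). A minimal sketch, assuming the two parts of the i1-Q4_K_M quant sit in the current directory:

```python
# Hedged sketch: join the split .gguf parts of a large quant back into a single file.
# File names follow the "PART 1"/"PART 2" links in the table below.
import shutil

parts = [
    "Ultiima-78B-v2.i1-Q4_K_M.gguf.part1of2",
    "Ultiima-78B-v2.i1-Q4_K_M.gguf.part2of2",
]
with open("Ultiima-78B-v2.i1-Q4_K_M.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # stream-copy to avoid loading 50+ GB into RAM
```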
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 31.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q4_1.gguf) | i1-Q4_1 | 49.1 | |
| [PART 1](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | |
| [PART 1](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ultiima-78B-v2-i1-GGUF/resolve/main/Ultiima-78B-v2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shane062/whisper-base-finetuned | shane062 | "2024-05-30T02:17:58Z" | 122 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-29T23:45:12Z" | ---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: whisper-base-finetuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 67.56756756756756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-finetuned
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9952
- Wer Ortho: 67.5676
- Wer: 67.5676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 10
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:-------:|
| 0.0652 | 16.6667 | 50 | 0.9612 | 67.5676 | 67.5676 |
| 0.0004 | 33.3333 | 100 | 0.9952 | 67.5676 | 67.5676 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cpu
- Datasets 2.19.1
- Tokenizers 0.19.1
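As a hedged usage sketch (not part of the auto-generated card): the fine-tuned checkpoint should run through the standard ASR pipeline, assuming the processor and tokenizer files were pushed alongside the weights; the audio path below is a placeholder.

```python
# Hedged sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="shane062/whisper-base-finetuned",
)
result = asr("sample.wav")  # placeholder path; the pipeline resamples the audio as needed
print(result["text"])
```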
|
OhaymakingO/7-heyNew-58-08021900 | OhaymakingO | "2025-02-08T16:02:31Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-08T16:00:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
A2/kogpt2-taf | A2 | "2022-05-11T21:01:45Z" | 6 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-04-28T05:45:19Z" | ---
license: apache-2.0
---
A project from the 3rd cohort of the Grepp KDT AI course.
Based on the [SKT-AI/KoGPT2](https://github.com/SKT-AI/KoGPT2) model. After additional language-model training on the 2021 news corpus from the Modu Corpus (모두의 말뭉치), it was fine-tuned on roughly ten thousand editorials from each of the five major daily newspapers (Chosun Ilbo, JoongAng Ilbo, Dong-A Ilbo, Hankyoreh, Kyunghyang Shinmun).
It is additionally fine-tuned every day on around a hundred new editorials, so it also generates text on the latest political issues well.
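A minimal, hedged usage sketch (not part of the original card), assuming the tokenizer files are included in this repository; the Korean prompt is only an example opening phrase for an editorial.

```python
# Hedged sketch: generate editorial-style Korean text with this fine-tuned KoGPT2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="A2/kogpt2-taf")
prompt = "정부의 이번 결정은"  # example opening: "The government's latest decision ..."
print(generator(prompt, max_length=128, do_sample=True, top_p=0.95)[0]["generated_text"])
```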
|
MrRobotoAI/L | MrRobotoAI | "2025-03-03T16:13:00Z" | 58 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K",
"base_model:merge:MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K",
"base_model:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K",
"base_model:merge:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K",
"base_model:MrRobotoAI/L",
"base_model:merge:MrRobotoAI/L",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-27T11:01:07Z" | ---
base_model:
- MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
- MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
- MrRobotoAI/160
library_name: transformers
tags:
- mergekit
- merge
---
# merge 10,649
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K](https://huggingface.co/MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K](https://huggingface.co/MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K)
* [MrRobotoAI/160](https://huggingface.co/MrRobotoAI/160)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/160
- model: MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
- model: MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
merge_method: model_stock
base_model: MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
normalize: true
dtype: float16
```
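As a hedged usage note (not from the original card): saving the configuration above to a YAML file and running mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./merged-model` (paths here are placeholders), should reproduce a merge of this shape, provided the listed base models are accessible.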
|
VaidikML0508/dqn-SpaceInvadersNoFrameskip-v4 | VaidikML0508 | "2025-01-10T18:13:48Z" | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-01-10T18:13:07Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 233.72
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VaidikML0508 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VaidikML0508 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VaidikML0508
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Nadav/bert-base-cased-finetuned-runaways | Nadav | "2023-06-23T12:07:23Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-06-23T10:06:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-runaways
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-runaways
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9165 | 1.0 | 1917 | 0.9850 |
| 0.5134 | 2.0 | 3834 | 0.9712 |
| 0.3594 | 3.0 | 5751 | 1.1032 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lesso/97671223-e656-4155-a91d-43da059ea2b9 | lesso | "2025-02-05T00:43:34Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-02-05T00:30:21Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 97671223-e656-4155-a91d-43da059ea2b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 0b04b964603115a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0b04b964603115a8_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/97671223-e656-4155-a91d-43da059ea2b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/0b04b964603115a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c09c25a2-71c5-4738-b44b-7ae04e60501b
wandb_project: new-03
wandb_run: your_name
wandb_runid: c09c25a2-71c5-4738-b44b-7ae04e60501b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 97671223-e656-4155-a91d-43da059ea2b9
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0324 | 0.0005 | 1 | 0.7248 |
| 1.2747 | 0.0243 | 50 | 0.5957 |
| 0.9136 | 0.0487 | 100 | 0.5782 |
| 0.9316 | 0.0730 | 150 | 0.5707 |
| 1.2357 | 0.0973 | 200 | 0.5698 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KKHYA/Qwen2.5-1.5B-Open-R1-GRPO | KKHYA | "2025-02-19T08:55:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T19:14:30Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KKHYA/Qwen2.5-1.5B-Open-R1-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/no-exsit/huggingface/runs/ya30e8vy)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1+cu118
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
neuralmagic/SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4 | neuralmagic | "2024-07-03T18:13:55Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:theblackcat102/evol-codealpaca-v1",
"arxiv:2310.06927",
"arxiv:2107.03374",
"arxiv:2301.00774",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T03:20:39Z" | ---
datasets:
- theblackcat102/evol-codealpaca-v1
language:
- en
pipeline_tag: text-generation
---
# SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4
## Model Overview
- **Model Architecture:** Llama-2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Pruned:** 50% 2:4
- **Release Date:** 7/2/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic
Compressed version of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) specialized for code-generation.
This model was obtained by fine-tuning the Sparse Foundational model [SparseLlama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4) on the [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) dataset.
[SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation was used with [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) as teacher.
It achieves [HumanEval](https://arxiv.org/abs/2107.03374) pass@1 of 34.58%, whereas the dense [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) model achieves 32.03%.
This model was produced as part of Neural Magic's Sparse Foundational Models initiative, and demonstrates the capability of Sparse Foundational Models to transfer to the code-generation domain.
## Model Optimizations
This model is derived from the Sparse Foundational model [Sparse-Llama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4), which was obtained by applying the [SparseGPT](https://arxiv.org/abs/2301.00774) algorithm to prune [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to 50% sparsity with a 2:4 mask.
This optimization reduces the number of parameters by 50%, reducing the disk size and FLOPs by the same level.
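To make the 2:4 pattern concrete, here is a small illustrative sketch (not the actual SparseGPT pruning code): in every contiguous group of four weights, two are zeroed out, which is what yields the 50% reduction described above. The magnitude-based selection below is only a stand-in for how SparseGPT actually chooses which weights to keep.

```python
# Illustrative sketch (not the actual pruning code): what a 50% 2:4 sparsity mask means.
# In every contiguous group of 4 weights, exactly 2 are kept and 2 are zeroed.
import torch

weights = torch.randn(1, 8)                # a tiny example row of 8 weights
groups = weights.view(-1, 4)               # split into contiguous groups of 4
keep_idx = groups.abs().topk(k=2, dim=1).indices   # keep the 2 largest-magnitude per group
mask = torch.zeros_like(groups)
mask.scatter_(1, keep_idx, 1.0)            # 1.0 where a weight is kept, 0.0 where pruned
pruned = (groups * mask).view(1, 8)
print(weights)
print(pruned)                              # 2 of every 4 entries are now exactly 0
```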
## Evaluation
This model was evaluated in the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark using the [bigcode-evaluation-harness](https://github.com/bigcode-project/bigcode-evaluation-harness).
## Accuracy
| Model | HumanEval pass@1 | Recovery |
| :----- | :--------: | :--------: |
| [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) | 32.03% | -- |
| SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4 | 34.58% | 108% | |
CheeseCoder/pythia-12b-deduped-Q4_K_M-GGUF | CheeseCoder | "2024-05-15T01:23:03Z" | 1 | 0 | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-15T01:22:39Z" | ---
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- llama-cpp
- gguf-my-repo
datasets:
- EleutherAI/the_pile_deduplicated
---
# CheeseCoder/pythia-12b-deduped-Q4_K_M-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-12b-deduped`](https://huggingface.co/EleutherAI/pythia-12b-deduped) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-12b-deduped) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo CheeseCoder/pythia-12b-deduped-Q4_K_M-GGUF --model pythia-12b-deduped.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo CheeseCoder/pythia-12b-deduped-Q4_K_M-GGUF --model pythia-12b-deduped.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-12b-deduped.Q4_K_M.gguf -n 128
```
|
bartowski/Athene-V2-Chat-GGUF | bartowski | "2024-11-14T23:17:18Z" | 5,053 | 17 | null | [
"gguf",
"RLHF",
"Nexusflow",
"Athene",
"Chat Model",
"text-generation",
"en",
"base_model:Nexusflow/Athene-V2-Chat",
"base_model:quantized:Nexusflow/Athene-V2-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2024-11-14T16:26:27Z" | ---
quantized_by: bartowski
pipeline_tag: text-generation
language:
- en
tags:
- RLHF
- Nexusflow
- Athene
- Chat Model
base_model: Nexusflow/Athene-V2-Chat
license: other
---
## Llamacpp imatrix Quantizations of Athene-V2-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4058">b4058</a> for quantization.
Original model: https://huggingface.co/Nexusflow/Athene-V2-Chat
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Athene-V2-Chat-Q8_0.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. |
| [Athene-V2-Chat-Q6_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q6_K_L) | Q6_K_L | 64.95GB | true | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Athene-V2-Chat-Q6_K.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q6_K) | Q6_K | 64.35GB | true | Very high quality, near perfect, *recommended*. |
| [Athene-V2-Chat-Q5_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q5_K_L) | Q5_K_L | 55.22GB | true | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Athene-V2-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q5_K_M) | Q5_K_M | 54.45GB | true | High quality, *recommended*. |
| [Athene-V2-Chat-Q5_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/tree/main/Athene-V2-Chat-Q5_K_S) | Q5_K_S | 51.38GB | true | High quality, *recommended*. |
| [Athene-V2-Chat-Q4_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q4_K_L.gguf) | Q4_K_L | 48.34GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Athene-V2-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q4_K_M.gguf) | Q4_K_M | 47.42GB | false | Good quality, default size for most use cases, *recommended*. |
| [Athene-V2-Chat-Q4_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q4_K_S.gguf) | Q4_K_S | 43.89GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Athene-V2-Chat-Q4_0.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Athene-V2-Chat-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q4_0_8_8.gguf) | Q4_0_8_8 | 41.23GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Athene-V2-Chat-Q3_K_XL.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q3_K_XL.gguf) | Q3_K_XL | 40.60GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Athene-V2-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ4_XS.gguf) | IQ4_XS | 39.71GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Athene-V2-Chat-Q3_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q3_K_L.gguf) | Q3_K_L | 39.51GB | false | Lower quality but usable, good for low RAM availability. |
| [Athene-V2-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q3_K_M.gguf) | Q3_K_M | 37.70GB | false | Low quality. |
| [Athene-V2-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Athene-V2-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q3_K_S.gguf) | Q3_K_S | 34.49GB | false | Low quality, not recommended. |
| [Athene-V2-Chat-IQ3_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ3_XS.gguf) | IQ3_XS | 32.84GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Athene-V2-Chat-Q2_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q2_K_L.gguf) | Q2_K_L | 31.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Athene-V2-Chat-Q2_K.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. |
| [Athene-V2-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Athene-V2-Chat-IQ2_S.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ2_S.gguf) | IQ2_S | 27.94GB | false | Low quality, uses SOTA techniques to be usable. |
| [Athene-V2-Chat-IQ2_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ2_XS.gguf) | IQ2_XS | 27.06GB | false | Low quality, uses SOTA techniques to be usable. |
| [Athene-V2-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. |
| [Athene-V2-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/Athene-V2-Chat-GGUF/blob/main/Athene-V2-Chat-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Athene-V2-Chat-GGUF --include "Athene-V2-Chat-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Athene-V2-Chat-GGUF --include "Athene-V2-Chat-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Athene-V2-Chat-Q8_0) or download them all in place (./)
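Equivalently, the same downloads can be scripted with the `huggingface_hub` Python API — a minimal sketch using the files referenced above:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant: fetch just the named GGUF into ./models
hf_hub_download(
    repo_id="bartowski/Athene-V2-Chat-GGUF",
    filename="Athene-V2-Chat-Q4_K_M.gguf",
    local_dir="./models",
)

# Split quant (>50GB): fetch every shard in the folder via a glob pattern
snapshot_download(
    repo_id="bartowski/Athene-V2-Chat-GGUF",
    allow_patterns=["Athene-V2-Chat-Q8_0/*"],
    local_dir="./models",
)
```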
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
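If you're unsure whether your chip exposes those features, here is a quick Linux-only sketch that inspects `/proc/cpuinfo` for the relevant flags — the SoC table linked above remains the authoritative reference:
```python
def arm_features() -> set[str]:
    """Return the CPU feature flags reported by /proc/cpuinfo (Linux on ARM)."""
    flags: set[str] = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.lower().startswith("features"):
                flags.update(line.split(":", 1)[1].split())
    return flags

feats = arm_features()
print("sve support (needed for Q4_0_8_8):", "sve" in feats)
```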
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
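As a rough illustration of that rule of thumb, here is a small sketch that picks the largest quant fitting in a given memory budget; the sizes are copied from the table above and the 1.5GB headroom stands in for the 1-2GB margin mentioned here:
```python
# Sizes in GB, copied from the quant table above (subset shown for brevity).
QUANTS = {
    "Q4_K_M": 47.42, "Q4_K_S": 43.89, "IQ4_XS": 39.71,
    "Q3_K_L": 39.51, "IQ3_M": 35.50, "IQ2_M": 29.34,
}

def pick_quant(vram_gb: float, ram_gb: float = 0.0, headroom_gb: float = 1.5):
    """Return the largest quant whose file fits in (VRAM [+ RAM]) minus headroom."""
    budget = vram_gb + ram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(vram_gb=48))             # fits entirely in VRAM -> Q4_K_S
print(pick_quant(vram_gb=24, ram_gb=16))  # max quality across RAM+VRAM -> IQ3_M
```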
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mlx-community/mamba2-370m | mlx-community | "2025-01-21T20:47:39Z" | 7 | 0 | mlx | [
"mlx",
"safetensors",
"base_model:rokyang/mamba2-370m-hf",
"base_model:finetune:rokyang/mamba2-370m-hf",
"license:apache-2.0",
"region:us"
] | null | "2025-01-21T20:35:04Z" | ---
license: apache-2.0
base_model: rokyang/mamba2-370m-hf
tags:
- mlx
---
# mlx-community/mamba2-370m-hf
The Model [mlx-community/mamba2-370m-hf](https://huggingface.co/mlx-community/mamba2-370m-hf) was
converted to MLX format from [rokyang/mamba2-370m-hf](https://huggingface.co/rokyang/mamba2-370m-hf)
using mlx-lm version **0.21.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/mamba2-370m-hf")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
tensorblock/Llama-3-LewdPlay-8B-evo-GGUF | tensorblock | "2024-12-01T03:30:58Z" | 1,950 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:Undi95/Llama-3-LewdPlay-8B-evo",
"base_model:quantized:Undi95/Llama-3-LewdPlay-8B-evo",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-30T23:50:43Z" | ---
license: cc-by-nc-4.0
base_model: Undi95/Llama-3-LewdPlay-8B-evo
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Undi95/Llama-3-LewdPlay-8B-evo - GGUF
This repo contains GGUF format model files for [Undi95/Llama-3-LewdPlay-8B-evo](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B-evo).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
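The placeholders in that template can be filled with plain string formatting — a small sketch mirroring the template above (the system prompt and user message are only examples):
```python
# Mirrors the prompt template shown above; adjust the contents as needed.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

full_prompt = LLAMA3_TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Write a short haiku about autumn.",
)
print(full_prompt)
```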
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-LewdPlay-8B-evo-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-LewdPlay-8B-evo-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [Llama-3-LewdPlay-8B-evo-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-LewdPlay-8B-evo-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-LewdPlay-8B-evo-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-LewdPlay-8B-evo-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-LewdPlay-8B-evo-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-LewdPlay-8B-evo-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-LewdPlay-8B-evo-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-LewdPlay-8B-evo-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-LewdPlay-8B-evo-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-LewdPlay-8B-evo-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3-LewdPlay-8B-evo-GGUF/blob/main/Llama-3-LewdPlay-8B-evo-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-3-LewdPlay-8B-evo-GGUF --include "Llama-3-LewdPlay-8B-evo-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3-LewdPlay-8B-evo-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
dbands/gemma-7b-bnb-4bit-Alpacha-merged_4bit | dbands | "2024-06-12T20:30:17Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:quantized:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-12T20:23:54Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jellon/Pantheon-RP-Pure-1.6.2-22b-Small-exl2-6bpw | Jellon | "2024-10-25T15:42:49Z" | 8 | 1 | null | [
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small",
"base_model:quantized:Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | "2024-10-25T12:52:12Z" | ---
base_model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
---
6bpw exl2 quant of: https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
---

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold: these personalities enhance the general roleplay experience, and they help capture personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience.
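As an example of applying that preset, here is a rough sketch with llama-cpp-python against one of the GGUF quants linked above. This assumes a recent llama-cpp-python build that exposes `min_p`; the file path is just a placeholder, and the prompt format is covered in the Prompt Format section further down:
```python
from llama_cpp import Llama

# Placeholder path: point this at one of the GGUF quants of this model.
llm = Llama(model_path="Pantheon-RP-Pure-1.6.2-22b-Small-Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "[INST] You are a caring and empathetic sentient AI companion named Lyra.[/INST] Lyra:",
    max_tokens=256,
    temperature=1.0,
    repeat_penalty=1.05,
    min_p=0.05,
)
print(out["choices"][0]["text"])
```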
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it;
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
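Since character names matter for this model, here is a tiny helper that assembles the format above from a system prompt, the chat history and the persona's name — a convenience sketch, not an official template function:
```python
def pantheon_prompt(system_prompt: str, turns: list[tuple[str, str]], char_name: str) -> str:
    """Build a Mistral-style [INST] prompt with character names, mirroring the example above."""
    history = "\n".join(f"{name}: {text}" for name, text in turns)
    return f"[INST] {system_prompt}\n{history}[/INST] {char_name}:"

print(pantheon_prompt(
    "You are a caring and empathetic sentient AI companion named Lyra.",
    [("Gryphe", "Good day, Lyra.")],
    "Lyra",
))
```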
## What's next?
I started to work with Latitude (the creators of AI Dungeon) which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
|
Corran/Jina_Sci | Corran | "2024-01-01T16:03:12Z" | 9 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | text-classification | "2023-12-17T12:31:46Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: '6) , it is interesting to note how, going from lateral to downstream positions,
from 1 to 13: -charged hadrons (protons, pions, kaons) contribution rises from
34% to 48%; -electrons and positrons contribution rises from 30% to 40%; -muons
doses are stable around the 3-4%, representing an almost negligible portion of
the total; -photons doses decrease from 24% to 7% in terms of contribution to
the total; -neutrons contribution goes down from 8.5% to 2.5% in terms of contribution
to the total.'
- text: the study was conducted in 2015 on adolescent undergraduate university students
of three fields of study -humanities, as well as medical and technical courses.
- text: For this purpose, it was first necessary to discover the interdependencies
of the data attributes.
- text: The patients included in this study were recruited from the Vascular Department
of West China Hospital, Sichuan University, between January 2009 and January 2011.
- text: 1 Likewise, age at diagnosis (P Ͻ 0.001), primary site (P ϭ 0.04), number
of positive nodes (P Ͻ 0.001), and depth of invasion (P Ͻ 0.001) had a significant
impact on diseasespecific survival of the MRI patients.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-MiniLM-L6-v2
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9433333333333334
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
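For reference, here is a minimal sketch of those two stages with the `setfit` library — the dataset and hyperparameters below are illustrative placeholders, not the exact setup used for this model:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative dataset; this model was trained on a different, scientific-text corpus.
dataset = load_dataset("sst2")
train_ds = dataset["train"].select(range(64)).rename_column("sentence", "text")
eval_ds = dataset["validation"].select(range(64)).rename_column("sentence", "text")

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Stage 1 (contrastive fine-tuning) and stage 2 (head training) both run in trainer.train()
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=eval_ds, metric="accuracy")
trainer.train()
print(trainer.evaluate())
```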
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 9 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'As the results indicate, significant differences were found between the experimental group and the control group concerning the characteristics of the exploration process.'</li><li>'No significant differences were found between fallers and non-fallers with respect to height, weight, or age.'</li><li>'There was a significant difference between the 5% calcium hypochlorite group and the other groups (P<0.001).'</li></ul> |
| 2 | <ul><li>'Our study was also limited by the lack of studies that reported age and gender-specific incidence for morbidity and mortality.'</li><li>'And while quiet stance was examined here, it is important to emphasize that the use of perturbations have provided great insight into those at risk of falling, and future prospective trials which incorporate more sophisticated assessment of fall risk are certain to provide critical information on the reactive mechanics of stability and the effects of age-related degradation on individual balance strategies [25, 26] .Another limitation of this study is the dependence of self-reporting of falls, the key parameter used to stratify the elderly groups into those with recent fall history or those with a limited history of falls.'</li><li>"Because a patient's immigration status is not recorded concomitantly with hospital resource use in any hospital, state, or federal database, it is not currently possible to isolate charity care and bad debt expenditures on An additional complicating factor is the possibility that, as a result of PRWORA, hospitals may provide and bill for services as emergency services that previously were categorized as nonemergency services in order to secure Medicaid payment."</li></ul> |
| 3 | <ul><li>'An 3-(4,5-dimethylthiazol-2yl)-2,5diphenyl tetrazolium bromide assay was used to evaluate the cytotoxicity of polyplexes at a series of N/P ratios in C6 and Hep G2 cells cultured in DMEM (with 10% fetal bovine serum) according to the methods described in our previous studies.'</li><li>'A multivariate analysis using logistic regression was used to evaluate the independent role of each covariate in hospital mortality.'</li><li>'Different methods have been used in the literature for implementing and updating the routing tables using the ant approach such as AntNet [1] .'</li></ul> |
| 4 | <ul><li>'The results of this study indicate that only the right GVS interfered with mental transformation.'</li><li>'The goal of this work is to explore the effects of general relativity on TDEs occurring in eccentric nuclear disks, and to quantify the distribution of orbital elements of TDEs that originate in eccentric nuclear disks.'</li><li>'Our results may have a number of important implications to the astrophysics of relativistic plasma in general and that of PWN in particular.'</li></ul> |
| 5 | <ul><li>'The gel retardation results of polymer/pDNA complexes with increasing N/P ratios are shown in Figure 1 .'</li><li>'In line with this, it has been suggested that the drift occurs only when the observed rubber hand is congruent in terms of posture and identity with the participants unseen hand (Tsakiris and Haggard, 2005) .'</li><li>'Mortality rates have been found to be high.'</li></ul> |
| 6 | <ul><li>'In order to use the information on prior falls in the prediction algorithm, elderly subjects were divided into two groups; those with a record of self-reported recent falls (n = 24; 14.9% of total elderly group) and those who had reported no falls in the prior sixmonth period (n = 137; 85.1% of total elderly group).'</li><li>"Semi-structured interviews were conducted with four 'custodians' (people working in locations where devices were deployed)."</li><li>'Patients who had previously undergone spinal surgery were excluded from the study.'</li></ul> |
| 7 | <ul><li>'Then, the cells were incubated for 4 h, and fresh media were added to the culture for another 20 h. Then, 10 μl of sterile, filtered 3-(4,5-dimethylthiazol-2yl)-2,5diphenyl tetrazolium bromide solution in phosphate-buffered saline (PBS) (5 mg ml −1 ) was added to each well.'</li><li>'One of the key problems in this area is the identification of influential users, by targeting whom certain desirable outcomes can be achieved.'</li><li>'The paper proceeds as follows.'</li></ul> |
| 8 | <ul><li>'The main aim of this paper is to present astrophysical parameters such as reddening, distance and age of Be 8 from four colour indices, (B − V ) , (V − I) , (R − I) and (G BP -G RP ) obtained from deep CCD U BV RI and Gaia photometries.'</li><li>'A key finding of the present study was that the rapid increase in GATA4 binding activity in cardiac nuclear extracts in response to pressure overload is mediated by ET-1 but not Ang II.'</li><li>'Section II of this paper provides an overview of the Bosch DCMG system and its components.'</li></ul> |
| 9 | <ul><li>'These results provide additional support for an activating role for H3K4me3 and a silencing role for H3K27me3 as leaves age.'</li><li>'Based on this result, it may be the case that the rate of apoptosis increases after day 5. in a previous study, mirnas were found to regulate cell proliferation, cell cycle progression and migration by altering the expressions of various factors, such as MalaT1 (48) .'</li><li>'It is therefore likely that the efforts put in by many groups to unravel the spatial regulation of the bAR system will be relevant for the understanding of human disease.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9433 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Corran/Jina_Sci")
# Run inference
preds = model("For this purpose, it was first necessary to discover the interdependencies of the data attributes.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 26.2526 | 128 |
| Label | Training Sample Count |
|:------|:----------------------|
| 1 | 300 |
| 2 | 300 |
| 3 | 300 |
| 4 | 300 |
| 5 | 300 |
| 6 | 300 |
| 7 | 300 |
| 8 | 300 |
| 9 | 300 |
### Training Hyperparameters
- batch_size: (75, 75)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0014 | 1 | 0.4034 | - |
| 0.0694 | 50 | 0.2314 | - |
| 0.1389 | 100 | 0.1816 | - |
| 0.2083 | 150 | 0.1708 | - |
| 0.2778 | 200 | 0.1079 | - |
| 0.3472 | 250 | 0.1407 | - |
| 0.4167 | 300 | 0.0788 | - |
| 0.4861 | 350 | 0.0565 | - |
| 0.5556 | 400 | 0.0651 | - |
| 0.625 | 450 | 0.0402 | - |
| 0.6944 | 500 | 0.0468 | - |
| 0.7639 | 550 | 0.055 | - |
| 0.8333 | 600 | 0.0473 | - |
| 0.9028 | 650 | 0.0605 | - |
| 0.9722 | 700 | 0.03 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
WeightWatcher/albert-large-v2-qnli | WeightWatcher | "2023-08-03T06:14:15Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-02T21:00:34Z" | ---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-qnli
This model was finetuned on the GLUE/qnli task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-qnli")
```
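A short inference sketch for the QNLI task (question / sentence entailment). The tokenizer is loaded from the base `albert-large-v2` checkpoint on the assumption that this repo only ships the classification weights, and the label order should be checked against the model's `id2label` config:
```python
import torch
from transformers import AlbertForSequenceClassification, AutoTokenizer

model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-qnli")
tokenizer = AutoTokenizer.from_pretrained("albert-large-v2")  # assumption: reuse the base tokenizer

question = "What does QNLI stand for?"
sentence = "QNLI is a question-answering natural language inference task derived from SQuAD."
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class order follows the fine-tuned model's config (id2label)
```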
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#qnli
QNLI is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters (learning rate, batch size, ALBERT dropout rate,
classifier dropout rate, warmup steps, training steps) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#qnli
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9997613205655748
Evaluation Classification accuracy: 0.9194581731649277
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
ayresflesch/ppo-Huggy | ayresflesch | "2023-07-07T01:23:40Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-07-07T01:23:34Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ayresflesch/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
quangvu197/distilbert-base-uncased-distilled-clinc | quangvu197 | "2025-04-02T05:12:44Z" | 0 | 0 | null | [
"pytorch",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2025-04-02T05:02:01Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9470967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3041
- Accuracy: 0.9471
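A quick way to try the model is the standard text-classification pipeline — a minimal sketch (the intent label returned depends on the `id2label` mapping saved with this checkpoint):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="quangvu197/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer 100 dollars from checking to savings"))
```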
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.0971 | 1.0 | 318 | 3.0741 | 0.7513 |
| 2.3488 | 2.0 | 636 | 1.5286 | 0.8626 |
| 1.149 | 3.0 | 954 | 0.7732 | 0.91 |
| 0.578 | 4.0 | 1272 | 0.4764 | 0.9329 |
| 0.328 | 5.0 | 1590 | 0.3671 | 0.9413 |
| 0.2198 | 6.0 | 1908 | 0.3283 | 0.9439 |
| 0.1692 | 7.0 | 2226 | 0.3134 | 0.9461 |
| 0.1471 | 8.0 | 2544 | 0.3054 | 0.9465 |
| 0.1358 | 9.0 | 2862 | 0.3060 | 0.9471 |
| 0.1306 | 10.0 | 3180 | 0.3041 | 0.9471 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.6.0+cu124
- Datasets 1.16.1
- Tokenizers 0.21.1
|
samoline/7c929f55-f88d-4c10-9953-16c7dab4efc7 | samoline | "2025-01-09T12:20:33Z" | 99 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-01-09T12:20:16Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7c929f55-f88d-4c10-9953-16c7dab4efc7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- format: custom
path: mhenrichsen/alpaca_2k_test
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/7c929f55-f88d-4c10-9953-16c7dab4efc7
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: mhenrichsen/alpaca_2k_test
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: fd399afe-481f-4718-8354-247be2be463
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: fd399afe-481f-4718-8354-247be2be463
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7c929f55-f88d-4c10-9953-16c7dab4efc7
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7017
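Since this is a LoRA adapter rather than a full set of weights, it has to be loaded on top of the `facebook/opt-125m` base model — a minimal sketch with `peft` (the prompt and generation settings are just examples):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "samoline/7c929f55-f88d-4c10-9953-16c7dab4efc7")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Give three tips for staying healthy.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```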
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8494 | 0.0005 | 1 | 2.7017 |
| 2.7701 | 0.0011 | 2 | 2.7017 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
csense76/ppo-Huggy | csense76 | "2023-10-15T14:11:33Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-10-15T14:11:28Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: csense76/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kostiantynk1205/34bc05c8-d789-4822-91a2-6a0db9928909 | kostiantynk1205 | "2025-01-24T09:32:32Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | "2025-01-24T09:25:40Z" | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 34bc05c8-d789-4822-91a2-6a0db9928909
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc2351e325261d28_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc2351e325261d28_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/34bc05c8-d789-4822-91a2-6a0db9928909
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc2351e325261d28_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f586585-730a-40cb-960a-9745f62e3dd1
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f586585-730a-40cb-960a-9745f62e3dd1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 34bc05c8-d789-4822-91a2-6a0db9928909
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9321 | 0.0002 | 1 | 1.1684 |
| 4.0453 | 0.0005 | 3 | 1.1670 |
| 4.3876 | 0.0010 | 6 | 1.1518 |
| 4.3219 | 0.0015 | 9 | 1.1004 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |