| Column | Dtype | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-05 06:27:31 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 468 values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-05 06:26:36 |
| card | string | length 11 to 1.01M |
anton-l/wav2vec2-xls-r-common_voice-tr-ft
anton-l
2022-01-31T09:48:53Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer model-index: - name: wav2vec2-xls-r-common_voice-tr-ft-500sh results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-common_voice-tr-ft-500sh This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.5794 - Wer: 0.4009 - Cer: 0.1032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 0.5288 | 17.0 | 500 | 0.5099 | 0.5426 | 0.1432 | | 0.2967 | 34.0 | 1000 | 0.5421 | 0.4746 | 0.1256 | | 0.2447 | 51.0 | 1500 | 0.5347 | 0.4831 | 0.1267 | | 0.122 | 68.01 | 2000 | 0.5854 | 0.4479 | 0.1161 | | 0.1035 | 86.0 | 2500 | 0.5597 | 0.4457 | 0.1166 | | 0.081 | 103.0 | 3000 | 0.5748 | 0.4250 | 0.1144 | | 0.0849 | 120.0 | 3500 | 0.5598 | 0.4337 | 0.1145 | | 0.0542 | 137.01 | 4000 | 0.5687 | 0.4223 | 0.1097 | | 0.0318 | 155.0 | 4500 | 0.5904 | 0.4057 | 0.1052 | | 0.0106 | 172.0 | 5000 | 0.5794 | 0.4009 | 0.1032 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
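The Turkish card above lists training details but no inference snippet. As a minimal sketch (my assumption, not part of the original card) using the repo id from this record, transcription could look like:

```python
# Hedged sketch: transcribe a Turkish recording with the fine-tuned checkpoint.
# "sample_tr.wav" is a placeholder path; the model was trained on 16 kHz audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anton-l/wav2vec2-xls-r-common_voice-tr-ft",
)
print(asr("sample_tr.wav")["text"])
```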
TajMahaladeen/pokemon_gptj
TajMahaladeen
2022-01-31T06:12:31Z
9
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
NbAiLab/xls-r-1b-npsc
NbAiLab
2022-01-31T04:33:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
leandrodzp/cbow_uruguayan_news
leandrodzp
2022-01-31T02:38:31Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Supervised Continuous Bag of Words model trained with Uruguayan news from Twitter. Model trained with Facebook's fastText library.
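The fastText card gives no loading example. A hedged sketch follows, assuming a released binary file (the actual file name is not stated in the card; `model.bin` is a placeholder):

```python
# Hypothetical usage of the fastText CBOW binary; "model.bin" is an assumed file name.
import fasttext

model = fasttext.load_model("model.bin")
print(model.get_word_vector("gobierno")[:5])           # embedding for a single word
print(model.get_nearest_neighbors("montevideo", k=5))  # closest words by similarity
```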
eldor-97/MarianMix_en-10
eldor-97
2022-01-30T23:25:27Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: MarianMix_en-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MarianMix_en-10 This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0752 - Bleu: 14.601 - Gen Len: 45.8087 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 99 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:| | 2.1136 | 0.44 | 500 | 2.0044 | 0.2655 | 109.0201 | | 1.1422 | 0.89 | 1000 | 1.7516 | 1.4123 | 71.0 | | 0.9666 | 1.33 | 1500 | 1.5219 | 3.6611 | 64.6888 | | 0.8725 | 1.78 | 2000 | 1.3606 | 4.6539 | 77.1641 | | 0.7655 | 2.22 | 2500 | 1.2586 | 8.3456 | 60.3837 | | 0.7149 | 2.67 | 3000 | 1.1953 | 11.2247 | 50.5921 | | 0.6719 | 3.11 | 3500 | 1.1541 | 10.4303 | 54.3776 | | 0.6265 | 3.56 | 4000 | 1.1186 | 13.3231 | 48.283 | | 0.6157 | 4.0 | 4500 | 1.0929 | 13.8467 | 46.569 | | 0.5736 | 4.44 | 5000 | 1.0848 | 14.2731 | 45.5035 | | 0.5683 | 4.89 | 5500 | 1.0752 | 14.601 | 45.8087 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.17.0 - Tokenizers 0.10.3
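No usage snippet accompanies the MarianMix_en-10 card. A minimal text2text-generation sketch, assuming the checkpoint loads from the repo id in this record (how target languages are selected is not documented), might be:

```python
# Hedged sketch: run the fine-tuned Marian checkpoint through the text2text pipeline.
from transformers import pipeline

translator = pipeline("text2text-generation", model="eldor-97/MarianMix_en-10")
print(translator("How are you today?", max_length=64)[0]["generated_text"])
```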
gabrieljg/wav2vec2-common_voice-es-demo
gabrieljg
2022-01-30T21:38:32Z
29
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "es", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-es-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-es-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset. It achieves the following results on the evaluation set: - Loss: 0.1788 - Wer: 1.0239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.02 | 100 | 6.6465 | 1.0 | | No log | 0.04 | 200 | 3.0150 | 1.0 | | No log | 0.05 | 300 | 2.8622 | 1.0003 | | No log | 0.07 | 400 | 0.9506 | 0.9771 | | 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 | | 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 | | 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 | | 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 | | 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 | | 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 | | 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 | | 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 | | 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 | | 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 | | 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 | | 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 | | 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 | | 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 | | 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 | | 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 | | 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 | | 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 | | 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 | | 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 | | 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 | | 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 | | 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 | | 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 | | 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 | | 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 | | 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 | | 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 | | 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 | | 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 | | 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 | | 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 | | 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 | | 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 | | 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 | | 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 | | 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 | | 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 | | 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 | | 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 | | 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 | | 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 | | 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 | | 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 | | 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 | | 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 | | 0.2347 | 0.92 | 5100 | 0.2087 | 
1.0433 | | 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 | | 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 | | 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 | | 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 | | 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 | | 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 | | 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 | | 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 | | 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 | | 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 | | 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 | | 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 | | 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 | | 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 | | 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 | | 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 | | 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 | | 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 | | 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 | | 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 | | 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 | | 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 | | 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 | | 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 | | 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 | | 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 | | 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 | | 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 | | 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 | | 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 | | 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 | | 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 | | 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 | | 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 | | 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 | | 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 | | 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 | | 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 | | 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 | | 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 | | 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 | | 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 | | 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 | | 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 | | 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 | | 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 | | 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 | | 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 | | 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 | | 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 | | 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 | | 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 | | 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 | | 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 | | 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 | | 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 | | 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 | | 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 | | 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 | | 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 | | 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 | | 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 | | 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 | | 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 | | 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 | | 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 | | 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 | | 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 | | 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 | | 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 | | 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 | | 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 | | 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 | | 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 | | 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 | | 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 | | 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 | | 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 | | 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 | | 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 | | 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 | | 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 
| | 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 | | 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 | | 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 | | 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 | | 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 | | 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 | | 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 | | 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 | | 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 | | 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 | | 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 | | 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 | | 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 | | 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 | | 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 | | 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 | | 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 | | 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 | | 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 | | 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 | | 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 | | 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 | | 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 | | 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 | | 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 | | 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 | | 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 | | 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 | | 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 | | 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 | | 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 | | 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 | | 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 | | 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 | | 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 | | 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 | | 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 | | 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 | | 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 | | 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 | | 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 | | 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 | | 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 | | 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 | | 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 | | 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 | | 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 | | 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 | | 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 | | 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 | | 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 | | 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 | | 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 | | 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 | | 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 | | 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 | | 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 | | 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 | | 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 | | 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 | | 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 | | 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 | | 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 | | 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 | | 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 | | 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 | | 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 | | 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 | | 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 | | 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 | | 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 | | 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 | | 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 | | 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 | | 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 | | 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 | | 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 | | 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 | | 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 | | 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 | | 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 | | 
0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 | | 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 | | 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 | | 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 | | 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 | | 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 | | 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 | | 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 | | 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 | | 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 | | 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 | | 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 | | 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 | | 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 | | 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 | | 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 | | 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 | | 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 | | 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 | | 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 | | 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 | | 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 | | 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 | | 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 | | 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 | | 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 | | 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 | | 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 | | 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 | | 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 | | 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 | | 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 | | 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 | | 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 | | 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 | | 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 | | 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 | | 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 | | 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 | | 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 | | 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 | | 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 | | 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 | | 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 | | 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 | | 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 | | 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 | | 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 | | 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 | | 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 | | 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 | | 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 | | 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 | | 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 | | 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 | | 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 | | 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 | | 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 | | 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 | | 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 | | 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 | | 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 | | 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 | | 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 | | 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 | | 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 | | 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 | | 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 | | 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 | | 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 | | 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 | | 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 | | 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 | | 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 | | 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 | | 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 | | 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 | | 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 | | 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 | | 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 | | 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 | | 0.1246 | 5.35 | 29600 | 
0.1730 | 1.0492 | | 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 | | 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 | | 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 | | 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 | | 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 | | 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 | | 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 | | 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 | | 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 | | 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 | | 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 | | 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 | | 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 | | 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 | | 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 | | 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 | | 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 | | 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 | | 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 | | 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 | | 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 | | 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 | | 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 | | 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 | | 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 | | 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 | | 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 | | 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 | | 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 | | 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 | | 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 | | 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 | | 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 | | 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 | | 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 | | 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 | | 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 | | 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 | | 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 | | 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 | | 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 | | 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 | | 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 | | 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 | | 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 | | 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 | | 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 | | 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 | | 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 | | 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 | | 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 | | 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 | | 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 | | 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 | | 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 | | 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 | | 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 | | 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 | | 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 | | 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 | | 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 | | 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 | | 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 | | 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 | | 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 | | 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 | | 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 | | 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 | | 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 | | 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 | | 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 | | 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 | | 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 | | 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 | | 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 | | 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 | | 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 | | 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 | | 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 | | 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 | | 0.1102 | 6.82 | 37700 | 0.1639 | 
1.0140 | | 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 | | 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 | | 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 | | 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 | | 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 | | 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 | | 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 | | 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 | | 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 | | 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 | | 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 | | 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 | | 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 | | 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 | | 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 | | 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 | | 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 | | 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 | | 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 | | 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 | | 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 | | 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 | | 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 | | 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 | | 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 | | 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 | | 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 | | 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 | | 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 | | 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 | | 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 | | 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 | | 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 | | 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 | | 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 | | 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 | | 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 | | 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 | | 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 | | 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 | | 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 | | 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 | | 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 | | 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 | | 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 | | 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 | | 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 | | 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 | | 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 | | 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 | | 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 | | 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 | | 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 | | 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 | | 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 | | 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 | | 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 | | 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 | | 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 | | 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 | | 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 | | 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 | | 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 | | 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 | | 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 | | 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 | | 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 | | 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 | | 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 | | 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 | | 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 | | 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 | | 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 | | 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 | | 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 | | 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 | | 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 | | 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 | | 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 | | 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 | | 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 
| | 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 | | 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 | | 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 | | 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 | | 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 | | 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 | | 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 | | 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 | | 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 | | 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 | | 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 | | 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 | | 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 | | 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 | | 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 | | 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 | | 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 | | 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 | | 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 | | 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 | | 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 | | 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 | | 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 | | 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 | | 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 | | 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 | | 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 | | 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 | | 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 | | 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 | | 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 | | 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 | | 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 | | 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 | | 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 | | 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 | | 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 | | 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 | | 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 | | 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 | | 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 | | 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 | | 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 | | 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 | | 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 | | 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 | | 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 | | 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 | | 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 | | 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 | | 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 | | 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 | | 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 | | 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 | | 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 | | 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 | | 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 | | 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 | | 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 | | 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 | | 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 | | 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 | | 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 | | 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 | | 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 | | 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 | | 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 | | 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 | | 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 | | 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 | | 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 | | 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 | | 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 | | 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 | | 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 | | 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 | | 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 | | 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 | | 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 | | 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 | | 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 | | 
0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 | | 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 | | 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 | | 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 | | 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 | | 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 | | 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 | | 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 | | 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 | | 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 | | 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 | | 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 | | 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 | | 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 | | 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 | | 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 | | 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 | | 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 | | 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 | | 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 | | 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 | | 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 | | 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 | | 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 | | 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 | | 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 | | 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 | | 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 | | 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 | | 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 | | 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 | | 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 | | 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 | | 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 | | 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 | | 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 | | 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 | | 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 | | 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 | | 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 | | 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 | | 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 | | 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 | | 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 | | 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 | | 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 | | 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 | | 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 | | 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 | | 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 | | 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 | | 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 | | 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 | | 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 | | 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 | | 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 | | 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 | | 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 | | 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 | | 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 | | 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 | | 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 | | 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 | | 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 | | 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 | | 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 | | 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 | | 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 | | 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 | | 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 | | 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 | | 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 | | 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 | | 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 | | 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 | | 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 | | 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 | | 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 | | 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 | | 0.0587 | 11.2 | 61900 | 
0.1638 | 0.9797 | | 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 | | 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 | | 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 | | 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 | | 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 | | 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 | | 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 | | 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 | | 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 | | 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 | | 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 | | 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 | | 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 | | 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 | | 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 | | 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 | | 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 | | 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 | | 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 | | 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 | | 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 | | 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 | | 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 | | 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 | | 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 | | 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 | | 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 | | 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 | | 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 | | 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 | | 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 | | 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 | | 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 | | 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 | | 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 | | 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 | | 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 | | 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 | | 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 | | 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 | | 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 | | 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 | | 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 | | 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 | | 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 | | 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 | | 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 | | 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 | | 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 | | 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 | | 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 | | 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 | | 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 | | 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 | | 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 | | 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 | | 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 | | 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 | | 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 | | 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 | | 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 | | 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 | | 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 | | 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 | | 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 | | 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 | | 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 | | 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 | | 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 | | 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 | | 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 | | 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 | | 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 | | 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 | | 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 | | 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 | | 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 | | 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 | | 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 
| | 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 | | 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 | | 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 | | 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 | | 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 | | 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 | | 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 | | 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 | | 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 | | 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 | | 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 | | 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 | | 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 | | 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 | | 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 | | 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 | | 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 | | 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 | | 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 | | 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 | | 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 | | 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 | | 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 | | 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 | | 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 | | 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 | | 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 | | 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 | | 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 | | 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 | | 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 | | 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 | | 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 | | 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 | | 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 | | 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 | | 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 | | 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 | | 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 | | 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 | | 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 | | 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 | | 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 | | 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 | | 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 | | 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 | | 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 | | 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 | | 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 | | 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 | | 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 | | 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 | | 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 | | 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 | | 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 | | 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 | | 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 | | 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 | | 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 | | 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 | | 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 | | 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 | | 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 | | 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 | | 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 | | 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 | | 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 | | 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 | | 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 | | 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 | | 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 | | 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 | | 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 | | 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 | | 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 | | 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 | | 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 | | 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 | | 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 | | 0.041 | 
14.07 | 77800 | 0.1777 | 1.0222 | | 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 | | 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 | | 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 | | 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 | | 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 | | 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 | | 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 | | 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 | | 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 | | 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 | | 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 | | 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 | | 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 | | 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 | | 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 | | 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 | | 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 | | 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 | | 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 | | 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 | | 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 | | 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 | | 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 | | 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 | | 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 | | 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 | | 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 | | 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 | | 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 | | 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 | | 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 | | 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 | | 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 | | 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 | | 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 | | 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 | | 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 | | 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 | | 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 | | 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 | | 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 | | 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 | | 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 | | 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 | | 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 | | 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 | | 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 | | 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 | | 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 | | 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 | | 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
fgaim/t5-small-squad-v2
fgaim
2022-01-30T21:35:54Z
34
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:c4", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en datasets: - c4 - squad tags: - text2text-generation widget: - text: "question: What is the atomic number for oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." - text: "question: What is the chemical symbol of Oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." license: apache-2.0 --- T5-small for QA --- [Google's T5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) pre-trained on the [C4](https://huggingface.co/datasets/c4) dataset, fine-tuned for Question-Answering on [SQuAD v2](https://huggingface.co/datasets/squad_v2) with the following hyperparameters: ``` optimizer=adamw_hf learning_rate=3e-5 adam_beta1=0.9 adam_beta2=0.999 adam_epsilon=1e-08 num_train_epochs=2 per_device_train_batch_size=12 ``` Usage --- The input [context and question] has to be prepared in a specific way as follows: ```python from transformers import pipeline def prep_input(_context, _question): return " ".join(["question:", _question.strip(), "context:", _context.strip()]) t5qa = pipeline("text2text-generation", "fgaim/t5-small-squad-v2") context = """ Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O. """ t5qa(prep_input(context, "How many atoms combine to form dioxygen?")) # [{'generated_text': 'two'}] t5qa(prep_input(context, "What element makes up almost half of the earth's crust by mass?")) # [{'generated_text': 'oxygen'}] t5qa(prep_input(context, "What are the most abundent elements of the universe by mass?")) # [{'generated_text': 'hydrogen and helium'}] ```
huggingtweets/newsfrmhome
huggingtweets
2022-01-30T20:50:52Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/newsfrmhome/1643575848331/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484642358641807369/XYfGxtPs_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sarah (allegedly)</div> <div style="text-align: center; font-size: 14px;">@newsfrmhome</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sarah (allegedly). | Data | sarah (allegedly) | | --- | --- | | Tweets downloaded | 3229 | | Retweets | 448 | | Short tweets | 378 | | Tweets kept | 2403 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kr9qjmz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @newsfrmhome's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/newsfrmhome') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
osama7/t5-summarization-multinews
osama7
2022-01-30T20:42:51Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This is a t5-base model trained on the multi_news dataset for abstractive summarization.
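As a sketch only (the card shows no usage; the `summarize:` prefix is the standard T5 convention and an assumption here), inference could look like:

```python
# Hedged sketch: summarize an article with the fine-tuned T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="osama7/t5-summarization-multinews")
article = "Your long news article goes here ..."  # placeholder input
print(summarizer("summarize: " + article, max_length=150)[0]["generated_text"])
```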
gagan3012/xls-r-300m-hi
gagan3012
2022-01-30T20:39:40Z
10
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: xls-r-300m-hi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-hi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.7522 - Wer: 1.0091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0417 | 2.59 | 500 | 5.1484 | 1.0 | | 3.3722 | 5.18 | 1000 | 3.3380 | 1.0001 | | 1.9752 | 7.77 | 1500 | 1.3910 | 1.0074 | | 1.5868 | 10.36 | 2000 | 1.0298 | 1.0084 | | 1.4413 | 12.95 | 2500 | 0.9313 | 1.0175 | | 1.3296 | 15.54 | 3000 | 0.8966 | 1.0194 | | 1.2746 | 18.13 | 3500 | 0.8875 | 1.0097 | | 1.2147 | 20.73 | 4000 | 0.8746 | 1.0089 | | 1.1774 | 23.32 | 4500 | 0.8383 | 1.0198 | | 1.129 | 25.91 | 5000 | 0.7848 | 1.0167 | | 1.0995 | 28.5 | 5500 | 0.7992 | 1.0210 | | 1.0665 | 31.09 | 6000 | 0.7878 | 1.0107 | | 1.0321 | 33.68 | 6500 | 0.7653 | 1.0082 | | 1.0068 | 36.27 | 7000 | 0.7635 | 1.0065 | | 0.9916 | 38.86 | 7500 | 0.7728 | 1.0090 | | 0.9735 | 41.45 | 8000 | 0.7688 | 1.0070 | | 0.9745 | 44.04 | 8500 | 0.7455 | 1.0097 | | 0.9677 | 46.63 | 9000 | 0.7605 | 1.0099 | | 0.9313 | 49.22 | 9500 | 0.7527 | 1.0097 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
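For readers who want to reproduce the run, the hyperparameter list in the card maps roughly onto `transformers.TrainingArguments` as sketched below; the mapping is my reading of the card, not code taken from it.

```python
# Hedged sketch: approximate TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xls-r-300m-hi",      # placeholder output path
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 x 4 = total train batch size of 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```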
z-uo/vits-male-it
z-uo
2022-01-30T20:20:35Z
4
1
transformers
[ "transformers", "tensorboard", "text-to-speech", "it", "dataset:z-uo/female-LJSpeech-italian", "endpoints_compatible", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - text-to-speech language: - it model-index: - name: vits-male-it results: [] datasets: - z-uo/female-LJSpeech-italian --- # Coqui Model for TTS ``` pip install TTS git clone https://huggingface.co/z-uo/vits-male-it # predict one tts --text "ciao pluto" --model_path "vits-male-it/best_model.pth.tar" --config_path "vits-male-it/config.json" # predict server tts-server --model_path "vits-male-it/best_model.pth.tar" --config_path "vits-male-it/config.json" firefox localhost:5002 ``` More information about the training script is in [this repo](https://github.com/nicolalandro/train_coqui_tts_ita).
Sindhu/rembert-squad2
Sindhu
2022-01-30T18:35:08Z
5
3
transformers
[ "transformers", "pytorch", "rembert", "question-answering", "multilingual", "dataset:squad2", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - multilingual tags: - question-answering datasets: - squad2 metrics: - squad2 --- # Rembert Squad2 This model is fine-tuned for the QA task on SQuAD2 from the [Rembert checkpoint](https://huggingface.co/google/rembert). ## Hyperparameters ``` Batch Size: 4 Grad Accumulation Steps = 8 Total epochs = 3 MLM Checkpoint = "rembert" max_seq_len = 256 learning_rate = 1e-5 lr_schedule = LinearWarmup warmup_ratio = 0.1 doc_stride = 128 ``` ## Squad 2 Evaluation stats: Metrics generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) ```json { "exact": 84.51107554956624, "f1": 87.46644042781853, "total": 11873, "HasAns_exact": 80.97165991902834, "HasAns_f1": 86.89086491219469, "HasAns_total": 5928, "NoAns_exact": 88.04037005887301, "NoAns_f1": 88.04037005887301, "NoAns_total": 5945 } ``` For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man)
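A minimal question-answering sketch (assuming the checkpoint loads through the standard pipeline; the example question is mine, not from the card):

```python
# Hedged sketch: extractive QA with the fine-tuned RemBERT checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/rembert-squad2")
result = qa(
    question="What is the atomic number of oxygen?",
    context="Oxygen is a chemical element with symbol O and atomic number 8.",
)
print(result["answer"], result["score"])
```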
anuragshas/wav2vec2-xls-r-1b-hi-cv8
anuragshas
2022-01-30T15:20:16Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.6780 - Wer: 0.3670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.514 | 2.07 | 400 | 1.4589 | 0.8531 | | 1.4289 | 4.15 | 800 | 0.8940 | 0.6475 | | 1.276 | 6.22 | 1200 | 0.7743 | 0.6089 | | 1.2213 | 8.29 | 1600 | 0.6919 | 0.4973 | | 1.1522 | 10.36 | 2000 | 0.6635 | 0.4588 | | 1.0914 | 12.44 | 2400 | 0.6839 | 0.4586 | | 1.0499 | 14.51 | 2800 | 0.7151 | 0.4467 | | 1.0238 | 16.58 | 3200 | 0.6824 | 0.4436 | | 0.9963 | 18.65 | 3600 | 0.6872 | 0.4437 | | 0.9728 | 20.73 | 4000 | 0.7047 | 0.4244 | | 0.9373 | 22.8 | 4400 | 0.6569 | 0.4189 | | 0.9028 | 24.87 | 4800 | 0.6623 | 0.4094 | | 0.8759 | 26.94 | 5200 | 0.6723 | 0.4152 | | 0.8824 | 29.02 | 5600 | 0.6467 | 0.4017 | | 0.8371 | 31.09 | 6000 | 0.6911 | 0.4080 | | 0.8205 | 33.16 | 6400 | 0.7145 | 0.4063 | | 0.7837 | 35.23 | 6800 | 0.7037 | 0.3930 | | 0.7708 | 37.31 | 7200 | 0.6925 | 0.3840 | | 0.7359 | 39.38 | 7600 | 0.7034 | 0.3829 | | 0.7153 | 41.45 | 8000 | 0.7030 | 0.3794 | | 0.7127 | 43.52 | 8400 | 0.6823 | 0.3761 | | 0.6884 | 45.6 | 8800 | 0.6854 | 0.3711 | | 0.6835 | 47.67 | 9200 | 0.6723 | 0.3665 | | 0.6703 | 49.74 | 9600 | 0.6773 | 0.3668 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
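As a lower-level alternative to the ASR pipeline, a greedy CTC decode is sketched below, under the assumption that the repo ships the usual wav2vec2 processor files; the silent input is a stand-in for real 16 kHz speech.

```python
# Hedged sketch: greedy CTC decoding with the fine-tuned XLS-R checkpoint.
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-xls-r-1b-hi-cv8"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech = torch.zeros(16000).numpy()  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```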
huggingtweets/sardoche_lol
huggingtweets
2022-01-30T15:00:56Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sardoche_lol/1643554725712/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1450594532186263560/hiL4EyAm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sardoche</div> <div style="text-align: center; font-size: 14px;">@sardoche_lol</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sardoche. | Data | Sardoche | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 242 | | Short tweets | 374 | | Tweets kept | 2633 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24g273w4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sardoche_lol's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3k2srh5a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3k2srh5a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sardoche_lol') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sshasnain/finetune-wav2vec2-large-xlsr-bengali
sshasnain
2022-01-30T07:55:29Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "bn", "audio", "speech", "dataset:custom", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: finetune-wav2vec2-large-xlsr-bengali
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: custom
      type: custom
      args: ben
    metrics:
    - name: Test WER
      type: wer
      value: 0.011
---

# finetune-wav2vec2-large-xlsr-bengali

***

## Usage

***
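The Usage section above is empty in the original card, so the following is only a hedged sketch of standard Wav2Vec2 CTC inference with this checkpoint. The audio file name is illustrative and the 16 kHz input rate is an assumption.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "sshasnain/finetune-wav2vec2-large-xlsr-bengali"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio clip and resample to 16 kHz ("example_bn.wav" is a placeholder path).
speech, sr = torchaudio.load("example_bn.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```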
pinecone/mpnet-retriever-discourse
pinecone
2022-01-30T07:23:58Z
4
2
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "question-answering", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- question-answering
---

# MPNet Retriever (Discourse)

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('pinecone/mpnet-retriever-discourse')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Training

The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters:
```
{'batch_size': 12}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/).
jcmc/wav2vec-1b-cv8-ir-n
jcmc
2022-01-30T07:16:19Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ga-IE license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 0.9810 - Wer: 0.4761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.2427 | 15.15 | 500 | 1.4632 | 0.9481 | | 1.3128 | 30.3 | 1000 | 0.8662 | 0.6195 | | 0.9403 | 45.45 | 1500 | 0.8163 | 0.5169 | | 0.6868 | 60.61 | 2000 | 0.8661 | 0.4858 | | 0.563 | 75.76 | 2500 | 0.9447 | 0.4867 | | 0.4887 | 90.91 | 3000 | 0.9650 | 0.4823 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
huggingtweets/hashimoto_lo
huggingtweets
2022-01-30T01:43:17Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hashimoto_lo/1643506993033/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/922396157493383169/LLKd_U72_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">橋下徹</div> <div style="text-align: center; font-size: 14px;">@hashimoto_lo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 橋下徹. | Data | 橋下徹 | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 759 | | Short tweets | 137 | | Tweets kept | 2351 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wi9n714/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hashimoto_lo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/240mb7l6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/240mb7l6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hashimoto_lo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/shiikazuo
huggingtweets
2022-01-30T01:27:28Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/shiikazuo/1643506044134/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/3624876884/b16d250401cc357c5be9859f7ba3db8f_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">志位和夫</div> <div style="text-align: center; font-size: 14px;">@shiikazuo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 志位和夫. | Data | 志位和夫 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 38 | | Short tweets | 35 | | Tweets kept | 3176 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/243t6rzm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shiikazuo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/shiikazuo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tjonthefloor
huggingtweets
2022-01-29T22:53:02Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tjonthefloor/1643496777814/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1466388620256948228/kkRWm2mR_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ash ψ</div> <div style="text-align: center; font-size: 14px;">@tjonthefloor</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ash ψ. | Data | ash ψ | | --- | --- | | Tweets downloaded | 470 | | Retweets | 144 | | Short tweets | 99 | | Tweets kept | 227 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20bqlhah/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tjonthefloor's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ntjhfs1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ntjhfs1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tjonthefloor') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Adil617/wav2vec2-base-timit-demo-colab
Adil617
2022-01-29T21:05:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9314 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.686 | 0.16 | 20 | 13.6565 | 1.0 | | 8.0711 | 0.32 | 40 | 12.5379 | 1.0 | | 6.9967 | 0.48 | 60 | 9.7215 | 1.0 | | 5.2368 | 0.64 | 80 | 5.8459 | 1.0 | | 3.4499 | 0.8 | 100 | 3.3413 | 1.0 | | 3.1261 | 0.96 | 120 | 3.2858 | 1.0 | | 3.0654 | 1.12 | 140 | 3.1945 | 1.0 | | 3.0421 | 1.28 | 160 | 3.1296 | 1.0 | | 3.0035 | 1.44 | 180 | 3.1172 | 1.0 | | 3.0067 | 1.6 | 200 | 3.1217 | 1.0 | | 2.9867 | 1.76 | 220 | 3.0715 | 1.0 | | 2.9653 | 1.92 | 240 | 3.0747 | 1.0 | | 2.9629 | 2.08 | 260 | 2.9984 | 1.0 | | 2.9462 | 2.24 | 280 | 2.9991 | 1.0 | | 2.9391 | 2.4 | 300 | 3.0391 | 1.0 | | 2.934 | 2.56 | 320 | 2.9682 | 1.0 | | 2.9193 | 2.72 | 340 | 2.9701 | 1.0 | | 2.8985 | 2.88 | 360 | 2.9314 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Harveenchadha/vakyansh-wav2vec2-hindi-him-4200
Harveenchadha
2022-01-29T06:03:43Z
25,050
5
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "hi", "arxiv:2107.07402", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language: hi
#datasets:
#- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: mit
model-index:
- name: Wav2Vec2 Vakyansh Hindi Model by Harveen Chadha
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice hi
      type: common_voice
      args: hi
    metrics:
    - name: Test WER
      type: wer
      value: 33.17
---

## Spaces Demo

Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main)

## Pretrained Model

Fine-tuned on the multilingual pretrained model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.

**Note: The result from this model is without a language model, so you may witness a higher WER in some cases.**

## Dataset

This model was trained on 4200 hours of Hindi labelled data. The labelled data is not in the public domain as of now.

## Training Script

Models were trained using the experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).

In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha).

## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb)

## Usage

The model can be used directly (without a language model) as follows:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse


def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```

## Evaluation

The model can be evaluated as follows on the Hindi test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 33.17 %

[**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb)

## Credits

Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic languages.
k-partha/decision_style_bert_bio
k-partha
2022-01-29T03:36:37Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2109.06402", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Rates Twitter biographies on decision-making preference: Judging (a focused, goal-oriented decision strategy) or Prospecting (an open-ended, explorative strategy). Roughly corresponds to [conscientiousness](https://en.wikipedia.org/wiki/Conscientiousness).

Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit compute!

Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun!

Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
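For use outside the hosted widget, a minimal programmatic sketch (the example biography below is invented for illustration; read the class probabilities as a continuous score, as the card advises):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="k-partha/decision_style_bert_bio",
    return_all_scores=True,  # return both class scores so they can be read as a continuous preference
)

bio = "Planner at heart. Maker of to-do lists. Happiest when the calendar is color-coded."  # illustrative
print(clf(bio))
```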
k-partha/extrabert_bio
k-partha
2022-01-29T03:36:11Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2109.06402", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Classifies Twitter biographies as written by either introverts or extroverts.

Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit compute!

Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun!

Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert

Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
k-partha/curiosity_bert_bio
k-partha
2022-01-29T03:35:48Z
10
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2109.06402", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Labels Twitter biographies on [Openness](https://en.wikipedia.org/wiki/Openness_to_experience), strongly related to intellectual curiosity.

- Intuitive: Associated with higher intellectual curiosity
- Sensing: Associated with lower intellectual curiosity

Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit compute!

Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun!

Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
facebook/tts_transformer-ru-cv7_css10
facebook
2022-01-28T23:28:04Z
105
13
fairseq
[ "fairseq", "audio", "text-to-speech", "ru", "dataset:common_voice", "dataset:css10", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: ru datasets: - common_voice - css10 widget: - text: "Здравствуйте, это пробный запуск." example_title: "Hello, this is a test run." --- # tts_transformer-ru-cv7_css10 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Russian - Single-speaker male voice - Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-ru-cv7_css10", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Здравствуйте, это пробный запуск." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/fastspeech2-en-ljspeech
facebook
2022-01-28T23:25:24Z
2,168
268
fairseq
[ "fairseq", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:2006.04558", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: en datasets: - ljspeech widget: - text: "Hello, this is a test run." example_title: "Hello, this is a test run." --- # fastspeech2-en-ljspeech [FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - Single-speaker female voice - Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
Kneecapsnatcher/Unon
Kneecapsnatcher
2022-01-28T21:21:10Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: bsd-2-clause ---
anjulRajendraSharma/wavlm-base-libri-clean-100
anjulRajendraSharma
2022-01-28T16:52:47Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "librispeech_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer model-index: - name: wavlm-libri-clean-100h-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-libri-clean-100h-base This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0955 - Wer: 0.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8664 | 0.17 | 300 | 2.8439 | 1.0 | | 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 | | 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 | | 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 | | 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 | | 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 | | 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 | | 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 | | 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 | | 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 | | 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.0 - Tokenizers 0.10.3
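As a usage sketch only (nothing below is taken from the auto-generated card): the checkpoint should load through the standard ASR pipeline, and a quick WER spot-check on a few LibriSpeech clean validation samples could look like this. The dataset/metric loading follows the `datasets` 1.x API used elsewhere on this page.

```python
from datasets import load_dataset, load_metric
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anjulRajendraSharma/wavlm-base-libri-clean-100")

# Small slice only, as an illustrative sanity check rather than a full evaluation.
ds = load_dataset("librispeech_asr", "clean", split="validation[:8]")
wer = load_metric("wer")

predictions = [asr(sample["audio"]["array"])["text"].lower() for sample in ds]
references = [sample["text"].lower() for sample in ds]
print("WER: {:.4f}".format(wer.compute(predictions=predictions, references=references)))
```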
anjulRajendraSharma/WavLm-base-en
anjulRajendraSharma
2022-01-28T16:40:52Z
58
0
transformers
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "english_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - english_asr - generated_from_trainer model-index: - name: wavlm-base-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-base-english This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the english_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0955 - Wer: 0.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8664 | 0.17 | 300 | 2.8439 | 1.0 | | 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 | | 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 | | 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 | | 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 | | 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 | | 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 | | 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 | | 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 | | 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 | | 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.0 - Tokenizers 0.10.3
alperiox/autonlp-user-review-classification-536415182
alperiox
2022-01-28T16:30:08Z
9
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:alperiox/autonlp-data-user-review-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - alperiox/autonlp-data-user-review-classification co2_eq_emissions: 1.268309634217171 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 536415182 - CO2 Emissions (in grams): 1.268309634217171 ## Validation Metrics - Loss: 0.44733062386512756 - Accuracy: 0.8873239436619719 - Macro F1: 0.8859416445623343 - Micro F1: 0.8873239436619719 - Weighted F1: 0.8864646766540891 - Macro Precision: 0.8848522167487685 - Micro Precision: 0.8873239436619719 - Weighted Precision: 0.8883299798792756 - Macro Recall: 0.8908045977011494 - Micro Recall: 0.8873239436619719 - Weighted Recall: 0.8873239436619719 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Rocketknight1/distilgpt2-finetuned-wikitext2
Rocketknight1
2022-01-28T13:23:20Z
14
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8577 - Validation Loss: 3.6752 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8577 | 3.6752 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
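The card stops at training metrics; as a minimal text-generation sketch, assuming the TF weights load with `TFAutoModelForCausalLM` (the prompt and sampling settings are illustrative):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "Rocketknight1/distilgpt2-finetuned-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of natural language processing", return_tensors="tf")
outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_length=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```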
Maniac/wav2vec2-xls-r-60-urdu
Maniac
2022-01-28T13:03:37Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ur", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ur license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 3.8433 - Wer: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.468 | 166.67 | 500 | 3.0262 | 1.0035 | | 0.0572 | 333.33 | 1000 | 3.5352 | 0.9721 | | 0.0209 | 500.0 | 1500 | 3.7266 | 0.9834 | | 0.0092 | 666.67 | 2000 | 3.8433 | 0.9852 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
peterhsu/tf_bert-finetuned-ner
peterhsu
2022-01-28T12:52:36Z
3
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tf_bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf_bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0272 - Validation Loss: 0.0522 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1727 | 0.0673 | 0 | | 0.0462 | 0.0541 | 1 | | 0.0272 | 0.0522 | 2 | ### Framework versions - Transformers 4.16.0 - TensorFlow 2.7.0 - Datasets 1.18.1 - Tokenizers 0.11.0
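The card does not say which dataset or label set was used, so the entity types in the output depend on how the model's label mapping was configured. As a hedged sketch, the checkpoint should still load through the token-classification pipeline (the `framework="tf"` flag matches the TF weights in this repository; the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="peterhsu/tf_bert-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",  # group word pieces into whole entity spans
)

print(ner("My name is Wolfgang and I live in Berlin."))
```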
huggingtweets/cobie-coinerstakingls
huggingtweets
2022-01-28T11:19:03Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/cobie-coinerstakingls/1643368738479/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394891459900231689/xXdX3yWP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1471649307887558661/SpH6Dho7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Crypto Bros Taking Ls & Cobie</div> <div style="text-align: center; font-size: 14px;">@cobie-coinerstakingls</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Crypto Bros Taking Ls & Cobie. | Data | Crypto Bros Taking Ls | Cobie | | --- | --- | --- | | Tweets downloaded | 566 | 3248 | | Retweets | 94 | 93 | | Short tweets | 222 | 500 | | Tweets kept | 250 | 2655 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gjf29z1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cobie-coinerstakingls's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c8xc9umf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c8xc9umf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cobie-coinerstakingls') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
pitehu/T5_NER_CONLL_ENTITYREPLACE
pitehu
2022-01-28T11:05:16Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:CoNLL-2003", "arxiv:2111.10952", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language:
- en
license: "apache-2.0"
datasets:
- CoNLL-2003
metrics:
- F1
---

This is a T5-small model fine-tuned on the CoNLL-2003 dataset for named entity recognition (NER).

Example input and output:

"Recognize all the named entities in this sequence (replace named entities with one of [PER], [ORG], [LOC], [MISC]): When Alice visited New York" → "When PER visited LOC LOC"

Evaluation results:

Percentage of complete matches (for comparison with ExT5: https://arxiv.org/pdf/2111.10952.pdf):

| Model | ExT5_{Base} | This Model | T5_NER_CONLL_OUTPUTLIST |
| :---: | :---: | :---: | :---: |
| % of Complete Match | 86.53 | 79.03 | TBA |

Some outputs (212/3453, or 6.14%) do not have the same length as the input.

F1 score on the test set, restricted to outputs with matching length:

| Model | This Model | T5_NER_CONLL_OUTPUTLIST | BERTbase |
| :---: | :---: | :---: | :---: |
| F1 | 0.8901 | 0.8691 | 0.9240 |

**Caveat:** These test sets are not identical because of the length-mismatch issue: T5_NER_CONLL_OUTPUTLIST has only 27/3453 (0.78%) length mismatches, and the BERT number is taken directly from their paper (https://arxiv.org/pdf/1810.04805.pdf).
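Since the card spells out the exact prompt format, a minimal generation sketch (the prompt reuses the card's own example; `max_length` is an arbitrary illustrative choice):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "pitehu/T5_NER_CONLL_ENTITYREPLACE"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

prompt = (
    "Recognize all the named entities in this sequence "
    "(replace named entities with one of [PER], [ORG], [LOC], [MISC]): "
    "When Alice visited New York"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected output along the lines of: "When PER visited LOC LOC"
```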
google/vit-large-patch32-224-in21k
google
2022-01-28T10:21:30Z
1,295
1
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-feature-extraction", "vision", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "region:us" ]
image-feature-extraction
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
inference: false
---

# Vision Transformer (large-sized model)

Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.

Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.

Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.

Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).

By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name matches this repository (large model, 32x32 patches, ImageNet-21k).
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-224-in21k')
model = ViTModel.from_pretrained('google/vit-large-patch32-224-in21k')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```

Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.

## Training data

The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
microsoft/beit-large-patch16-512
microsoft
2022-01-28T10:20:07Z
824
9
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 512x512. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-512') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-512') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
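To go from the raw logits in the snippet above to ranked class probabilities, a small extension such as the following can be used (a sketch; the choice of top-5 is arbitrary):

```python
import torch
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-512')
model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-512')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the 1,000 ImageNet classes, then keep the 5 most likely labels
probs = logits.softmax(dim=-1)[0]
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```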
microsoft/beit-large-patch16-384
microsoft
2022-01-28T10:19:50Z
242
0
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-384') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
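The model description above mentions mean-pooling the final hidden states of the patches as an alternative way to obtain an image-level representation. A minimal sketch of that idea using the plain `BeitModel` backbone (the pooling choice here is illustrative, not a prescribed recipe):

```python
import torch
from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-384')
backbone = BeitModel.from_pretrained('microsoft/beit-large-patch16-384')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**inputs).last_hidden_state  # (batch, 1 + num_patches, hidden_size)

# Drop the [CLS] token and mean-pool the patch embeddings into a single image embedding
image_embedding = hidden[:, 1:, :].mean(dim=1)
print(image_embedding.shape)  # (1, hidden_size)
```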
microsoft/beit-base-patch16-384
microsoft
2022-01-28T10:19:30Z
409
5
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (base-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384') model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
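If you want to fine-tune this checkpoint on your own label set rather than the 1,000 ImageNet classes, a common pattern is to reload it with a freshly sized classification head. The sketch below assumes a hypothetical 3-class dataset; the label names are placeholders:

```python
from transformers import BeitFeatureExtractor, BeitForImageClassification

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')

# Placeholder labels for an illustrative 3-class task
id2label = {0: "cat", 1: "dog", 2: "other"}
label2id = {v: k for k, v in id2label.items()}

# Replace the 1,000-way ImageNet head with a randomly initialized 3-way head
model = BeitForImageClassification.from_pretrained(
    'microsoft/beit-base-patch16-384',
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # drops the old classifier weights
)
# From here, train with a standard PyTorch loop or the transformers Trainer.
```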
hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level
hrdipto
2022-01-28T09:54:27Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-shuru-word-level results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru-word-level This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0504 - Wer: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 23.217 | 23.81 | 500 | 1.3437 | 0.6859 | | 1.1742 | 47.62 | 1000 | 1.0397 | 0.6859 | | 1.0339 | 71.43 | 1500 | 1.0155 | 0.6859 | | 0.9909 | 95.24 | 2000 | 1.0504 | 0.6859 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
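The card does not include a usage example. As a rough sketch (assuming a standard CTC checkpoint and 16 kHz mono audio, which is not documented above), transcription could look like this; the audio path is a placeholder and the pipeline needs ffmpeg available to read files:

```python
from transformers import pipeline

# Assumes 16 kHz mono audio; resample beforehand if necessary
asr = pipeline(
    "automatic-speech-recognition",
    model="hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level",
)
print(asr("sample.wav"))  # "sample.wav" is a placeholder path
```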
hyyoka/wav2vec2-xlsr-korean-senior
hyyoka
2022-01-28T06:08:19Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "kr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: kr datasets: - aihub 자유대화 음성(노인남녀) tags: - automatic-speech-recognition license: apache-2.0 --- # wav2vec2-xlsr-korean-senior Futher fine-tuned [fleek/wav2vec-large-xlsr-korean](https://huggingface.co/fleek/wav2vec-large-xlsr-korean) using the [AIhub 자유대화 음성(노인남녀)](https://aihub.or.kr/aidata/30704). - Total train data size: 808,642 - Total vaild data size: 159,970 When using this model, make sure that your speech input is sampled at 16kHz. The script used for training can be found here: https://github.com/hyyoka/wav2vec2-korean-senior ### Inference ``` py import torchaudio from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import re def clean_up(transcription): hangul = re.compile('[^ ㄱ-ㅣ가-힣]+') result = hangul.sub('', transcription) return result model_name "hyyoka/wav2vec2-xlsr-korean-senior" processor = Wav2Vec2Processor.from_pretrained(model_name) model = Wav2Vec2ForCTC.from_pretrained(model_name) speech_array, sampling_rate = torchaudio.load(wav_file) feat = processor(speech_array[0], sampling_rate=16000, padding=True, max_length=800000, truncation=True, return_attention_mask=True, return_tensors="pt", pad_token_id=49 ) input = {'input_values': feat['input_values'],'attention_mask':feat['attention_mask']} outputs = model(**input, output_attentions=True) logits = outputs.logits predicted_ids = logits.argmax(axis=-1) transcription = processor.decode(predicted_ids[0]) stt_result = clean_up(transcription) ```
---
language: kr
datasets:
- aihub 자유대화 음성(노인남녀)
tags:
- automatic-speech-recognition
license: apache-2.0
---

# wav2vec2-xlsr-korean-senior

Further fine-tuned [fleek/wav2vec-large-xlsr-korean](https://huggingface.co/fleek/wav2vec-large-xlsr-korean) using the [AIhub 자유대화 음성(노인남녀)](https://aihub.or.kr/aidata/30704).

- Total train data size: 808,642
- Total valid data size: 159,970

When using this model, make sure that your speech input is sampled at 16kHz.

The script used for training can be found here: https://github.com/hyyoka/wav2vec2-korean-senior

### Inference

```py
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import re

def clean_up(transcription):
    hangul = re.compile('[^ ㄱ-ㅣ가-힣]+')
    result = hangul.sub('', transcription)
    return result

model_name = "hyyoka/wav2vec2-xlsr-korean-senior"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

wav_file = "sample.wav"  # placeholder path to a 16 kHz speech file
speech_array, sampling_rate = torchaudio.load(wav_file)
feat = processor(speech_array[0],
                 sampling_rate=16000,
                 padding=True,
                 max_length=800000,
                 truncation=True,
                 return_attention_mask=True,
                 return_tensors="pt",
                 pad_token_id=49)

input = {'input_values': feat['input_values'], 'attention_mask': feat['attention_mask']}

outputs = model(**input, output_attentions=True)
logits = outputs.logits
predicted_ids = logits.argmax(axis=-1)
transcription = processor.decode(predicted_ids[0])
stt_result = clean_up(transcription)
```
huggingtweets/coinerstakingls-elonmusk-tyler
huggingtweets
2022-01-28T05:27:03Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/coinerstakingls-elonmusk-tyler/1643347618705/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1474910968157249536/FS8-70Ie_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394891459900231689/xXdX3yWP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1439959943067709448/Z-Dsp_Ge_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Crypto Bros Taking Ls & Tyler Winklevoss</div> <div style="text-align: center; font-size: 14px;">@coinerstakingls-elonmusk-tyler</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & Crypto Bros Taking Ls & Tyler Winklevoss. | Data | Elon Musk | Crypto Bros Taking Ls | Tyler Winklevoss | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 566 | 3248 | | Retweets | 163 | 94 | 1550 | | Short tweets | 930 | 222 | 357 | | Tweets kept | 2157 | 250 | 1341 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mpyx1oz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coinerstakingls-elonmusk-tyler's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mnlaoaj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mnlaoaj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/coinerstakingls-elonmusk-tyler') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/bitfinexed-coinerstakingls-xeni
huggingtweets
2022-01-28T04:55:36Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/bitfinexed-coinerstakingls-xeni/1643345731503/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394891459900231689/xXdX3yWP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415442891015610370/1qyYwuHx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1357462788130578434/6ZRnYvCW_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Crypto Bros Taking Ls & Bitfinex’ed 🔥 & Xeni</div> <div style="text-align: center; font-size: 14px;">@bitfinexed-coinerstakingls-xeni</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Crypto Bros Taking Ls & Bitfinex’ed 🔥 & Xeni. | Data | Crypto Bros Taking Ls | Bitfinex’ed 🔥 | Xeni | | --- | --- | --- | --- | | Tweets downloaded | 566 | 3245 | 3229 | | Retweets | 94 | 650 | 1834 | | Short tweets | 222 | 613 | 402 | | Tweets kept | 250 | 1982 | 993 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3eviqxf1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bitfinexed-coinerstakingls-xeni's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kim6sku) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kim6sku/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bitfinexed-coinerstakingls-xeni') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
rodrigogelacio/autonlp-department-classification-534915130
rodrigogelacio
2022-01-28T02:06:52Z
3
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "unk", "dataset:rodrigogelacio/autonlp-data-department-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - rodrigogelacio/autonlp-data-department-classification co2_eq_emissions: 1.4862856774320061 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 534915130 - CO2 Emissions (in grams): 1.4862856774320061 ## Validation Metrics - Loss: 0.37066277861595154 - Accuracy: 0.9204545454545454 - Macro F1: 0.9103715740678612 - Micro F1: 0.9204545454545455 - Weighted F1: 0.9196871607509906 - Macro Precision: 0.9207759152612094 - Micro Precision: 0.9204545454545454 - Weighted Precision: 0.922177301864802 - Macro Recall: 0.9055002187355129 - Micro Recall: 0.9204545454545454 - Weighted Recall: 0.9204545454545454 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rodrigogelacio/autonlp-department-classification-534915130 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
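The Python snippet above stops at the raw model outputs. To turn them into a predicted department, one could continue along these lines (a sketch that reuses `model` and `outputs` from the block above; the label names come from the model's own config, which is not shown here):

```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)[0]
predicted_id = int(torch.argmax(probs))
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```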
kika2000/wav2vec2-large-xls-r-300m-kika4_my-colab
kika2000
2022-01-28T01:03:34Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-kika4_my-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-kika4_my-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 70 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/hostagekiller-suicidepussy
huggingtweets
2022-01-27T20:24:27Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hostagekiller-suicidepussy/1643315062963/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322637724470358022/ccOsLDPE_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">checking my mcdouble for nanochips & HUSSY2K.</div> <div style="text-align: center; font-size: 14px;">@hostagekiller-suicidepussy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from checking my mcdouble for nanochips & HUSSY2K.. | Data | checking my mcdouble for nanochips | HUSSY2K. | | --- | --- | --- | | Tweets downloaded | 3226 | 3193 | | Retweets | 107 | 847 | | Short tweets | 1124 | 389 | | Tweets kept | 1995 | 1957 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k8e9itd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hostagekiller-suicidepussy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dor6qtfm) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dor6qtfm/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hostagekiller-suicidepussy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Jacobo/aristoBERTo
Jacobo
2022-01-27T19:02:16Z
10
5
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "grc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: language: - grc model-index: - name: aristoBERTo results: [] widget: - text: "Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα." - text: "ὁ Κριτίας ἀπέβλεψε [MASK] τὴν θύραν." - text: "πρῶτοι δὲ καὶ οὐνόματα ἱρὰ ἔγνωσαν καὶ [MASK] ἱροὺς ἔλεξαν." --- # aristoBERTo aristoBERTo is a transformer model for ancient Greek, a low resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scrapped from the web and post-processed. Duplicate texts and editorial punctuation were removed. Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA. aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego. ## Intended uses This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the [Diogenet project](https://diogenet.ucsd.edu). As a fill-mask model, AristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts. It achieves the following results on the evaluation set: - Loss: 1.6323 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 1.377 | 20.0 | 3414220 | 1.6314 | ### Framework versions - Transformers 4.14.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
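For the fill-mask use case mentioned above (for example, restoring damaged passages), a minimal sketch using one of the widget sentences:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jacobo/aristoBERTo")
for pred in fill("Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα."):
    print(pred["token_str"], round(pred["score"], 3))
```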
mbateman/marian-finetuned-kde4-en-to-fr
mbateman
2022-01-27T17:33:02Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
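The card does not show how to run the model; a minimal sketch is below (the example sentence is arbitrary):

```python
from transformers import pipeline

translator = pipeline("translation", model="mbateman/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```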
Adinda/Adinda
Adinda
2022-01-27T17:02:42Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: artistic-2.0 ---
wolfrage89/company_segment_ner
wolfrage89
2022-01-27T16:56:23Z
23
2
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
## Roberta based NER

This model takes in a news article and labels 3 entities [ORGS, SEGNUM, NUM]. This model is trained on Reuters news articles.

## Try out on huggingface Spaces
https://huggingface.co/spaces/wolfrage89/company_segments_ner

## Colab sample notebook
https://colab.research.google.com/drive/165utMQzYVAX7-aQjWjpmPHwHpdKTaHBa?usp=sharing

## How to use

```python
from transformers import pipeline

# Minimum code
sentence = """Exxon Mobil Corporation is engaged in energy business. The Company is engaged in the exploration, production, trade, transportation and sale of crude oil and natural gas, and the manufacture, transportation and sale of crude oil, natural gas, petroleum products, petrochemicals and a range of specialty products. The Company's segments include Upstream, Downstream, Chemical, and Corporate and Financing. The Upstream segment operates to explore for and produce crude oil and natural gas. The Downstream manufactures, trades and sells petroleum products. The refining and supply operations consists of a global network of manufacturing plants, transportation systems, and distribution centers that provide a range of fuels, lubricants and other products and feedstocks to its customers around the world. The Chemical segment manufactures and sells petrochemicals. The Chemical business supplies olefins, polyolefins, aromatics, and a variety of other petrochemicals."""

model = pipeline('ner', "wolfrage89/company_segment_ner")
model_output = model(sentence)
print(model_output)
# [{'entity': 'B-ORG', 'score': 0.99996805, 'index': 1, 'word': 'Ex', 'start': 0, 'end': 2}, {'entity': 'I-ORG', 'score': 0.99971646, 'index': 2, 'word': 'xon', 'start': 2, 'end': 5}, ....]

# Sample helper function if you want to use
def ner_prediction(model, sentence):
    entity_map = {
        "B-ORG": "ORG",
        "B-SEG": "SEG",
        "B-SEGNUM": "SEGNUM"
    }
    results = []
    model_output = model(sentence)
    accumulate = ""
    current_class = None
    start = 0
    end = 0
    for item in model_output:
        if item['entity'].startswith("B"):
            if len(accumulate) > 0:
                results.append((current_class, accumulate, start, end))
            accumulate = item['word'].lstrip("Ġ")
            current_class = entity_map[item['entity']]
            start = item['start']
            end = item['end']
        else:
            if item['word'].startswith("Ġ"):
                accumulate += " " + item['word'].lstrip("Ġ")
            else:
                accumulate += item['word']
            end = item['end']

    # clear last cache
    if len(accumulate) > 0:
        results.append((current_class, accumulate, start, end))

    return results
```
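As an alternative to the manual grouping helper above, the token-classification pipeline can merge sub-word pieces itself via `aggregation_strategy` (available in recent transformers versions). A sketch that reuses the `sentence` string defined above:

```python
from transformers import pipeline

ner = pipeline("ner", model="wolfrage89/company_segment_ner", aggregation_strategy="simple")
for ent in ner(sentence):
    print(ent["entity_group"], ent["word"], ent["start"], ent["end"])
```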
huggingtweets/northernlion
huggingtweets
2022-01-27T16:46:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/northernlion/1643301960230/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2236512789/ChannelIcon_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ryan Letourneau</div> <div style="text-align: center; font-size: 14px;">@northernlion</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ryan Letourneau. | Data | Ryan Letourneau | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 85 | | Short tweets | 480 | | Tweets kept | 2684 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xmzb7x7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @northernlion's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/northernlion') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bayartsogt/tts_transformer-mn-mbspeech
bayartsogt
2022-01-27T16:35:40Z
18
1
fairseq
[ "fairseq", "audio", "text-to-speech", "mn", "dataset:mbspeech", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: mn datasets: - mbspeech widget: - text: "миний нэрийг баярцогт гэдэг" example_title: "Say my name!" - text: "би монгол улсын нийслэл, улаанбаатар хотод амьдардаг" example_title: "Where I am from?" - text: "энэхүү өгөгдлийг нээлттэй болгосон, болор соофтынхонд баярлалаа" example_title: "Thank you!" - text: "энэхүү ажлын ихэнх хэсгийг, төгөлдөр ах хийсэн болно" example_title: "Shout out to original creater" --- # tts_transformer-mn-mbspeech [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Mongolian - Single-speaker male voice - Trained on [MBSpeech](https://github.com/tugstugi/mongolian-nlp/blob/master/datasets/MBSpeech-1.0-csv.zip)
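The card lists no usage snippet. For fairseq S^2 text-to-speech checkpoints, loading through the fairseq hub interface generally looks like the sketch below; the exact vocoder configuration of this particular checkpoint is not documented here, so no overrides are applied, and the example text is taken from the widget above:

```python
import IPython.display as ipd
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

# Load the checkpoint and its task configuration from the Hub
models, cfg, task = load_model_ensemble_and_task_from_hf_hub("bayartsogt/tts_transformer-mn-mbspeech")
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

text = "миний нэрийг баярцогт гэдэг"

sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)

ipd.Audio(wav, rate=rate)
```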
huggingtweets/dp_crazy_gamer
huggingtweets
2022-01-27T15:58:51Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dp_crazy_gamer/1643299090939/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1435032258868482049/AySjv2ON_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Donovan</div> <div style="text-align: center; font-size: 14px;">@dp_crazy_gamer</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Donovan. | Data | Donovan | | --- | --- | | Tweets downloaded | 3214 | | Retweets | 763 | | Short tweets | 824 | | Tweets kept | 1627 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pvd0ays/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dp_crazy_gamer's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14bwewth) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14bwewth/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dp_crazy_gamer') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Tsubasaz/clinical-pubmed-bert-base-128
Tsubasaz
2022-01-27T15:44:06Z
16
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "en", "dataset:MIMIC-III", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- en
license: mit
datasets:
- MIMIC-III
widget:
- text: "Due to shortness of breath, the patient is diagnosed with [MASK], and other respiratory problems."
  example_title: "Example 1"
---

# ClinicalPubMedBERT

## Description

A BERT model pre-trained on PubMed abstracts and continually pre-trained on clinical notes ([MIMIC-III](https://mimic.physionet.org/)). We combine two domains that have little overlap with general-knowledge text corpora: EHRs and biomedical papers. We hope this model yields better results on clinical downstream tasks such as readmission prediction.

This model is trained on 500,000 clinical notes randomly sampled from the MIMIC datasets, for 120k training steps. We also used whole-word masking to enhance the coherence of the language model. All notes are chunked to a length of 128 tokens.

Pre-trained model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
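A minimal fill-mask sketch using the widget sentence above:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Tsubasaz/clinical-pubmed-bert-base-128")
preds = fill("Due to shortness of breath, the patient is diagnosed with [MASK], and other respiratory problems.")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```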
tomascufaro/wav2vec2-large-xls-r-300m-spanish-custom
tomascufaro
2022-01-27T15:27:27Z
38
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-custom This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4426 - Wer: 0.2117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.2307 | 0.4 | 400 | 1.4431 | 0.9299 | | 0.7066 | 0.79 | 800 | 0.5928 | 0.4836 | | 0.4397 | 1.19 | 1200 | 0.4341 | 0.3730 | | 0.3889 | 1.58 | 1600 | 0.4063 | 0.3499 | | 0.3607 | 1.98 | 2000 | 0.3834 | 0.3235 | | 0.2866 | 2.37 | 2400 | 0.3885 | 0.3163 | | 0.2833 | 2.77 | 2800 | 0.3765 | 0.3140 | | 0.2692 | 3.17 | 3200 | 0.3849 | 0.3132 | | 0.2435 | 3.56 | 3600 | 0.3779 | 0.2984 | | 0.2404 | 3.96 | 4000 | 0.3756 | 0.2934 | | 0.2153 | 4.35 | 4400 | 0.3770 | 0.3075 | | 0.2087 | 4.75 | 4800 | 0.3819 | 0.3022 | | 0.1999 | 5.14 | 5200 | 0.3756 | 0.2959 | | 0.1838 | 5.54 | 5600 | 0.3827 | 0.2858 | | 0.1892 | 5.93 | 6000 | 0.3714 | 0.2999 | | 0.1655 | 6.33 | 6400 | 0.3814 | 0.2812 | | 0.1649 | 6.73 | 6800 | 0.3685 | 0.2727 | | 0.1668 | 7.12 | 7200 | 0.3832 | 0.2825 | | 0.1487 | 7.52 | 7600 | 0.3848 | 0.2788 | | 0.152 | 7.91 | 8000 | 0.3810 | 0.2787 | | 0.143 | 8.31 | 8400 | 0.3885 | 0.2856 | | 0.1353 | 8.7 | 8800 | 0.4103 | 0.2827 | | 0.1386 | 9.1 | 9200 | 0.4142 | 0.2874 | | 0.1222 | 9.5 | 9600 | 0.3983 | 0.2830 | | 0.1288 | 9.89 | 10000 | 0.4179 | 0.2781 | | 0.1199 | 10.29 | 10400 | 0.4035 | 0.2789 | | 0.1196 | 10.68 | 10800 | 0.4043 | 0.2746 | | 0.1169 | 11.08 | 11200 | 0.4105 | 0.2753 | | 0.1076 | 11.47 | 11600 | 0.4298 | 0.2686 | | 0.1124 | 11.87 | 12000 | 0.4025 | 0.2704 | | 0.1043 | 12.26 | 12400 | 0.4209 | 0.2659 | | 0.0976 | 12.66 | 12800 | 0.4070 | 0.2672 | | 0.1012 | 13.06 | 13200 | 0.4161 | 0.2720 | | 0.0872 | 13.45 | 13600 | 0.4245 | 0.2697 | | 0.0933 | 13.85 | 14000 | 0.4295 | 0.2684 | | 0.0881 | 14.24 | 14400 | 0.4011 | 0.2650 | | 0.0848 | 14.64 | 14800 | 0.3991 | 0.2675 | | 0.0852 | 15.03 | 15200 | 0.4166 | 0.2617 | | 0.0825 | 15.43 | 15600 | 0.4188 | 0.2639 | | 0.081 | 15.83 | 16000 | 0.4181 | 0.2547 | | 0.0753 | 16.22 | 16400 | 0.4103 | 0.2560 | | 0.0747 | 16.62 | 16800 | 0.4017 | 0.2498 | | 0.0761 | 17.01 | 17200 | 0.4159 | 0.2563 | | 0.0711 | 17.41 | 17600 | 0.4112 | 0.2603 | | 0.0698 | 17.8 | 18000 | 0.4335 | 0.2529 | | 0.073 | 18.2 | 18400 | 0.4120 | 0.2512 | | 0.0665 | 18.6 | 18800 | 0.4335 | 0.2496 | | 0.0657 | 18.99 | 19200 | 0.4143 | 0.2468 | | 0.0617 | 19.39 | 19600 | 0.4339 | 0.2435 | | 0.06 | 19.78 | 20000 | 0.4179 | 0.2438 | | 0.0613 | 20.18 | 20400 
| 0.4251 | 0.2393 | | 0.0583 | 20.57 | 20800 | 0.4347 | 0.2422 | | 0.0562 | 20.97 | 21200 | 0.4246 | 0.2377 | | 0.053 | 21.36 | 21600 | 0.4198 | 0.2338 | | 0.0525 | 21.76 | 22000 | 0.4511 | 0.2427 | | 0.0499 | 22.16 | 22400 | 0.4482 | 0.2353 | | 0.0475 | 22.55 | 22800 | 0.4449 | 0.2329 | | 0.0465 | 22.95 | 23200 | 0.4364 | 0.2320 | | 0.0443 | 23.34 | 23600 | 0.4481 | 0.2304 | | 0.0458 | 23.74 | 24000 | 0.4442 | 0.2267 | | 0.0453 | 24.13 | 24400 | 0.4402 | 0.2261 | | 0.0426 | 24.53 | 24800 | 0.4262 | 0.2232 | | 0.0431 | 24.93 | 25200 | 0.4251 | 0.2210 | | 0.0389 | 25.32 | 25600 | 0.4455 | 0.2232 | | 0.039 | 25.72 | 26000 | 0.4372 | 0.2236 | | 0.0378 | 26.11 | 26400 | 0.4236 | 0.2212 | | 0.0348 | 26.51 | 26800 | 0.4359 | 0.2204 | | 0.0361 | 26.9 | 27200 | 0.4248 | 0.2192 | | 0.0356 | 27.3 | 27600 | 0.4397 | 0.2184 | | 0.0325 | 27.7 | 28000 | 0.4367 | 0.2181 | | 0.0313 | 28.09 | 28400 | 0.4477 | 0.2136 | | 0.0306 | 28.49 | 28800 | 0.4533 | 0.2135 | | 0.0314 | 28.88 | 29200 | 0.4410 | 0.2136 | | 0.0307 | 29.28 | 29600 | 0.4457 | 0.2113 | | 0.0309 | 29.67 | 30000 | 0.4426 | 0.2117 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
mrm8488/ppo-CartPole-v1
mrm8488
2022-01-27T15:13:48Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 --- # PPO CartPole v1 🤖⚖️ This is a pre-trained model of a PPO agent playing CartPole-v1 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library. <video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/ppo-CartPole-v1/resolve/main/output.mp4"></video> ### Usage (with Stable-baselines3) Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed: ``` pip install stable-baselines3 pip install huggingface_sb3 ``` Then, you can use the model like this: ```python import gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.evaluation import evaluate_policy # Retrieve the model from the hub ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name}) ## filename = name of the model zip file from the repository checkpoint = load_from_hub(repo_id="mrm8488/ppo-CartPole-v1", filename="cartpole-v1.zip") model = PPO.load(checkpoint) # Evaluate the agent eval_env = gym.make('CartPole-v1') mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") # Watch the agent play obs = eval_env.reset() for i in range(1000): action, _state = model.predict(obs) obs, reward, done, info = eval_env.step(action) eval_env.render() if done: obs = eval_env.reset() eval_env.close() ``` ### Evaluation Results Mean_reward: mean_reward=500.00 +/- 0.0
jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom
jhonparra18
2022-01-27T14:58:01Z
15
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer - robust-speech-event datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-custom This model was trained from scratch on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2245 - eval_wer: 0.2082 - eval_runtime: 801.6784 - eval_samples_per_second: 18.822 - eval_steps_per_second: 2.354 - epoch: 0.76 - step: 8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
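### Example usage (sketch)

The card does not include an inference example, so the following is a minimal transcription sketch. It assumes the checkpoint loads with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes, that `librosa` is installed, and that a 16 kHz mono Spanish clip is available at the placeholder path `audio.wav`.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load any 16 kHz mono Spanish audio clip (the path is a placeholder)
speech, sample_rate = librosa.load("audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```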
ncoop57/codeparrot-neo-125M-py
ncoop57
2022-01-27T14:44:13Z
14
1
transformers
[ "transformers", "pytorch", "jax", "rust", "gpt_neo", "text-generation", "text generation", "causal-lm", "en", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text generation - pytorch - causal-lm license: apache-2.0 datasets: - The Pile --- # GPT-Neo 125M ## Model Description GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Eval results TBD ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
Iskaj/w2v-xlsr-dutch-lm
Iskaj
2022-01-27T13:41:13Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
This model was cloned from https://huggingface.co/facebook/wav2vec2-large-xlsr-53-dutch. It is currently bugged: the model's output logits have size 48 while the vocabulary has size 50, so the two do not match.
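As a quick diagnostic for the mismatch mentioned above, the sketch below only assumes the repository ships a loadable config and tokenizer, and compares the model's output dimension with the tokenizer vocabulary size:

```python
from transformers import AutoConfig, AutoTokenizer

repo = "Iskaj/w2v-xlsr-dutch-lm"
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# For a wav2vec2 CTC model these two numbers must match for decoding to work
print("config.vocab_size    :", config.vocab_size)
print("tokenizer vocab size :", len(tokenizer))
```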
oskrmiguel/mt5-simplification-spanish
oskrmiguel
2022-01-27T13:32:24Z
22
6
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "simplification", "spanish", "es", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - es thumbnail: tags: - simplification - mt5 - spanish license: cc-by-nc-sa-4.0 metrics: - sari widget: - text: "La Simplificación Textual es el proceso de transformación de un texto a otro texto equivalente más comprensible para un determinado tipo de grupo o población." - text: "Los textos simplificados son apropiados para muchos grupos de lectores, como, por ejemplo: estudiantes de idiomas, personas con discapacidades intelectuales y otras personas con necesidades especiales de lectura y comprensión. " --- # mt5-simplification-spanish ## Model description This is a fine-tuned mt5-small model for generating simple text from complex text. The model was created together with the IXA research group of the University of the Basque Country. It was trained and tested on the [Simplext corpus](https://dl.acm.org/doi/10.1145/2738046) and evaluated with the Sari, Bleu and Fklg metrics. ## Dataset Simplext ## Model Evaluation Bleu: 13,186 Sari: 42,203 Fklg: 10,284 ## Authors Oscar M. Cumbicus-Pineda, Itziar Gonzalez-Dios, Aitor Soroa, November 2021 ## Code https://github.com/oskrmiguel/mt5-simplification
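### Example usage (sketch)

A minimal inference sketch with 🤗 Transformers; it assumes the model takes the complex sentence directly as input with no task prefix (the widget examples above use none), so adjust if your results suggest otherwise.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "oskrmiguel/mt5-simplification-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = (
    "La Simplificación Textual es el proceso de transformación de un texto a otro "
    "texto equivalente más comprensible para un determinado tipo de grupo o población."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=128, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```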
anirudh21/bert-base-uncased-finetuned-qnli
anirudh21
2022-01-27T08:21:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-qnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.791689547867472 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6268 - Accuracy: 0.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.5339 | 0.7620 | | No log | 2.0 | 126 | 0.4728 | 0.7866 | | No log | 3.0 | 189 | 0.5386 | 0.7847 | | No log | 4.0 | 252 | 0.6096 | 0.7904 | | No log | 5.0 | 315 | 0.6268 | 0.7917 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
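### Example usage (sketch)

A minimal inference sketch for this checkpoint. QNLI is a (question, sentence) pair task, and the label names may be the generic `LABEL_0`/`LABEL_1` if `id2label` was not customized during fine-tuning; the example pair below is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/bert-base-uncased-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."

# Encode the pair; BERT sees "[CLS] question [SEP] sentence [SEP]"
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred], logits.softmax(dim=-1).tolist())
```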
anas-awadalla/bert-small-pretrained-finetuned-squad
anas-awadalla
2022-01-27T06:09:41Z
30
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert-small-pretrained-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-pretrained-finetuned-squad This model is a fine-tuned version of [anas-awadalla/bert-small-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-small-pretrained-on-squad) on the squad dataset. - "exact_match": 72.20435193945127 - "f1": 81.31832229156294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
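### Example usage (sketch)

A minimal extractive question-answering sketch using the 🤗 `pipeline` API; the question/context pair is only an illustrative example.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-small-pretrained-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context=(
        "The bert-small checkpoint was further fine-tuned on the SQuAD dataset "
        "for extractive question answering."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```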
anas-awadalla/bert-medium-pretrained-finetuned-squad
anas-awadalla
2022-01-27T06:07:11Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert_medium_pretrain_squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_medium_pretrain_squad This model is a fine-tuned version of [anas-awadalla/bert-medium-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-medium-pretrained-on-squad) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.0973 - "exact_match": 77.95648060548723 - "f1": 85.85300366384631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
sankhajay/mt5-base-sinaha-qa
sankhajay
2022-01-27T05:35:18Z
6
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: si tags: - question-answering - Sinhala widget: - context: "ශ්‍රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි." text: "ශ්‍රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?" --- # mt5-base-sinhala-qa This is an mt5-based Question Answering model for the Sinhalese language. It was trained on a translated SQuAD dataset of about 8k questions; the translation was done with the Google Translate API. Training was done in a Google Colab TPU environment with parallel training techniques, on around 9k data points consisting of context, question and answer trios for the Sinhala language. Evaluation was done with the standard SQuAD evaluation script on around 1k data points, which gave the following results for the best parameter setting. The evaluation metrics used are exact match (EM) and F1 score. Evaluation - {'EM': 39.413680781758956, 'f1': 66.16331104953571}
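A rough usage sketch follows. The exact input format used during fine-tuning is not documented in this card, so the `question: ... context: ...` template below is only an assumption and may need to be adapted; the example strings are the widget inputs above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sankhajay/mt5-base-sinaha-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "ශ්‍රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."
question = "ශ්‍රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"

# Assumed prompt template -- adjust if the training format differs
prompt = f"question: {question} context: {context}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```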
anirudh21/albert-large-v2-finetuned-wnli
anirudh21
2022-01-27T05:02:43Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-large-v2-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5352112676056338 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-wnli This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 17 | 0.7292 | 0.4366 | | No log | 2.0 | 34 | 0.6919 | 0.5352 | | No log | 3.0 | 51 | 0.7084 | 0.4648 | | No log | 4.0 | 68 | 0.7152 | 0.5352 | | No log | 5.0 | 85 | 0.7343 | 0.5211 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
anas-awadalla/bert-small-pretrained-on-squad
anas-awadalla
2022-01-27T03:57:07Z
9
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "dataset:squad", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert_small_pretrain_squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_small_pretrain_squad This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.1410 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
asahi417/tner-roberta-large-multiconer-en-adapter
asahi417
2022-01-26T16:13:58Z
10
0
adapter-transformers
[ "adapter-transformers", "adapterhub:named-entity-recognition/multiconer", "roberta", "dataset:multiconer", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - adapter-transformers - adapterhub:named-entity-recognition/multiconer - roberta datasets: - multiconer --- # Adapter `asahi417/tner-roberta-large-multiconer-en-adapter` for roberta-large An [adapter](https://adapterhub.ml) for the `roberta-large` model that was trained on the [named-entity-recognition/multiconer](https://adapterhub.ml/explore/named-entity-recognition/multiconer/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-large") adapter_name = model.load_adapter("asahi417/tner-roberta-large-multiconer-en-adapter", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
asahi417/tner-xlm-roberta-large-multiconer-mix-adapter
asahi417
2022-01-26T16:00:50Z
3
0
adapter-transformers
[ "adapter-transformers", "adapterhub:named-entity-recognition/multiconer", "xlm-roberta", "dataset:multiconer", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - adapter-transformers - adapterhub:named-entity-recognition/multiconer - xlm-roberta datasets: - multiconer --- # Adapter `asahi417/tner-xlm-roberta-large-multiconer-mix-adapter` for xlm-roberta-large An [adapter](https://adapterhub.ml) for the `xlm-roberta-large` model that was trained on the [named-entity-recognition/multiconer](https://adapterhub.ml/explore/named-entity-recognition/multiconer/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("xlm-roberta-large") adapter_name = model.load_adapter("asahi417/tner-xlm-roberta-large-multiconer-mix-adapter", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
katrin-kc/dummy2
katrin-kc
2022-01-26T12:01:45Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Hello World! This is a dummy repository. Can be deleted.
bitmorse/autonlp-ks-530615016
bitmorse
2022-01-26T11:40:24Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:bitmorse/autonlp-data-ks", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - bitmorse/autonlp-data-ks co2_eq_emissions: 2.2247356264808964 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 530615016 - CO2 Emissions (in grams): 2.2247356264808964 ## Validation Metrics - Loss: 0.7859578132629395 - Accuracy: 0.676854818831649 - Macro F1: 0.3297126297995653 - Micro F1: 0.676854818831649 - Weighted F1: 0.6429522696884535 - Macro Precision: 0.33152557743856437 - Micro Precision: 0.676854818831649 - Weighted Precision: 0.6276125515413322 - Macro Recall: 0.33784302289888885 - Micro Recall: 0.676854818831649 - Weighted Recall: 0.676854818831649 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
SetFit/MiniLM-L12-H384-uncased__sst2__all-train
SetFit
2022-01-26T11:27:47Z
12
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: MiniLM-L12-H384-uncased__sst2__all-train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLM-L12-H384-uncased__sst2__all-train This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2632 - Accuracy: 0.9055 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4183 | 1.0 | 433 | 0.3456 | 0.8720 | | 0.2714 | 2.0 | 866 | 0.2632 | 0.9055 | | 0.2016 | 3.0 | 1299 | 0.3357 | 0.8990 | | 0.1501 | 4.0 | 1732 | 0.4474 | 0.8863 | | 0.1119 | 5.0 | 2165 | 0.3998 | 0.8979 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
Iskaj/hf-challenge-test
Iskaj
2022-01-26T11:21:07Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8789 - Wer: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.1.dev0 - Tokenizers 0.11.0
jcmc/wav2vec2-large-xlsr-53-ir
jcmc
2022-01-26T10:35:17Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ga-IE license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 1.0835 - Wer: 0.7490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1483 | 15.62 | 500 | 3.0498 | 1.0 | | 2.8449 | 31.25 | 1000 | 2.7790 | 0.9493 | | 1.8683 | 46.86 | 1500 | 1.2339 | 0.8161 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
SophieTr/fine-tune-Pegasus-large
SophieTr
2022-01-26T07:56:10Z
6
1
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: fine-tune-Pegasus-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tune-Pegasus-large This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 11.0526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.35e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
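### Example usage (sketch)

A minimal summarization sketch, assuming the checkpoint is used like any other Pegasus seq2seq model; given the high reported evaluation loss, output quality is not guaranteed, and the input text is only an example.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SophieTr/fine-tune-Pegasus-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = (
    "Reinforcement learning from human feedback fine-tunes language models with a "
    "reward model trained on human preference comparisons, and has been used to "
    "improve summarization quality on long documents."
)

inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```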
danielbubiola/bangla_asr
danielbubiola
2022-01-26T07:42:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: bangla_asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla_asr This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset. It achieves the following results on the evaluation set: - Loss: 157.8652 - Wer: 0.4507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2601.5363 | 7.46 | 500 | 259.6630 | 0.6863 | | 417.7386 | 14.93 | 1000 | 156.6117 | 0.5275 | | 262.9455 | 22.39 | 1500 | 155.0886 | 0.5006 | | 178.7715 | 29.85 | 2000 | 155.1077 | 0.4840 | | 132.448 | 37.31 | 2500 | 163.8623 | 0.4770 | | 116.3943 | 44.78 | 3000 | 161.5531 | 0.4609 | | 87.1653 | 52.24 | 3500 | 165.6857 | 0.4597 | | 80.5606 | 59.7 | 4000 | 157.8652 | 0.4507 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
kxiaoqiangrexian/bert_test
kxiaoqiangrexian
2022-01-26T06:52:37Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
Ajay191191/autonlp-Test-530014983
Ajay191191
2022-01-25T22:28:49Z
7
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Ajay191191/autonlp-data-Test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Ajay191191/autonlp-data-Test co2_eq_emissions: 55.10196329868386 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 530014983 - CO2 Emissions (in grams): 55.10196329868386 ## Validation Metrics - Loss: 0.23171618580818176 - Accuracy: 0.9298837645294338 - Precision: 0.9314414866901055 - Recall: 0.9279459594696022 - AUC: 0.979447403984557 - F1: 0.9296904373981703 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
anirudh21/albert-base-v2-finetuned-rte
anirudh21
2022-01-25T22:23:12Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-base-v2-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.7581227436823105 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.2496 - Accuracy: 0.7581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 249 | 0.5914 | 0.6751 | | No log | 2.0 | 498 | 0.5843 | 0.7184 | | 0.5873 | 3.0 | 747 | 0.6925 | 0.7220 | | 0.5873 | 4.0 | 996 | 1.1613 | 0.7545 | | 0.2149 | 5.0 | 1245 | 1.2496 | 0.7581 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR
ghadeermobasher
2022-01-25T21:02:42Z
2
0
adapter-transformers
[ "adapter-transformers", "pytorch", "xlm-roberta", "adapterhub:other", "dataset:ghadeermobasher/BC5CDR-Chemical-Disease", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - adapter-transformers - adapterhub:other - xlm-roberta datasets: - ghadeermobasher/BC5CDR-Chemical-Disease --- # Adapter `ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR` for ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR An [adapter](https://adapterhub.ml) for the `ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR` model that was trained on the [other](https://adapterhub.ml/explore/other/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR") adapter_name = model.load_adapter("ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-UMLS-2020AB-all-lang-from-XLMR", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
jiobiala24/wav2vec2-base-checkpoint-9
jiobiala24
2022-01-25T19:52:35Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-9 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9203 - Wer: 0.3258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 | | 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 | | 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 | | 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 | | 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 | | 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 | | 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 | | 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 | | 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 | | 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 | | 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 | | 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 | | 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 | | 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 | | 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 | | 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 | | 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 | | 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
anirudh21/electra-base-discriminator-finetuned-rte
anirudh21
2022-01-25T15:43:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: electra-base-discriminator-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.8231046931407943 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-discriminator-finetuned-rte This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4793 - Accuracy: 0.8231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.6076 | 0.6570 | | No log | 2.0 | 312 | 0.4824 | 0.7762 | | No log | 3.0 | 468 | 0.4793 | 0.8231 | | 0.4411 | 4.0 | 624 | 0.7056 | 0.7906 | | 0.4411 | 5.0 | 780 | 0.6849 | 0.8159 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
Iacopo/Shakespear-GPT2
Iacopo
2022-01-25T13:35:35Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of Shakespeare's plays. ## Model description The model is the original gpt-2 model fine-tuned on a custom dataset. ## Intended uses & limitations The model can be used to generate Shakespeare-like text. Note that, because the training data comes from plays, the typographical structure of a play may be reproduced in the output. ## Training and evaluation data Trained on a corpus of Shakespeare's plays. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.11.0
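### Example usage (sketch)

A minimal generation sketch with the `text-generation` pipeline; the prompt is only an example and the sampling parameters can be tuned freely.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Iacopo/Shakespear-GPT2")

prompt = "ROMEO: But soft, what light"
outputs = generator(
    prompt,
    max_length=80,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```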
awsaf49/deep-chimpact
awsaf49
2022-01-25T12:59:16Z
9
1
tf-keras
[ "tf-keras", "region:us" ]
null
2022-03-02T23:29:05Z
# [Deep Chimpact](https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/390/) > Depth Estimation for Wildlife Conservation (1st place solution) <div align=center> <img src="https://user-images.githubusercontent.com/36858976/138281204-c3cbcb77-11ca-448b-a693-cb3cfa3c5181.png" width=800> ## Overview Healthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size. However, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than **10 minutes** on average to label distance for every **1 minute** of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to **monitor wildlife populations**. > Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time. Along the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!
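Since the repository is tagged as a `tf-keras` model, a loading sketch along the following lines may work. This is only an assumption about how the weights were saved (the input pipeline, preprocessing and expected resolution are not described in this card), and it requires `tensorflow` and `huggingface_hub` to be installed.

```python
from huggingface_hub import from_pretrained_keras

# Assumes the repo stores a Keras/SavedModel checkpoint compatible with from_pretrained_keras
model = from_pretrained_keras("awsaf49/deep-chimpact")
model.summary()
```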
dhanesh123in/layoutlmv2-finetuned-funsd-test
dhanesh123in
2022-01-25T12:33:29Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-funsd-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-funsd-test This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.11.0
SamMorgan/yolo_v4_tflite
SamMorgan
2022-01-25T10:15:51Z
0
4
tf-keras
[ "tf-keras", "tflite", "object detection", "computer vision", "darknet", "yolo", "object-detection", "en", "dataset:coco", "dataset:imagenette", "arxiv:2004.10934", "license:mit", "region:us" ]
object-detection
2022-03-02T23:29:04Z
--- language: en tags: - object detection - computer vision - darknet - yolo datasets: - coco - imagenette license: mit thumbnail: https://github.com/hunglc007/tensorflow-yolov4-tflite pipeline_tag: object-detection --- # YOLOv4 YOLO, short for "You Only Look Once", is a real-time object detection system, introduced in [this paper](https://arxiv.org/abs/2004.10934), that recognizes various objects in a single image. It identifies objects more rapidly and more precisely than other recognition systems. The work is credited to three authors: Alexey Bochkovskiy, the Russian developer who built the YOLO Windows version, Chien-Yao Wang, and Hong-Yuan Mark Liao, and the entire code is available on [Github](https://github.com/AlexeyAB/darknet). This YOLOv4 library, inspired by previous YOLOv3 implementations ([Yolov3 tensorflow](https://github.com/YunYang1994/tensorflow-yolov3) and [Yolov3 tf2](https://github.com/zzh8829/yolov3-tf2)), uses Tensorflow 2.0 and is available on this [Github](https://github.com/hunglc007/tensorflow-yolov4-tflite). ### Limitations and biases Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from the fact that the training data used to create the system is geographically constrained and/or fails to reflect cultural differences. The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology. ### How to use YOLOv4tflite You can use this model to detect objects in an image of choice. Follow the scripts below to implement it on your own! ```bash # install git lfs git lfs install # if presented with the error "git: 'lfs' is not a git command. See 'git --help'", try running these linux commands: curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash # change directory to base cd .. # install git-lfs sudo apt-get install git-lfs # for message "Git LFS initialized" git lfs install # change directory to yolo_v4_tflite cd ./yolo_v4_tflite # clone this repo into your notebook git clone https://huggingface.co/SamMorgan/yolo_v4_tflite # Run demo tensor flow for an example of how this model works python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg --output ./test.jpg # Try with your own image python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image <insert path to image of choice> --output <insert path to output location of choice> ``` ### Evaluate on COCO 2017 Dataset ```bash # run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset # preprocess coco dataset cd data mkdir dataset cd .. cd scripts python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl python coco_annotation.py --coco_path ./coco cd .. # evaluate yolov4 model python evaluate.py --weights ./data/yolov4.weights cd mAP/extra python remove_space.py cd .. 
python main.py --output results_yolov4_tf ``` #### mAP50 on COCO 2017 Dataset | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 | 55.43 | 52.32 | | | YoloV4 | 61.96 | 57.33 | | ### Benchmark ```bash python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights ``` #### TensorRT performance | YoloV4 416 images/s | FP32 | FP16 | INT8 | |---------------------|----------|----------|----------| | Batch size 1 | 55 | 116 | | | Batch size 8 | 70 | 152 | | #### Tesla P100 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 40.6 | 49.4 | 61.3 | | YoloV4 FPS | 33.4 | 41.7 | 50.0 | #### Tesla K80 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 10.8 | 12.9 | 17.6 | | YoloV4 FPS | 9.6 | 11.7 | 16.0 | #### Tesla T4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 27.6 | 32.3 | 45.1 | | YoloV4 FPS | 24.0 | 30.3 | 40.1 | #### Tesla P4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 20.2 | 24.2 | 31.2 | | YoloV4 FPS | 16.2 | 20.2 | 26.5 | #### Macbook Pro 15 (2.3GHz i7) | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | | | | | YoloV4 FPS | | | | ### Training your own model ```bash # Prepare your dataset # If you want to train from scratch: In config.py set FISRT_STAGE_EPOCHS=0 # Run script: python train.py # Transfer learning: python train.py --weights ./data/yolov4.weights ``` The training performance is not fully reproduced yet, so it is recommended to use Alex's [Darknet](https://github.com/AlexeyAB/darknet) to train on your own data, then convert the .weights to tensorflow or tflite. ### References * YOLOv4: Optimal Speed and Accuracy of Object Detection [YOLOv4](https://arxiv.org/abs/2004.10934). * [darknet](https://github.com/AlexeyAB/darknet)
z-uo/glowtts-male-it
z-uo
2022-01-25T07:14:09Z
4
1
transformers
[ "transformers", "tensorboard", "text-to-speech", "it", "dataset:z-uo/male-LJSpeech-italian", "endpoints_compatible", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - text-to-speech language: - it model-index: - name: glowtts-male-it results: [] datasets: - z-uo/male-LJSpeech-italian --- # Coqui Model for TTS ``` pip install TTS git clone https://huggingface.co/z-uo/glowtts-male-it # predict one server --text "ciao pluto" --model_path "glowtts-male-it/GOOD_best_model_3840.pth.tar" --config_path "glowtts-male-it/config.json" # predict server tts-server --model_path "glowtts-male-it/GOOD_best_model_3840.pth.tar" --config_path "glowtts-male-it/config.json" firefox localhost:5002 ``` More information about training script in [this repo](https://github.com/nicolalandro/train_coqui_tts_ita).
arman0320/bert-base-cased-wikitext2
arman0320
2022-01-25T05:51:08Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8596 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0963 | 1.0 | 2346 | 7.0570 | | 6.9063 | 2.0 | 4692 | 6.8721 | | 6.8585 | 3.0 | 7038 | 6.8931 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
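### Example usage (sketch)

A minimal fill-mask sketch; the sentence is only an example, and `bert-base-cased` checkpoints use the `[MASK]` token.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="arman0320/bert-base-cased-wikitext2")

for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```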
anirudh21/electra-base-discriminator-finetuned-wnli
anirudh21
2022-01-25T04:41:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: electra-base-discriminator-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-discriminator-finetuned-wnli This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6893 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6893 | 0.5634 | | No log | 2.0 | 80 | 0.7042 | 0.4225 | | No log | 3.0 | 120 | 0.7008 | 0.3803 | | No log | 4.0 | 160 | 0.6998 | 0.5634 | | No log | 5.0 | 200 | 0.7016 | 0.5352 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
Suva/uptag-url-model
Suva
2022-01-25T04:32:49Z
6
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:arxiv", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
datasets:
- arxiv
widget:
- text: "summarize: We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems."
license: mit
---

## Usage:

```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production
machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and
handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing
a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine
learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow.
For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing.
In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors
1.7-2.9 times versus production systems.
"""
```

### Using Transformers🤗

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Suva/uptag-url-model"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(
    input_ids=input_ids,
    num_beams=5,
    max_length=100,
    repetition_penalty=2.5,
    length_penalty=1,
    early_stopping=True,
    num_return_sequences=3,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)

# output
# ["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers",
#  "Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems",
#  "Overton: Building, Monitoring, and Improving Production Machine Learning Systems"]
```
kika2000/wav2vec2-large-xls-r-300m-kika_my-colab
kika2000
2022-01-25T04:10:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kika_my-colab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-kika_my-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3300
- Wer: 0.5804

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8067        | 4.82  | 400  | 1.2892          | 0.8886 |
| 0.3048        | 9.64  | 800  | 1.2285          | 0.6797 |
| 0.1413        | 14.46 | 1200 | 1.1970          | 0.6509 |
| 0.1047        | 19.28 | 1600 | 1.3628          | 0.6166 |
| 0.0799        | 24.1  | 2000 | 1.3345          | 0.6014 |
| 0.0638        | 28.92 | 2400 | 1.3300          | 0.5804 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
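The intended-use sections above are still empty; as a rough sketch, the checkpoint can presumably be used through the standard `transformers` ASR pipeline. The audio path below is a placeholder, and the input is assumed to be 16 kHz mono, matching Common Voice preprocessing.

```python
from transformers import pipeline

# Illustrative sketch only: "sample.wav" is a placeholder for a 16 kHz mono recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="kika2000/wav2vec2-large-xls-r-300m-kika_my-colab",
)
print(asr("sample.wav")["text"])
```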
byeongal/gpt-j-6B-float16
byeongal
2022-01-25T03:21:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
---
license: apache-2.0
---
mtglearn/roberta-mtg-cards
mtglearn
2022-01-25T02:57:42Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
---
license: apache-2.0
---
aviator-neural/gpt2-donald_trump
aviator-neural
2022-01-24T22:09:58Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-donald_trump
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-donald_trump

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 391  | 2.8721          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
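Since the intended-use sections are placeholders, here is a minimal text-generation sketch; the prompt and sampling settings are arbitrary illustrations rather than values documented in the card.

```python
from transformers import pipeline, set_seed

# Illustrative sketch; generation settings are arbitrary choices, not taken from the card.
generator = pipeline("text-generation", model="aviator-neural/gpt2-donald_trump")
set_seed(42)
for out in generator("The future of this country", max_length=40, num_return_sequences=2):
    print(out["generated_text"])
```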
anirudh21/albert-base-v2-finetuned-qnli
anirudh21
2022-01-24T19:56:19Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2-finetuned-qnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: qnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9112209408749771
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# albert-base-v2-finetuned-qnli

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.9112

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3116        | 1.0   | 6547  | 0.2818          | 0.8849   |
| 0.2467        | 2.0   | 13094 | 0.2532          | 0.9001   |
| 0.1858        | 3.0   | 19641 | 0.3194          | 0.9112   |
| 0.1449        | 4.0   | 26188 | 0.4338          | 0.9103   |
| 0.0584        | 5.0   | 32735 | 0.5752          | 0.9052   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
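QNLI pairs a question with a candidate answer sentence, so a usage sketch encodes the two together; the example pairs below are invented, and the label names are taken from the checkpoint's own config, which the card does not spell out.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/albert-base-v2-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Each QNLI example is a (question, sentence) pair; batch two invented pairs for illustration.
questions = ["Where is the Eiffel Tower?", "Who wrote Hamlet?"]
sentences = ["The Eiffel Tower is located in Paris.", "The play was first performed in London."]
batch = tokenizer(questions, sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    predictions = model(**batch).logits.argmax(dim=-1)

# Label names depend on the mapping stored in the checkpoint's config.
print([model.config.id2label[i] for i in predictions.tolist()])
```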
anas-awadalla/bert-small-finetuned-squad
anas-awadalla
2022-01-24T19:25:29Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-small-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-small-finetuned-squad

This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3138
- eval_runtime: 46.6577
- eval_samples_per_second: 231.13
- eval_steps_per_second: 14.446
- epoch: 4.0
- step: 22132

{'exact_match': 71.05960264900662, 'f1': 80.8260245470904}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
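The card reports SQuAD exact-match and F1 scores but leaves usage undocumented; a minimal extractive question-answering sketch, with an invented question and context, could look like this:

```python
from transformers import pipeline

# Illustrative sketch only: the question and context are made up for demonstration.
qa = pipeline("question-answering", model="anas-awadalla/bert-small-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The bert-small checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```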
birgermoell/wav2vec2-common_voice-tr-demo
birgermoell
2022-01-24T18:52:26Z
10
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-common_voice-tr-demo

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Wer: 0.3811

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 0.74  | 100  | 3.4444          | 1.0    |
| No log        | 1.47  | 200  | 2.9421          | 1.0    |
| No log        | 2.21  | 300  | 2.2802          | 1.0137 |
| No log        | 2.94  | 400  | 0.9683          | 0.7611 |
| 3.7264        | 3.68  | 500  | 0.7941          | 0.6594 |
| 3.7264        | 4.41  | 600  | 0.6695          | 0.5751 |
| 3.7264        | 5.15  | 700  | 0.6507          | 0.5314 |
| 3.7264        | 5.88  | 800  | 0.5731          | 0.4927 |
| 3.7264        | 6.62  | 900  | 0.5723          | 0.4580 |
| 0.4592        | 7.35  | 1000 | 0.5913          | 0.4479 |
| 0.4592        | 8.09  | 1100 | 0.5562          | 0.4423 |
| 0.4592        | 8.82  | 1200 | 0.5566          | 0.4292 |
| 0.4592        | 9.56  | 1300 | 0.5492          | 0.4303 |
| 0.4592        | 10.29 | 1400 | 0.5665          | 0.4331 |
| 0.2121        | 11.03 | 1500 | 0.5610          | 0.4084 |
| 0.2121        | 11.76 | 1600 | 0.5703          | 0.4014 |
| 0.2121        | 12.5  | 1700 | 0.5669          | 0.3898 |
| 0.2121        | 13.24 | 1800 | 0.5586          | 0.3962 |
| 0.2121        | 13.97 | 1900 | 0.5656          | 0.3897 |
| 0.1326        | 14.71 | 2000 | 0.5565          | 0.3813 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
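As a sketch that drives the processor and CTC head directly instead of going through the pipeline wrapper: the audio path is a placeholder, the recording is resampled to the 16 kHz rate the model was trained on, and `torchaudio` is just one of several possible loaders.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Illustrative sketch only: "swedish_sample.wav" is a placeholder file.
model_id = "birgermoell/wav2vec2-common_voice-tr-demo"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("swedish_sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```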
younes9/AI-DAY-distilbert-base-uncased-finetuned-cola
younes9
2022-01-24T18:13:20Z
17
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: AI-DAY-distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5382139717003264
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# AI-DAY-distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7236
- Matthews Correlation: 0.5382

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5308        | 1.0   | 535  | 0.5065          | 0.4296               |
| 0.3565        | 2.0   | 1070 | 0.5109          | 0.4940               |
| 0.2399        | 3.0   | 1605 | 0.6056          | 0.5094               |
| 0.1775        | 4.0   | 2140 | 0.7236          | 0.5382               |
| 0.1242        | 5.0   | 2675 | 0.8659          | 0.5347               |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
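CoLA is a single-sentence acceptability task, so a usage sketch is short; the two example sentences are invented, and the returned label names depend on the mapping stored in the checkpoint's config rather than anything documented in the card.

```python
from transformers import pipeline

# Illustrative sketch; the label strings come from the checkpoint's config, not from the card.
classifier = pipeline(
    "text-classification",
    model="younes9/AI-DAY-distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was read by the student."))    # expected to score as acceptable
print(classifier("The book was readed by the student."))  # expected to score as unacceptable
```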
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c
deepdoctection
2022-01-24T16:15:44Z
0
0
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Pubtabnet
---

# Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on Pubtabnet for semantic segmentation of tables

The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).

Regarding the dataset, please check [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).

The model has been trained to detect cells in tables. Note that the dataset contains tables only, so a table detection step is required before cells can be detected.

The code has been adapted so that it can be used in a **deep**doctection pipeline.

## How this model can be used

This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.

## How this model was trained

To recreate the training run with the **deep**doctection framework, run:

```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/cell/conf_frcnn_cell.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0",
                    "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config = ["max_datapoints=500000"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService"
                  )
```

## How to fine-tune this model

To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
huggingtweets/yu_kisub21
huggingtweets
2022-01-24T15:24:45Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: http://www.huggingtweets.com/yu_kisub21/1643037750346/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476997379857723392/L6czpqmI_400x400.jpg')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ゆう🇲🇾英語を軸に人生に革新を🔥</div>
<div style="text-align: center; font-size: 14px;">@yu_kisub21</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from ゆう🇲🇾英語を軸に人生に革新を🔥.

| Data | ゆう🇲🇾英語を軸に人生に革新を🔥 |
| --- | --- |
| Tweets downloaded | 1580 |
| Retweets | 366 |
| Short tweets | 1137 |
| Tweets kept | 77 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fswx6qh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yu_kisub21's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/yu_kisub21')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)