Dataset columns (each record below lists these fields in this order):

| Column | Type | Min | Max |
|:--------------|:---------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 138 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2020-02-15 11:33:14 | 2025-05-23 06:28:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (474 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | date | 2022-03-02 23:29:04 | 2025-05-23 06:27:34 |
| card | string (length) | 11 | 1.01M |
owao/secgpt-Q8_0-GGUF
owao
"2025-04-19T18:26:43Z"
0
0
null
[ "gguf", "cybersecurity", "security", "network-security", "llama-cpp", "gguf-my-repo", "zh", "en", "base_model:clouditera/secgpt", "base_model:quantized:clouditera/secgpt", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T18:25:38Z"
--- base_model: clouditera/secgpt language: - zh - en license: apache-2.0 tags: - cybersecurity - security - network-security - llama-cpp - gguf-my-repo --- # owao/secgpt-Q8_0-GGUF This model was converted to GGUF format from [`clouditera/secgpt`](https://huggingface.co/clouditera/secgpt) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/clouditera/secgpt) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo owao/secgpt-Q8_0-GGUF --hf-file secgpt-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo owao/secgpt-Q8_0-GGUF --hf-file secgpt-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo owao/secgpt-Q8_0-GGUF --hf-file secgpt-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo owao/secgpt-Q8_0-GGUF --hf-file secgpt-q8_0.gguf -c 2048 ```
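The same checkpoint can also be loaded programmatically. A minimal sketch using the llama-cpp-python bindings, which the card does not cover (an assumption on top of the CLI instructions; requires `pip install llama-cpp-python huggingface_hub`):

```python
from llama_cpp import Llama

# Download the quant from the Hub and load it (repo/filename as in the card above).
llm = Llama.from_pretrained(
    repo_id="owao/secgpt-Q8_0-GGUF",
    filename="secgpt-q8_0.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```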
breadlicker45/ModernBERT-base-gender
breadlicker45
"2025-04-19T18:26:02Z"
0
0
null
[ "safetensors", "modernbert", "dataset:breadlicker45/gender-bluesky-classification", "region:us" ]
null
"2025-04-19T16:29:44Z"
--- datasets: - breadlicker45/gender-bluesky-classification ---
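The card carries only the training dataset. A minimal inference sketch, assuming the repo contains a standard ModernBERT sequence-classification checkpoint loadable by recent transformers (>= 4.48, where ModernBERT landed) with the label names defined in its config:

```python
from transformers import pipeline

# Hypothetical usage; the label set comes from the repo's config.json.
clf = pipeline("text-classification", model="breadlicker45/ModernBERT-base-gender")
print(clf("just posted my first photo on bluesky!"))
```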
rbelanec/train_cola_1744902674
rbelanec
"2025-04-19T18:23:03Z"
0
0
peft
[ "peft", "safetensors", "llama-factory", "lntuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
"2025-04-19T11:53:42Z"
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - lntuning - generated_from_trainer model-index: - name: train_cola_1744902674 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_cola_1744902674 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset. It achieves the following results on the evaluation set: - Loss: 0.1450 - Num Input Tokens Seen: 30508240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-------:|:-----:|:---------------:|:-----------------:| | 0.1957 | 0.4158 | 200 | 0.1817 | 153120 | | 0.1135 | 0.8316 | 400 | 0.1695 | 305504 | | 0.1838 | 1.2474 | 600 | 0.1719 | 458648 | | 0.1714 | 1.6632 | 800 | 0.1582 | 610680 | | 0.1598 | 2.0790 | 1000 | 0.1652 | 763880 | | 0.1457 | 2.4948 | 1200 | 0.1684 | 916648 | | 0.1318 | 2.9106 | 1400 | 0.1617 | 1068552 | | 0.1254 | 3.3264 | 1600 | 0.1538 | 1220928 | | 0.1177 | 3.7422 | 1800 | 0.1547 | 1373952 | | 0.1587 | 4.1580 | 2000 | 0.1559 | 1526312 | | 0.1062 | 4.5738 | 2200 | 0.1582 | 1678248 | | 0.139 | 4.9896 | 2400 | 0.1594 | 1831112 | | 0.0888 | 5.4054 | 2600 | 0.1484 | 1983296 | | 0.1192 | 5.8212 | 2800 | 0.1509 | 2135968 | | 0.1676 | 6.2370 | 3000 | 0.1498 | 2289200 | | 0.0981 | 6.6528 | 3200 | 0.1545 | 2441648 | | 0.1514 | 7.0686 | 3400 | 0.1543 | 2593344 | | 0.0984 | 7.4844 | 3600 | 0.1450 | 2745792 | | 0.0861 | 7.9002 | 3800 | 0.1477 | 2898816 | | 0.1052 | 8.3160 | 4000 | 0.1481 | 3050480 | | 0.1181 | 8.7318 | 4200 | 0.1555 | 3202864 | | 0.0822 | 9.1476 | 4400 | 0.1496 | 3355680 | | 0.1244 | 9.5634 | 4600 | 0.1713 | 3508192 | | 0.1002 | 9.9792 | 4800 | 0.1487 | 3661568 | | 0.0598 | 10.3950 | 5000 | 0.1683 | 3813552 | | 0.0772 | 10.8108 | 5200 | 0.1500 | 3967024 | | 0.0858 | 11.2266 | 5400 | 0.1604 | 4120032 | | 0.193 | 11.6424 | 5600 | 0.1741 | 4272608 | | 0.108 | 12.0582 | 5800 | 0.1718 | 4424280 | | 0.0874 | 12.4740 | 6000 | 0.1606 | 4575480 | | 0.0524 | 12.8898 | 6200 | 0.1681 | 4728792 | | 0.064 | 13.3056 | 6400 | 0.1905 | 4880880 | | 0.0944 | 13.7214 | 6600 | 0.1584 | 5034608 | | 0.0552 | 14.1372 | 6800 | 0.1709 | 5186400 | | 0.0498 | 14.5530 | 7000 | 0.1980 | 5339008 | | 0.0986 | 14.9688 | 7200 | 0.1659 | 5491424 | | 0.0387 | 15.3846 | 7400 | 0.1943 | 5644520 | | 0.0439 | 15.8004 | 7600 | 0.1749 | 5796744 | | 0.0442 | 16.2162 | 7800 | 0.2338 | 5949536 | | 0.1374 | 16.6320 | 8000 | 0.2452 | 6102304 | | 0.0437 | 17.0478 | 8200 | 0.2341 | 6254288 | | 0.0768 | 17.4636 | 8400 | 0.2073 | 6407504 | | 0.0372 | 17.8794 | 8600 | 0.2091 | 6559760 | | 0.1061 | 18.2952 | 8800 | 0.2211 | 6711968 | | 0.0459 | 18.7110 | 9000 | 0.2630 | 6864736 | | 0.0671 | 19.1268 | 9200 | 0.2293 | 7016944 | | 
0.0401 | 19.5426 | 9400 | 0.2356 | 7169456 | | 0.0318 | 19.9584 | 9600 | 0.2766 | 7322736 | | 0.0566 | 20.3742 | 9800 | 0.2679 | 7474848 | | 0.0274 | 20.7900 | 10000 | 0.2942 | 7627360 | | 0.087 | 21.2058 | 10200 | 0.2988 | 7779952 | | 0.055 | 21.6216 | 10400 | 0.2842 | 7932848 | | 0.0222 | 22.0374 | 10600 | 0.2714 | 8085448 | | 0.0417 | 22.4532 | 10800 | 0.3261 | 8237768 | | 0.0358 | 22.8690 | 11000 | 0.2791 | 8390664 | | 0.0325 | 23.2848 | 11200 | 0.3150 | 8543280 | | 0.0052 | 23.7006 | 11400 | 0.3346 | 8696432 | | 0.0932 | 24.1164 | 11600 | 0.3394 | 8849408 | | 0.0061 | 24.5322 | 11800 | 0.3440 | 9001408 | | 0.0246 | 24.9480 | 12000 | 0.3293 | 9153696 | | 0.0253 | 25.3638 | 12200 | 0.3331 | 9307088 | | 0.026 | 25.7796 | 12400 | 0.3708 | 9459824 | | 0.0036 | 26.1954 | 12600 | 0.3640 | 9611704 | | 0.0035 | 26.6112 | 12800 | 0.3401 | 9764344 | | 0.0368 | 27.0270 | 13000 | 0.3367 | 9917064 | | 0.0031 | 27.4428 | 13200 | 0.4020 | 10068520 | | 0.0017 | 27.8586 | 13400 | 0.3679 | 10221224 | | 0.0018 | 28.2744 | 13600 | 0.3864 | 10373912 | | 0.0012 | 28.6902 | 13800 | 0.4108 | 10526808 | | 0.003 | 29.1060 | 14000 | 0.3892 | 10678976 | | 0.0045 | 29.5218 | 14200 | 0.3954 | 10831520 | | 0.0188 | 29.9376 | 14400 | 0.4060 | 10984224 | | 0.0022 | 30.3534 | 14600 | 0.4303 | 11135896 | | 0.0027 | 30.7692 | 14800 | 0.4427 | 11288728 | | 0.002 | 31.1850 | 15000 | 0.4246 | 11441040 | | 0.0021 | 31.6008 | 15200 | 0.4266 | 11593456 | | 0.0433 | 32.0166 | 15400 | 0.4899 | 11745744 | | 0.001 | 32.4324 | 15600 | 0.4568 | 11898672 | | 0.0025 | 32.8482 | 15800 | 0.5007 | 12050992 | | 0.0293 | 33.2640 | 16000 | 0.5254 | 12204352 | | 0.0269 | 33.6798 | 16200 | 0.5383 | 12356224 | | 0.002 | 34.0956 | 16400 | 0.5557 | 12507960 | | 0.0355 | 34.5114 | 16600 | 0.5490 | 12660760 | | 0.0007 | 34.9272 | 16800 | 0.5680 | 12813272 | | 0.0035 | 35.3430 | 17000 | 0.5824 | 12965896 | | 0.0293 | 35.7588 | 17200 | 0.6039 | 13118824 | | 0.0007 | 36.1746 | 17400 | 0.6206 | 13271872 | | 0.0128 | 36.5904 | 17600 | 0.6462 | 13424128 | | 0.0086 | 37.0062 | 17800 | 0.6276 | 13576056 | | 0.0139 | 37.4220 | 18000 | 0.6350 | 13728696 | | 0.0007 | 37.8378 | 18200 | 0.6730 | 13881368 | | 0.0002 | 38.2536 | 18400 | 0.6929 | 14033616 | | 0.0468 | 38.6694 | 18600 | 0.6921 | 14185616 | | 0.0251 | 39.0852 | 18800 | 0.7073 | 14338720 | | 0.0 | 39.5010 | 19000 | 0.7611 | 14490240 | | 0.0028 | 39.9168 | 19200 | 0.7695 | 14643072 | | 0.0001 | 40.3326 | 19400 | 0.7628 | 14795184 | | 0.0005 | 40.7484 | 19600 | 0.7207 | 14947312 | | 0.0 | 41.1642 | 19800 | 0.7724 | 15100336 | | 0.0495 | 41.5800 | 20000 | 0.7625 | 15252464 | | 0.0002 | 41.9958 | 20200 | 0.8545 | 15404912 | | 0.0001 | 42.4116 | 20400 | 0.8109 | 15557176 | | 0.0252 | 42.8274 | 20600 | 0.7835 | 15709912 | | 0.0001 | 43.2432 | 20800 | 0.8044 | 15862336 | | 0.0 | 43.6590 | 21000 | 0.8274 | 16014304 | | 0.0004 | 44.0748 | 21200 | 0.8358 | 16166680 | | 0.0231 | 44.4906 | 21400 | 0.8041 | 16320408 | | 0.0004 | 44.9064 | 21600 | 0.8178 | 16472888 | | 0.0003 | 45.3222 | 21800 | 0.8642 | 16625808 | | 0.0001 | 45.7380 | 22000 | 0.8394 | 16778288 | | 0.0001 | 46.1538 | 22200 | 0.8546 | 16931560 | | 0.0 | 46.5696 | 22400 | 0.8646 | 17083880 | | 0.0003 | 46.9854 | 22600 | 0.8434 | 17235976 | | 0.0 | 47.4012 | 22800 | 0.8887 | 17388152 | | 0.0 | 47.8170 | 23000 | 0.8348 | 17540824 | | 0.0009 | 48.2328 | 23200 | 0.8680 | 17693912 | | 0.0039 | 48.6486 | 23400 | 0.8540 | 17846296 | | 0.0 | 49.0644 | 23600 | 0.8674 | 17998760 | | 0.0002 | 49.4802 | 23800 | 0.8551 | 18152072 | | 0.0456 | 49.8960 | 24000 | 
0.8905 | 18304072 | | 0.0 | 50.3119 | 24200 | 0.8950 | 18455696 | | 0.0 | 50.7277 | 24400 | 0.9257 | 18608976 | | 0.0 | 51.1435 | 24600 | 0.8666 | 18760928 | | 0.0 | 51.5593 | 24800 | 0.8926 | 18913856 | | 0.0001 | 51.9751 | 25000 | 0.8867 | 19066528 | | 0.0271 | 52.3909 | 25200 | 0.8797 | 19218616 | | 0.0 | 52.8067 | 25400 | 0.8724 | 19370872 | | 0.0 | 53.2225 | 25600 | 0.8797 | 19524232 | | 0.0 | 53.6383 | 25800 | 0.8288 | 19676456 | | 0.0282 | 54.0541 | 26000 | 0.8787 | 19828504 | | 0.0054 | 54.4699 | 26200 | 0.8743 | 19980856 | | 0.0343 | 54.8857 | 26400 | 0.8487 | 20133784 | | 0.0101 | 55.3015 | 26600 | 0.8790 | 20286120 | | 0.0 | 55.7173 | 26800 | 0.8435 | 20439016 | | 0.0001 | 56.1331 | 27000 | 0.8624 | 20591320 | | 0.0 | 56.5489 | 27200 | 0.8957 | 20743736 | | 0.0002 | 56.9647 | 27400 | 0.8590 | 20896184 | | 0.0 | 57.3805 | 27600 | 0.8863 | 21049160 | | 0.0 | 57.7963 | 27800 | 0.8608 | 21201640 | | 0.0 | 58.2121 | 28000 | 0.8635 | 21354208 | | 0.0001 | 58.6279 | 28200 | 0.8397 | 21506752 | | 0.0 | 59.0437 | 28400 | 0.8804 | 21659696 | | 0.0 | 59.4595 | 28600 | 0.8637 | 21811600 | | 0.0 | 59.8753 | 28800 | 0.8831 | 21964272 | | 0.0 | 60.2911 | 29000 | 0.8396 | 22116648 | | 0.0 | 60.7069 | 29200 | 0.8828 | 22269032 | | 0.0002 | 61.1227 | 29400 | 0.9062 | 22421944 | | 0.0 | 61.5385 | 29600 | 0.8913 | 22574936 | | 0.0001 | 61.9543 | 29800 | 0.8643 | 22727064 | | 0.0 | 62.3701 | 30000 | 0.8615 | 22880256 | | 0.0022 | 62.7859 | 30200 | 0.8683 | 23032800 | | 0.0 | 63.2017 | 30400 | 0.8566 | 23184744 | | 0.0001 | 63.6175 | 30600 | 0.8671 | 23336904 | | 0.0 | 64.0333 | 30800 | 0.8533 | 23489432 | | 0.0001 | 64.4491 | 31000 | 0.8689 | 23641496 | | 0.0 | 64.8649 | 31200 | 0.8734 | 23794744 | | 0.0 | 65.2807 | 31400 | 0.8683 | 23947688 | | 0.0047 | 65.6965 | 31600 | 0.8709 | 24099432 | | 0.0004 | 66.1123 | 31800 | 0.8824 | 24251200 | | 0.0 | 66.5281 | 32000 | 0.8991 | 24404736 | | 0.0064 | 66.9439 | 32200 | 0.8599 | 24557120 | | 0.0 | 67.3597 | 32400 | 0.8702 | 24709616 | | 0.0 | 67.7755 | 32600 | 0.8736 | 24862224 | | 0.0 | 68.1913 | 32800 | 0.8590 | 25015296 | | 0.0001 | 68.6071 | 33000 | 0.8721 | 25167744 | | 0.0 | 69.0229 | 33200 | 0.8601 | 25321016 | | 0.0022 | 69.4387 | 33400 | 0.8809 | 25473368 | | 0.0 | 69.8545 | 33600 | 0.8834 | 25626520 | | 0.0016 | 70.2703 | 33800 | 0.8706 | 25778248 | | 0.0001 | 70.6861 | 34000 | 0.8782 | 25930920 | | 0.0 | 71.1019 | 34200 | 0.8792 | 26083456 | | 0.0 | 71.5177 | 34400 | 0.9009 | 26235552 | | 0.0 | 71.9335 | 34600 | 0.8789 | 26388832 | | 0.0 | 72.3493 | 34800 | 0.8802 | 26541680 | | 0.0 | 72.7651 | 35000 | 0.8647 | 26694832 | | 0.0014 | 73.1809 | 35200 | 0.8723 | 26847168 | | 0.0 | 73.5967 | 35400 | 0.8574 | 27000096 | | 0.0 | 74.0125 | 35600 | 0.8642 | 27151800 | | 0.0143 | 74.4283 | 35800 | 0.8676 | 27304152 | | 0.0 | 74.8441 | 36000 | 0.8728 | 27456856 | | 0.0 | 75.2599 | 36200 | 0.8842 | 27610376 | | 0.0 | 75.6757 | 36400 | 0.8783 | 27762984 | | 0.0 | 76.0915 | 36600 | 0.8702 | 27915504 | | 0.0059 | 76.5073 | 36800 | 0.8630 | 28068432 | | 0.0022 | 76.9231 | 37000 | 0.8839 | 28220720 | | 0.0 | 77.3389 | 37200 | 0.8861 | 28373600 | | 0.0 | 77.7547 | 37400 | 0.8690 | 28526304 | | 0.0042 | 78.1705 | 37600 | 0.8750 | 28678672 | | 0.0 | 78.5863 | 37800 | 0.8820 | 28831632 | | 0.0 | 79.0021 | 38000 | 0.8786 | 28983144 | | 0.0 | 79.4179 | 38200 | 0.8864 | 29136008 | | 0.0 | 79.8337 | 38400 | 0.8769 | 29288104 | | 0.0 | 80.2495 | 38600 | 0.8865 | 29440312 | | 0.0049 | 80.6653 | 38800 | 0.8902 | 29592888 | | 0.0 | 81.0811 | 39000 | 0.8877 | 29745320 | | 
0.0 | 81.4969 | 39200 | 0.8789 | 29898600 | | 0.0 | 81.9127 | 39400 | 0.8734 | 30050504 | | 0.0059 | 82.3285 | 39600 | 0.8737 | 30203576 | | 0.0 | 82.7443 | 39800 | 0.8784 | 30356408 | | 0.0 | 83.1601 | 40000 | 0.8806 | 30508240 | ### Framework versions - PEFT 0.15.1 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
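The card ends at framework versions without showing how to apply the adapter. A minimal loading sketch, assuming access to the gated base model; `PeftModel.from_pretrained` resolves the adapter type (LN-Tuning here, per the tags) from the adapter config:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated: accept the license on the Hub first
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_cola_1744902674")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```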
Navi004/deepseek-r1-finetuned_lora-adapter-Batch9_v3_DIAC_WoZ
Navi004
"2025-04-19T18:19:41Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-19T18:19:18Z"
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Navi004 - **License:** apache-2.0 - **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
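A loading sketch the Unsloth way, assuming the repo stores a PEFT adapter whose config points back at the 4-bit base named above (Unsloth resolves the base model from the adapter config):

```python
from unsloth import FastLanguageModel

# Loads base + adapter in one call; 4-bit to match the bnb-4bit base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Navi004/deepseek-r1-finetuned_lora-adapter-Batch9_v3_DIAC_WoZ",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```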
mradermacher/StarrySky-12B-i1-GGUF
mradermacher
"2025-04-19T18:16:51Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "chatml", "en", "ja", "base_model:yamatazen/StarrySky-12B", "base_model:quantized:yamatazen/StarrySky-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-04-19T15:58:30Z"
--- base_model: yamatazen/StarrySky-12B language: - en - ja library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - chatml --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/yamatazen/StarrySky-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/StarrySky-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/StarrySky-12B-i1-GGUF/resolve/main/StarrySky-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
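A concrete download-and-run sketch for the quant the table marks "fast, recommended" (llama.cpp usage follows the same pattern as the GGUF cards elsewhere on this page):

```bash
# Fetch the Q4_K_M quant, then chat with it via llama.cpp's conversation mode.
huggingface-cli download mradermacher/StarrySky-12B-i1-GGUF \
  StarrySky-12B.i1-Q4_K_M.gguf --local-dir .
llama-cli -m StarrySky-12B.i1-Q4_K_M.gguf -cnv -c 4096
```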
PlayerBPlaytime/MJ-Models
PlayerBPlaytime
"2025-04-19T18:09:19Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-01-18T21:52:38Z"
--- license: apache-2.0 ---
aniket-meta/llama-3.1-8b-mkb-lora-1-v3-targeted
aniket-meta
"2025-04-19T18:08:40Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T18:05:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pragsri8/gemma-9b-it_bs128_lr1e-5_carma_100k_iter2_w-verif_upgradeall-degrade0p2_rrm-neutrals0p34
pragsri8
"2025-04-19T18:04:58Z"
0
0
null
[ "safetensors", "gemma2", "license:apache-2.0", "region:us" ]
null
"2025-04-19T17:45:58Z"
--- license: apache-2.0 ---
TheCluster/VL-Rethinker-7B-mlx-8bit
TheCluster
"2025-04-19T18:03:54Z"
0
0
mlx
[ "mlx", "safetensors", "qwen2_5_vl", "chat", "apple", "8bit", "multimodal", "visual-question-answering", "en", "arxiv:2504.08837", "base_model:TIGER-Lab/VL-Rethinker-7B", "base_model:quantized:TIGER-Lab/VL-Rethinker-7B", "license:apache-2.0", "region:us" ]
visual-question-answering
"2025-04-18T19:10:13Z"
--- license: apache-2.0 base_model: - TIGER-Lab/VL-Rethinker-7B base_model_relation: quantized pipeline_tag: visual-question-answering tags: - chat - mlx - apple - 8bit - multimodal language: - en library_name: mlx --- # VL-Rethinker-7B 8-bit MLX This model was converted to MLX format from [`TIGER-Lab/VL-Rethinker-7B`](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) using mlx-vlm version **0.1.23**. Refer to the [original model card](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) and [**📖Paper**](https://arxiv.org/abs/2504.08837) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model TheCluster/VL-Rethinker-7B-mlx-8bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
MinaMila/llama_instbase_LoRa_Adult_cfda_ep1_22
MinaMila
"2025-04-19T18:03:42Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-19T18:03:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheCluster/VL-Rethinker-72B-mlx-4bit
TheCluster
"2025-04-19T18:03:41Z"
0
0
mlx
[ "mlx", "safetensors", "qwen2_5_vl", "chat", "apple", "4bit", "multimodal", "visual-question-answering", "en", "arxiv:2504.08837", "base_model:TIGER-Lab/VL-Rethinker-72B", "base_model:quantized:TIGER-Lab/VL-Rethinker-72B", "license:apache-2.0", "region:us" ]
visual-question-answering
"2025-04-18T20:10:10Z"
--- license: apache-2.0 base_model: - TIGER-Lab/VL-Rethinker-72B base_model_relation: quantized pipeline_tag: visual-question-answering tags: - chat - mlx - apple - 4bit - multimodal language: - en library_name: mlx --- # VL-Rethinker-72B 4-bit MLX This model was converted to MLX format from [`TIGER-Lab/VL-Rethinker-72B`](https://huggingface.co/TIGER-Lab/VL-Rethinker-72B) using mlx-vlm version **0.1.23**. Refer to the [original model card](https://huggingface.co/TIGER-Lab/VL-Rethinker-72B) and [**📖Paper**](https://arxiv.org/abs/2504.08837) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model TheCluster/VL-Rethinker-72B-mlx-4bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
kiwikiw/zudo
kiwikiw
"2025-04-19T18:03:08Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T17:59:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
filbertwijaya/Hokkien-Indonesian-Llama-2-Translator-7B-QLoRA-Adapters
filbertwijaya
"2025-04-19T18:02:22Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:Bohanlu/Taigi-Llama-2-Translator-7B", "base_model:finetune:Bohanlu/Taigi-Llama-2-Translator-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-19T18:02:17Z"
--- base_model: Bohanlu/Taigi-Llama-2-Translator-7B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** filbertwijaya - **License:** apache-2.0 - **Finetuned from model:** Bohanlu/Taigi-Llama-2-Translator-7B This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
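As with the other Unsloth-trained adapters on this page, a minimal PEFT loading sketch, assuming the repo holds a standard LoRA adapter for the base named in the card (the base translator defines its own prompt format; see its card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Bohanlu/Taigi-Llama-2-Translator-7B"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(
    base, "filbertwijaya/Hokkien-Indonesian-Llama-2-Translator-7B-QLoRA-Adapters"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```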
YuHaaa/QwQ-R1984-32B-mlx-6Bit
YuHaaa
"2025-04-19T18:01:05Z"
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "abliterated", "uncensored", "SEARCH", "mlx", "mlx-my-repo", "conversational", "en", "base_model:VIDraft/QwQ-R1984-32B", "base_model:quantized:VIDraft/QwQ-R1984-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "region:us" ]
text-generation
"2025-04-08T13:59:34Z"
--- license: apache-2.0 language: - en base_model: - VIDraft/QwQ-R1984-32B tags: - abliterated - uncensored - SEARCH - mlx - mlx-my-repo library_name: transformers --- # YuHaaa/QwQ-R1984-32B-mlx-6Bit The Model [YuHaaa/QwQ-R1984-32B-mlx-6Bit](https://huggingface.co/YuHaaa/QwQ-R1984-32B-mlx-6Bit) was converted to MLX format from [marcuscedricridia/QwQ-R1984-32B](https://huggingface.co/marcuscedricridia/QwQ-R1984-32B) using mlx-lm version **0.22.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("YuHaaa/QwQ-R1984-32B-mlx-6Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Paywinful/wav2vec2-large-xls-r-300m-akan-v4
Paywinful
"2025-04-19T17:59:47Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-04-19T17:56:45Z"
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-akan-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-akan-v4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2000 - training_steps: 20000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
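No usage snippet is given. A minimal transcription sketch, assuming a standard wav2vec2 CTC head (the audio path is a placeholder; the pipeline resamples file input to the model's 16 kHz rate via ffmpeg):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Paywinful/wav2vec2-large-xls-r-300m-akan-v4",
)
print(asr("sample_akan.wav")["text"])  # placeholder file path
```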
YuHaaa/QwQ-R1984-32B-mlx-4Bit
YuHaaa
"2025-04-19T17:59:18Z"
9
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "abliterated", "uncensored", "SEARCH", "mlx", "mlx-my-repo", "conversational", "en", "base_model:VIDraft/QwQ-R1984-32B", "base_model:quantized:VIDraft/QwQ-R1984-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
"2025-04-08T17:33:48Z"
--- license: apache-2.0 language: - en base_model: - VIDraft/QwQ-R1984-32B tags: - abliterated - uncensored - SEARCH - mlx - mlx-my-repo library_name: transformers --- # YuHaaa/QwQ-R1984-32B-mlx-4Bit The Model [YuHaaa/QwQ-R1984-32B-mlx-4Bit](https://huggingface.co/YuHaaa/QwQ-R1984-32B-mlx-4Bit) was converted to MLX format from [marcuscedricridia/QwQ-R1984-32B](https://huggingface.co/marcuscedricridia/QwQ-R1984-32B) using mlx-lm version **0.22.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("YuHaaa/QwQ-R1984-32B-mlx-4Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-2ndphase-Wangchanberta-APR-2
jab11769
"2025-04-19T17:58:11Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "text-classification", "generated_from_trainer", "base_model:jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-1stphase-Wangchanberta-APR-2", "base_model:finetune:jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-1stphase-Wangchanberta-APR-2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-19T17:57:58Z"
--- library_name: transformers base_model: jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-1stphase-Wangchanberta-APR-2 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: CPALL-Stock-Trend-Prediction-category-sentiment-filter-2ndphase-Wangchanberta-APR-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CPALL-Stock-Trend-Prediction-category-sentiment-filter-2ndphase-Wangchanberta-APR-2 This model is a fine-tuned version of [jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-1stphase-Wangchanberta-APR-2](https://huggingface.co/jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-1stphase-Wangchanberta-APR-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2217 - Accuracy: 0.3726 - Precision: 0.3958 - Recall: 0.3726 - F1: 0.3749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.2574 | 1.0 | 731 | 1.1491 | 0.2562 | 0.0657 | 0.2562 | 0.1046 | | 1.1227 | 2.0 | 1462 | 1.0988 | 0.4008 | 0.4161 | 0.4008 | 0.4048 | | 1.0686 | 3.0 | 2193 | 1.1409 | 0.3396 | 0.4271 | 0.3396 | 0.3180 | | 1.0409 | 4.0 | 2924 | 1.1024 | 0.3989 | 0.4056 | 0.3989 | 0.4017 | | 0.9948 | 5.0 | 3655 | 1.1379 | 0.3948 | 0.4033 | 0.3948 | 0.3929 | | 0.9723 | 6.0 | 4386 | 1.1585 | 0.3945 | 0.3975 | 0.3945 | 0.3957 | | 0.9104 | 7.0 | 5117 | 1.2217 | 0.3726 | 0.3958 | 0.3726 | 0.3749 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
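A minimal inference sketch; the card never names the trend classes, so treat the label mapping as coming from the repo's config (WangchanBERTa is a Thai camembert-style model, hence the Thai example):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jab11769/CPALL-Stock-Trend-Prediction-category-sentiment-filter-2ndphase-Wangchanberta-APR-2",
)
# Thai headline: "CP ALL announces latest quarterly results" (illustrative input).
print(clf("ซีพีออลล์ประกาศผลประกอบการไตรมาสล่าสุด"))
```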
omenlaparrr/surabaya
omenlaparrr
"2025-04-19T17:57:31Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-19T17:57:17Z"
--- license: apache-2.0 ---
EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0-GGUF
EpistemeAI
"2025-04-19T17:56:43Z"
11
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0", "base_model:quantized:EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-16T16:53:44Z"
--- base_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: mit language: - en --- # Uploaded model - **Developed by:** EpistemeAI - **License:** MIT - **Finetuned from model:** EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0 This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
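The card ships GGUF files but no run instructions. A sketch following the llama.cpp pattern used by the other GGUF cards here; the exact quant filename is not listed in the card, so it stays a placeholder:

```bash
# Grab whatever GGUF quants the repo ships, then point llama.cpp at one.
huggingface-cli download EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0-GGUF \
  --include "*.gguf" --local-dir .
llama-cli -m <downloaded-quant>.gguf -cnv  # replace with the actual filename
```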
bachzz/PPO-LunarLander-v2-1000_iters
bachzz
"2025-04-19T17:55:26Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-19T17:55:19Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -132.69 +/- 80.84 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="bachzz/PPO-LunarLander-v2-1000_iters", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
Darkhn/Rogue-Destiny-V2-Llama-3.3-70B
Darkhn
"2025-04-19T17:54:51Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2", "base_model:merge:Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2", "base_model:ReadyArt/Forgotten-Abomination-70B-v5.0", "base_model:merge:ReadyArt/Forgotten-Abomination-70B-v5.0", "base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:Steelskull/L3.3-MS-Nevoria-70b", "base_model:merge:Steelskull/L3.3-MS-Nevoria-70b", "base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T17:16:59Z"
--- base_model: - SentientAGI/Dobby-Unhinged-Llama-3.3-70B - ReadyArt/Forgotten-Abomination-70B-v5.0 - Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2 - nbeerbower/Llama3.1-Gutenberg-Doppel-70B - Steelskull/L3.3-MS-Nevoria-70b library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as a base. ### Models Merged The following models were included in the merge: * [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B) * [ReadyArt/Forgotten-Abomination-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Abomination-70B-v5.0) * [Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2](https://huggingface.co/Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2) * [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ReadyArt/Forgotten-Abomination-70B-v5.0 - model: Steelskull/L3.3-MS-Nevoria-70b - model: Nexesenex/Llama_3.3_70b_Wayfarer_Negative_fusion_v2 - model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B merge_method: model_stock base_model: Steelskull/L3.3-MS-Nevoria-70b out_dtype: bfloat16 chat_template: llama3 tokenizer: source: base ```
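To reproduce a merge like this locally, mergekit's CLI consumes the YAML above. A sketch; note that merging five 70B checkpoints needs several hundred GB of disk and substantial RAM:

```bash
pip install mergekit
# config.yaml holds the YAML block from this card.
mergekit-yaml config.yaml ./Rogue-Destiny-V2-Llama-3.3-70B
```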
chchen/Llama3-OpenBioLLM-8B-PsyCourse-fold9
chchen
"2025-04-19T17:52:58Z"
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:aaditya/Llama3-OpenBioLLM-8B", "base_model:adapter:aaditya/Llama3-OpenBioLLM-8B", "license:llama3", "region:us" ]
null
"2025-04-19T06:07:26Z"
--- library_name: peft license: llama3 base_model: aaditya/Llama3-OpenBioLLM-8B tags: - llama-factory - lora - generated_from_trainer model-index: - name: Llama3-OpenBioLLM-8B-PsyCourse-fold9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama3-OpenBioLLM-8B-PsyCourse-fold9 This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-train-fold9 dataset. It achieves the following results on the evaluation set: - Loss: 0.0367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5046 | 0.0768 | 50 | 0.3063 | | 0.1059 | 0.1535 | 100 | 0.0842 | | 0.0783 | 0.2303 | 150 | 0.0695 | | 0.0635 | 0.3070 | 200 | 0.0595 | | 0.075 | 0.3838 | 250 | 0.0530 | | 0.065 | 0.4606 | 300 | 0.0491 | | 0.0474 | 0.5373 | 350 | 0.0478 | | 0.0461 | 0.6141 | 400 | 0.0493 | | 0.0533 | 0.6908 | 450 | 0.0540 | | 0.048 | 0.7676 | 500 | 0.0457 | | 0.0694 | 0.8444 | 550 | 0.0475 | | 0.0396 | 0.9211 | 600 | 0.0416 | | 0.0412 | 0.9979 | 650 | 0.0386 | | 0.0339 | 1.0746 | 700 | 0.0457 | | 0.0357 | 1.1514 | 750 | 0.0434 | | 0.0336 | 1.2282 | 800 | 0.0408 | | 0.0342 | 1.3049 | 850 | 0.0414 | | 0.0307 | 1.3817 | 900 | 0.0407 | | 0.0312 | 1.4585 | 950 | 0.0379 | | 0.0314 | 1.5352 | 1000 | 0.0392 | | 0.0229 | 1.6120 | 1050 | 0.0367 | | 0.0337 | 1.6887 | 1100 | 0.0372 | | 0.028 | 1.7655 | 1150 | 0.0379 | | 0.0191 | 1.8423 | 1200 | 0.0388 | | 0.0348 | 1.9190 | 1250 | 0.0411 | | 0.0469 | 1.9958 | 1300 | 0.0399 | | 0.0193 | 2.0725 | 1350 | 0.0412 | | 0.0168 | 2.1493 | 1400 | 0.0416 | | 0.019 | 2.2261 | 1450 | 0.0390 | | 0.0268 | 2.3028 | 1500 | 0.0390 | | 0.0221 | 2.3796 | 1550 | 0.0412 | | 0.0264 | 2.4563 | 1600 | 0.0408 | | 0.0248 | 2.5331 | 1650 | 0.0390 | | 0.018 | 2.6099 | 1700 | 0.0397 | | 0.0148 | 2.6866 | 1750 | 0.0406 | | 0.0228 | 2.7634 | 1800 | 0.0416 | | 0.0216 | 2.8401 | 1850 | 0.0392 | | 0.021 | 2.9169 | 1900 | 0.0396 | | 0.016 | 2.9937 | 1950 | 0.0393 | | 0.0055 | 3.0704 | 2000 | 0.0446 | | 0.0128 | 3.1472 | 2050 | 0.0464 | | 0.0105 | 3.2239 | 2100 | 0.0466 | | 0.009 | 3.3007 | 2150 | 0.0450 | | 0.0087 | 3.3775 | 2200 | 0.0487 | | 0.0102 | 3.4542 | 2250 | 0.0473 | | 0.007 | 3.5310 | 2300 | 0.0486 | | 0.0113 | 3.6078 | 2350 | 0.0490 | | 0.0066 | 3.6845 | 2400 | 0.0522 | | 0.0064 | 3.7613 | 2450 | 0.0510 | | 0.0095 | 3.8380 | 2500 | 0.0514 | | 0.0089 | 3.9148 | 2550 | 0.0521 | | 0.0065 | 3.9916 | 2600 | 0.0524 | | 0.0034 | 4.0683 | 2650 | 0.0540 | | 0.0032 | 4.1451 | 2700 | 0.0563 | | 0.0026 | 4.2218 | 2750 | 0.0564 | | 0.0024 | 4.2986 | 2800 | 0.0586 | | 0.0021 | 4.3754 | 2850 | 0.0595 | | 0.0043 | 4.4521 | 2900 | 0.0604 | | 0.0019 | 4.5289 | 2950 | 0.0607 | | 0.0011 | 4.6056 | 3000 | 0.0610 | | 0.0018 | 4.6824 | 3050 | 
0.0617 | | 0.0051 | 4.7592 | 3100 | 0.0614 | | 0.0032 | 4.8359 | 3150 | 0.0617 | | 0.001 | 4.9127 | 3200 | 0.0617 | | 0.0029 | 4.9894 | 3250 | 0.0618 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
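Same loading pattern as the other PEFT adapters above, plus an optional merge step that folds the LoRA deltas into the base weights for standalone deployment (a sketch, assuming a standard LoRA adapter):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "aaditya/Llama3-OpenBioLLM-8B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "chchen/Llama3-OpenBioLLM-8B-PsyCourse-fold9")
merged = model.merge_and_unload()  # bake the adapter into the base weights
merged.save_pretrained("openbiollm-8b-psycourse-merged")
```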
Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF
Merlinoz11
"2025-04-19T17:47:33Z"
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "gemma3", "analytical-tasks", "bias-neutralization", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:TheDrummer/AmoralQA-v2", "base_model:soob3123/amoral-gemma3-12B-v2-qat", "base_model:quantized:soob3123/amoral-gemma3-12B-v2-qat", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-19T17:46:41Z"
--- base_model: soob3123/amoral-gemma3-12B-v2-qat datasets: - TheDrummer/AmoralQA-v2 language: - en license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - transformers - gemma3 - analytical-tasks - bias-neutralization - uncensored - llama-cpp - gguf-my-repo --- # Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF This model was converted to GGUF format from [`soob3123/amoral-gemma3-12B-v2-qat`](https://huggingface.co/soob3123/amoral-gemma3-12B-v2-qat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/soob3123/amoral-gemma3-12B-v2-qat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF --hf-file amoral-gemma3-12b-v2-qat-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF --hf-file amoral-gemma3-12b-v2-qat-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF --hf-file amoral-gemma3-12b-v2-qat-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Merlinoz11/amoral-gemma3-12B-v2-qat-Q8_0-GGUF --hf-file amoral-gemma3-12b-v2-qat-q8_0.gguf -c 2048 ```
sxsun1684/dpo-llama3-lora-judge
sxsun1684
"2025-04-19T17:47:07Z"
17
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-17T20:51:26Z"
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> # DPO Fine-Tuning Report: LLaMA 3.2 + LLM Judge Preference Dataset ## Overview This document outlines the training process, configuration, and usage of the DPO fine-tuned LLaMA 3.2 model using the **LLM Judge** preference dataset. The fine-tuned model is available on the Hugging Face Hub at: [`sxsun1684/dpo-llama3-lora-judge`](https://huggingface.co/sxsun1684/dpo-llama3-lora-judge). ## Model Information - **Base model**: `meta-llama/Llama-3.2-1B` - **Fine-tuning method**: Direct Preference Optimization (DPO) - **LoRA adaptation**: Enabled with `peft` - **Preference dataset**: `sxsun1684/llm_judge_lima50_preferences` ## Dataset Summary - Dataset format: LLM-generated preference pairs with `prompt`, `chosen`, and `rejected` fields - Size: 75 examples - Origin: Annotated responses based on 50 seed instructions from the LIMA dataset ## Preprocessing - Tokenization: - `prompt`: max length 128 - `chosen`/`rejected`: max length 384 - All sequences padded to max length using `tokenizer.eos_token` - Output fields: - `prompt_input_ids`, `prompt_attention_mask` - `chosen_input_ids`, `chosen_attention_mask` - `rejected_input_ids`, `rejected_attention_mask` ## Training Configuration ```python DPOConfig( beta=0.1, learning_rate=2e-5, per_device_train_batch_size=1, gradient_accumulation_steps=8, num_train_epochs=3, max_length=512, save_strategy="epoch", logging_steps=10, push_to_hub=False, report_to="none", ) ``` ### PEFT Configuration (LoRA) ```python LoraConfig( r=8, lora_alpha=16, bias="none", task_type="CAUSAL_LM", ) ``` ## Training Details - Total steps: ~27 - Loss range: ~0.64 → 0.60 - Environment: Colab Pro, 15GB GPU ## Usage Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("sxsun1684/dpo-llama3-lora-judge") tokenizer = AutoTokenizer.from_pretrained("sxsun1684/dpo-llama3-lora-judge") prompt = "Why do cats knead blankets before sleeping?" input_ids = tokenizer(prompt, return_tensors="pt").input_ids output = model.generate(input_ids, max_new_tokens=100) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## License & Credits - Base model licensed by Meta - Fine-tuned model for academic, non-commercial use - Author: `sxsun1684` - Dataset credits: Based on work from GAIR and OpenAssistant --- For questions or updates, contact the model maintainer via Hugging Face or GitHub.
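For reference, a minimal sketch of the preprocessing the report above describes (prompt capped at 128 tokens, chosen/rejected at 384, everything padded to max length with the EOS token). The function name is an invention for illustration; the lengths, padding behavior, and output field names come from the card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token = tokenizer.eos_token  # card: sequences padded using the EOS token

def preprocess(example):
    # Prompt max length 128; chosen/rejected max length 384, all padded to max length
    prompt = tokenizer(example["prompt"], max_length=128, padding="max_length", truncation=True)
    chosen = tokenizer(example["chosen"], max_length=384, padding="max_length", truncation=True)
    rejected = tokenizer(example["rejected"], max_length=384, padding="max_length", truncation=True)
    return {
        "prompt_input_ids": prompt["input_ids"],
        "prompt_attention_mask": prompt["attention_mask"],
        "chosen_input_ids": chosen["input_ids"],
        "chosen_attention_mask": chosen["attention_mask"],
        "rejected_input_ids": rejected["input_ids"],
        "rejected_attention_mask": rejected["attention_mask"],
    }
```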
Betha/fen_understanding_v1_r8
Betha
"2025-04-19T17:46:36Z"
85
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-04-15T18:56:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Themira/dual_encoder_xcsqa
Themira
"2025-04-19T17:45:27Z"
0
0
null
[ "pytorch", "license:apache-2.0", "region:us" ]
null
"2025-04-18T07:46:15Z"
--- license: apache-2.0 ---
rbelanec/train_rte_1744902658
rbelanec
"2025-04-19T17:42:06Z"
0
0
peft
[ "peft", "safetensors", "llama-factory", "ia3", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
"2025-04-19T08:02:21Z"
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - ia3 - generated_from_trainer model-index: - name: train_rte_1744902658 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_rte_1744902658 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the rte dataset. It achieves the following results on the evaluation set: - Loss: 0.0769 - Num Input Tokens Seen: 98761256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 40000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:--------:|:-----:|:---------------:|:-----------------:| | 0.092 | 1.4207 | 200 | 0.1188 | 496688 | | 0.0953 | 2.8414 | 400 | 0.1078 | 991488 | | 0.0855 | 4.2567 | 600 | 0.1014 | 1481464 | | 0.1016 | 5.6774 | 800 | 0.0951 | 1979088 | | 0.0674 | 7.0927 | 1000 | 0.0916 | 2468504 | | 0.0675 | 8.5134 | 1200 | 0.0880 | 2963120 | | 0.0855 | 9.9340 | 1400 | 0.0856 | 3459048 | | 0.0859 | 11.3494 | 1600 | 0.0827 | 3951104 | | 0.0635 | 12.7701 | 1800 | 0.0819 | 4445432 | | 0.0663 | 14.1854 | 2000 | 0.0801 | 4938824 | | 0.0538 | 15.6061 | 2200 | 0.0799 | 5433720 | | 0.0367 | 17.0214 | 2400 | 0.0782 | 5925896 | | 0.0789 | 18.4421 | 2600 | 0.0781 | 6422360 | | 0.0351 | 19.8627 | 2800 | 0.0769 | 6914152 | | 0.0685 | 21.2781 | 3000 | 0.0776 | 7403976 | | 0.0507 | 22.6988 | 3200 | 0.0777 | 7902520 | | 0.0508 | 24.1141 | 3400 | 0.0784 | 8394080 | | 0.0394 | 25.5348 | 3600 | 0.0784 | 8884224 | | 0.0448 | 26.9554 | 3800 | 0.0804 | 9382368 | | 0.0344 | 28.3708 | 4000 | 0.0781 | 9872768 | | 0.0462 | 29.7914 | 4200 | 0.0827 | 10366000 | | 0.0458 | 31.2068 | 4400 | 0.0818 | 10867488 | | 0.0274 | 32.6275 | 4600 | 0.0828 | 11358568 | | 0.0295 | 34.0428 | 4800 | 0.0867 | 11852320 | | 0.0279 | 35.4635 | 5000 | 0.0853 | 12343880 | | 0.0396 | 36.8841 | 5200 | 0.0864 | 12837040 | | 0.0394 | 38.2995 | 5400 | 0.0882 | 13329368 | | 0.0303 | 39.7201 | 5600 | 0.0914 | 13828784 | | 0.0265 | 41.1355 | 5800 | 0.0913 | 14315304 | | 0.0345 | 42.5561 | 6000 | 0.0965 | 14806592 | | 0.022 | 43.9768 | 6200 | 0.0976 | 15305208 | | 0.0608 | 45.3922 | 6400 | 0.0974 | 15791608 | | 0.0329 | 46.8128 | 6600 | 0.0993 | 16292464 | | 0.0078 | 48.2282 | 6800 | 0.0991 | 16781768 | | 0.0096 | 49.6488 | 7000 | 0.1037 | 17278560 | | 0.0179 | 51.0642 | 7200 | 0.1071 | 17769384 | | 0.024 | 52.4848 | 7400 | 0.1106 | 18262680 | | 0.025 | 53.9055 | 7600 | 0.1095 | 18763936 | | 0.0057 | 55.3209 | 7800 | 0.1153 | 19258096 | | 0.0096 | 56.7415 | 8000 | 0.1177 | 19753648 | | 0.0165 | 58.1569 | 8200 | 0.1222 | 20244128 | | 0.0033 | 59.5775 | 8400 | 0.1243 | 20739208 | | 0.0165 | 60.9982 | 8600 | 0.1350 | 21236872 | | 0.0311 | 62.4135 | 8800 | 0.1331 | 21726944 | | 0.0183 | 63.8342 | 9000 | 0.1417 | 22223288 | | 0.0047 | 
65.2496 | 9200 | 0.1453 | 22716672 | | 0.0114 | 66.6702 | 9400 | 0.1469 | 23209088 | | 0.0033 | 68.0856 | 9600 | 0.1570 | 23701520 | | 0.0019 | 69.5062 | 9800 | 0.1625 | 24197944 | | 0.0007 | 70.9269 | 10000 | 0.1689 | 24694272 | | 0.0085 | 72.3422 | 10200 | 0.1779 | 25191256 | | 0.0024 | 73.7629 | 10400 | 0.1801 | 25688288 | | 0.0032 | 75.1783 | 10600 | 0.1853 | 26177720 | | 0.0094 | 76.5989 | 10800 | 0.1942 | 26675248 | | 0.0036 | 78.0143 | 11000 | 0.1949 | 27168496 | | 0.0012 | 79.4349 | 11200 | 0.2083 | 27664360 | | 0.0006 | 80.8556 | 11400 | 0.2168 | 28161984 | | 0.0003 | 82.2709 | 11600 | 0.2216 | 28655448 | | 0.0001 | 83.6916 | 11800 | 0.2328 | 29151808 | | 0.0003 | 85.1070 | 12000 | 0.2439 | 29642952 | | 0.0012 | 86.5276 | 12200 | 0.2450 | 30140536 | | 0.0002 | 87.9483 | 12400 | 0.2557 | 30639808 | | 0.0001 | 89.3636 | 12600 | 0.2625 | 31135048 | | 0.0001 | 90.7843 | 12800 | 0.2692 | 31630256 | | 0.0 | 92.1996 | 13000 | 0.2778 | 32121256 | | 0.0001 | 93.6203 | 13200 | 0.2851 | 32618184 | | 0.0 | 95.0357 | 13400 | 0.2871 | 33115432 | | 0.0 | 96.4563 | 13600 | 0.2967 | 33609472 | | 0.0 | 97.8770 | 13800 | 0.2957 | 34098712 | | 0.0 | 99.2923 | 14000 | 0.3018 | 34590368 | | 0.0 | 100.7130 | 14200 | 0.3042 | 35081248 | | 0.0 | 102.1283 | 14400 | 0.3081 | 35571464 | | 0.0 | 103.5490 | 14600 | 0.3173 | 36063824 | | 0.0 | 104.9697 | 14800 | 0.3213 | 36557944 | | 0.0 | 106.3850 | 15000 | 0.3214 | 37048560 | | 0.0 | 107.8057 | 15200 | 0.3286 | 37543928 | | 0.0 | 109.2210 | 15400 | 0.3324 | 38035968 | | 0.0 | 110.6417 | 15600 | 0.3341 | 38526000 | | 0.0 | 112.0570 | 15800 | 0.3375 | 39021440 | | 0.0 | 113.4777 | 16000 | 0.3440 | 39519712 | | 0.0 | 114.8984 | 16200 | 0.3470 | 40014440 | | 0.0 | 116.3137 | 16400 | 0.3465 | 40509368 | | 0.0 | 117.7344 | 16600 | 0.3544 | 41001000 | | 0.0 | 119.1497 | 16800 | 0.3556 | 41492672 | | 0.0 | 120.5704 | 17000 | 0.3608 | 41991984 | | 0.0 | 121.9911 | 17200 | 0.3677 | 42486736 | | 0.0 | 123.4064 | 17400 | 0.3618 | 42979888 | | 0.0 | 124.8271 | 17600 | 0.3672 | 43473920 | | 0.0 | 126.2424 | 17800 | 0.3735 | 43963728 | | 0.0 | 127.6631 | 18000 | 0.3795 | 44457208 | | 0.0 | 129.0784 | 18200 | 0.3762 | 44952664 | | 0.0 | 130.4991 | 18400 | 0.3836 | 45446704 | | 0.0 | 131.9198 | 18600 | 0.3844 | 45936552 | | 0.0 | 133.3351 | 18800 | 0.3866 | 46426240 | | 0.0 | 134.7558 | 19000 | 0.3884 | 46921256 | | 0.0 | 136.1711 | 19200 | 0.3936 | 47412080 | | 0.0 | 137.5918 | 19400 | 0.4000 | 47911024 | | 0.0 | 139.0071 | 19600 | 0.4013 | 48404752 | | 0.0 | 140.4278 | 19800 | 0.3979 | 48901416 | | 0.0 | 141.8485 | 20000 | 0.4048 | 49400736 | | 0.0 | 143.2638 | 20200 | 0.4050 | 49895752 | | 0.0 | 144.6845 | 20400 | 0.4090 | 50380736 | | 0.0 | 146.0998 | 20600 | 0.4114 | 50871288 | | 0.0 | 147.5205 | 20800 | 0.4133 | 51360328 | | 0.0 | 148.9412 | 21000 | 0.4221 | 51853696 | | 0.0 | 150.3565 | 21200 | 0.4133 | 52348712 | | 0.0 | 151.7772 | 21400 | 0.4201 | 52842992 | | 0.0 | 153.1925 | 21600 | 0.4187 | 53335368 | | 0.0 | 154.6132 | 21800 | 0.4241 | 53831240 | | 0.0 | 156.0285 | 22000 | 0.4271 | 54320840 | | 0.0 | 157.4492 | 22200 | 0.4315 | 54818304 | | 0.0 | 158.8699 | 22400 | 0.4378 | 55310560 | | 0.0 | 160.2852 | 22600 | 0.4331 | 55805192 | | 0.0 | 161.7059 | 22800 | 0.4387 | 56294240 | | 0.0 | 163.1212 | 23000 | 0.4402 | 56785216 | | 0.0 | 164.5419 | 23200 | 0.4422 | 57277112 | | 0.0 | 165.9626 | 23400 | 0.4469 | 57768960 | | 0.0 | 167.3779 | 23600 | 0.4443 | 58259216 | | 0.0 | 168.7986 | 23800 | 0.4450 | 58754552 | | 0.0 | 170.2139 | 24000 | 0.4488 | 59250304 | | 0.0 | 
171.6346 | 24200 | 0.4542 | 59743752 | | 0.0 | 173.0499 | 24400 | 0.4631 | 60240920 | | 0.0 | 174.4706 | 24600 | 0.4591 | 60738488 | | 0.0 | 175.8913 | 24800 | 0.4609 | 61232632 | | 0.0 | 177.3066 | 25000 | 0.4604 | 61726896 | | 0.0 | 178.7273 | 25200 | 0.4639 | 62220440 | | 0.0 | 180.1426 | 25400 | 0.4690 | 62713544 | | 0.0 | 181.5633 | 25600 | 0.4743 | 63208560 | | 0.0 | 182.9840 | 25800 | 0.4737 | 63703320 | | 0.0 | 184.3993 | 26000 | 0.4700 | 64195280 | | 0.0 | 185.8200 | 26200 | 0.4720 | 64693448 | | 0.0 | 187.2353 | 26400 | 0.4812 | 65180864 | | 0.0 | 188.6560 | 26600 | 0.4797 | 65680024 | | 0.0 | 190.0713 | 26800 | 0.4736 | 66173368 | | 0.0 | 191.4920 | 27000 | 0.4879 | 66664968 | | 0.0 | 192.9127 | 27200 | 0.4814 | 67157528 | | 0.0 | 194.3280 | 27400 | 0.4878 | 67657848 | | 0.0 | 195.7487 | 27600 | 0.4905 | 68154280 | | 0.0 | 197.1640 | 27800 | 0.4967 | 68648760 | | 0.0 | 198.5847 | 28000 | 0.4929 | 69145424 | | 0.0 | 200.0 | 28200 | 0.4865 | 69634592 | | 0.0 | 201.4207 | 28400 | 0.5011 | 70126824 | | 0.0 | 202.8414 | 28600 | 0.4969 | 70621048 | | 0.0 | 204.2567 | 28800 | 0.5014 | 71112744 | | 0.0 | 205.6774 | 29000 | 0.5011 | 71609328 | | 0.0 | 207.0927 | 29200 | 0.5018 | 72096488 | | 0.0 | 208.5134 | 29400 | 0.5106 | 72590600 | | 0.0 | 209.9340 | 29600 | 0.5025 | 73085400 | | 0.0 | 211.3494 | 29800 | 0.5078 | 73578704 | | 0.0 | 212.7701 | 30000 | 0.5055 | 74071832 | | 0.0 | 214.1854 | 30200 | 0.5065 | 74558088 | | 0.0 | 215.6061 | 30400 | 0.5114 | 75054720 | | 0.0 | 217.0214 | 30600 | 0.5145 | 75550968 | | 0.0 | 218.4421 | 30800 | 0.5194 | 76052048 | | 0.0 | 219.8627 | 31000 | 0.5042 | 76544760 | | 0.0 | 221.2781 | 31200 | 0.5109 | 77039312 | | 0.0 | 222.6988 | 31400 | 0.5114 | 77536608 | | 0.0 | 224.1141 | 31600 | 0.5141 | 78029096 | | 0.0 | 225.5348 | 31800 | 0.5181 | 78521640 | | 0.0 | 226.9554 | 32000 | 0.5107 | 79014704 | | 0.0 | 228.3708 | 32200 | 0.5216 | 79509056 | | 0.0 | 229.7914 | 32400 | 0.5178 | 80004760 | | 0.0 | 231.2068 | 32600 | 0.5175 | 80498576 | | 0.0 | 232.6275 | 32800 | 0.5124 | 80992160 | | 0.0 | 234.0428 | 33000 | 0.5140 | 81484216 | | 0.0 | 235.4635 | 33200 | 0.5206 | 81981536 | | 0.0 | 236.8841 | 33400 | 0.5279 | 82469112 | | 0.0 | 238.2995 | 33600 | 0.5172 | 82967264 | | 0.0 | 239.7201 | 33800 | 0.5282 | 83460632 | | 0.0 | 241.1355 | 34000 | 0.5240 | 83946936 | | 0.0 | 242.5561 | 34200 | 0.5260 | 84438976 | | 0.0 | 243.9768 | 34400 | 0.5288 | 84936992 | | 0.0 | 245.3922 | 34600 | 0.5308 | 85424648 | | 0.0 | 246.8128 | 34800 | 0.5231 | 85921552 | | 0.0 | 248.2282 | 35000 | 0.5305 | 86414392 | | 0.0 | 249.6488 | 35200 | 0.5272 | 86904424 | | 0.0 | 251.0642 | 35400 | 0.5262 | 87399560 | | 0.0 | 252.4848 | 35600 | 0.5229 | 87900568 | | 0.0 | 253.9055 | 35800 | 0.5293 | 88391952 | | 0.0 | 255.3209 | 36000 | 0.5363 | 88887288 | | 0.0 | 256.7415 | 36200 | 0.5269 | 89375944 | | 0.0 | 258.1569 | 36400 | 0.5282 | 89868176 | | 0.0 | 259.5775 | 36600 | 0.5216 | 90365056 | | 0.0 | 260.9982 | 36800 | 0.5223 | 90855096 | | 0.0 | 262.4135 | 37000 | 0.5215 | 91348504 | | 0.0 | 263.8342 | 37200 | 0.5219 | 91843280 | | 0.0 | 265.2496 | 37400 | 0.5263 | 92339160 | | 0.0 | 266.6702 | 37600 | 0.5266 | 92834936 | | 0.0 | 268.0856 | 37800 | 0.5363 | 93329096 | | 0.0 | 269.5062 | 38000 | 0.5160 | 93825960 | | 0.0 | 270.9269 | 38200 | 0.5332 | 94316976 | | 0.0 | 272.3422 | 38400 | 0.5265 | 94808456 | | 0.0 | 273.7629 | 38600 | 0.5259 | 95304384 | | 0.0 | 275.1783 | 38800 | 0.5236 | 95796256 | | 0.0 | 276.5989 | 39000 | 0.5329 | 96293992 | | 0.0 | 278.0143 | 39200 | 0.5310 | 
96783960 | | 0.0 | 279.4349 | 39400 | 0.5310 | 97275176 | | 0.0 | 280.8556 | 39600 | 0.5310 | 97769584 | | 0.0 | 282.2709 | 39800 | 0.5310 | 98266712 | | 0.0 | 283.6916 | 40000 | 0.5310 | 98761256 | ### Framework versions - PEFT 0.15.1 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
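A minimal loading sketch for the adapter above, assuming the IA³ weights are published under the repo id `rbelanec/train_rte_1744902658` shown in this record and load like any other `peft` adapter; the RTE prompt format is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model from the card; the IA3 adapter is applied on top
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "rbelanec/train_rte_1744902658")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

prompt = "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```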
JurisAnalyzer/A_legal
JurisAnalyzer
"2025-04-19T17:41:24Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-19T17:41:17Z"
--- license: apache-2.0 ---
s191287173/khalid
s191287173
"2025-04-19T17:38:09Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-19T17:37:50Z"
--- license: apache-2.0 ---
tanya17/mt5-swahili-finetuned
tanya17
"2025-04-19T17:37:10Z"
7
1
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-04-18T12:25:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [TANYA TOMAR ] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf
RichardErkhov
"2025-04-19T17:35:34Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T16:10:08Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mpg27_mistral7bv3_sft_ogd_rms_epoch1 - GGUF - Model creator: https://huggingface.co/yjwon/ - Original model: https://huggingface.co/yjwon/mpg27_mistral7bv3_sft_ogd_rms_epoch1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q2_K.gguf) | Q2_K | 2.54GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_XS.gguf) | IQ3_XS | 2.82GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_S.gguf) | IQ3_S | 2.97GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K.gguf) | Q3_K | 3.28GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_XS.gguf) | IQ4_XS | 3.68GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_0.gguf) | Q4_0 | 3.83GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K.gguf) | Q4_K | 4.07GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | 
[mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_1.gguf) | Q4_1 | 4.24GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_0.gguf) | Q5_0 | 4.66GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_S.gguf) | Q5_K_S | 4.66GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K.gguf) | Q5_K | 4.78GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q5_1.gguf) | Q5_1 | 5.07GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q6_K.gguf) | Q6_K | 5.54GB | | [mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
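The quant table above lists files but gives no run command; a minimal sketch following the llama.cpp invocation pattern used by the other GGUF cards in this document, with the Q4_K_M file picked as an example:

```bash
llama-cli --hf-repo RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch1-gguf \
  --hf-file mpg27_mistral7bv3_sft_ogd_rms_epoch1.Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```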
devngho/llama3-jamo-tokenizer
devngho
"2025-04-19T17:35:13Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-06T03:49:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robertou2/task-7-microsoft-Phi-3-medium-128k-instruct
robertou2
"2025-04-19T17:34:14Z"
362
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3-medium-128k-instruct", "base_model:adapter:microsoft/Phi-3-medium-128k-instruct", "region:us" ]
null
"2025-04-17T18:31:26Z"
--- base_model: microsoft/Phi-3-medium-128k-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
nhimtho231/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hairy_soaring_pheasant
nhimtho231
"2025-04-19T17:34:10Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am hairy soaring pheasant", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T17:25:50Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hairy_soaring_pheasant tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am hairy soaring pheasant - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hairy_soaring_pheasant This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nhimtho231/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hairy_soaring_pheasant", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cshim-cmu/k10
cshim-cmu
"2025-04-19T17:33:05Z"
0
0
null
[ "pytorch", "marian", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
"2025-04-19T16:50:06Z"
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: k10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # k10 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-fi](https://huggingface.co/Helsinki-NLP/opus-mt-es-fi) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3717 - Bleu: 0.6904 - Chrf: 11.3255 - Gen Len: 15.4527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:| | 8.4287 | 0.09 | 1000 | 8.9256 | 0.939 | 11.8729 | 12.9497 | | 7.3476 | 0.18 | 2000 | 7.9195 | 0.9664 | 11.781 | 13.0443 | | 6.2951 | 0.27 | 3000 | 6.8831 | 0.8829 | 11.6758 | 13.164 | | 5.1269 | 0.36 | 4000 | 5.7552 | 0.8843 | 11.5684 | 13.4769 | | 3.9894 | 0.45 | 5000 | 4.5444 | 0.8652 | 11.4054 | 14.3219 | | 3.0077 | 0.54 | 6000 | 3.3717 | 0.6904 | 11.3255 | 15.4527 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
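A minimal inference sketch for the card above, assuming the checkpoint loads as a standard MarianMT model under the repo id `cshim-cmu/k10` from this record (the Spanish example input follows the `opus-mt-es-fi` base; the card does not state the fine-tuning language pair):

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("cshim-cmu/k10")
model = MarianMTModel.from_pretrained("cshim-cmu/k10")

inputs = tokenizer("Hola, ¿cómo estás?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```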
RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf
RichardErkhov
"2025-04-19T17:31:47Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T13:26:10Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5 - GGUF - Model creator: https://huggingface.co/yjwon/ - Original model: https://huggingface.co/yjwon/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q2_K.gguf) | Q2_K | 2.54GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_XS.gguf) | IQ3_XS | 2.82GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_S.gguf) | IQ3_S | 2.97GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K.gguf) | Q3_K | 3.28GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ4_XS.gguf) | IQ4_XS | 3.68GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_0.gguf) | Q4_0 | 3.83GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K.gguf) | Q4_K | 4.07GB | | 
[mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_1.gguf) | Q4_1 | 4.24GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_0.gguf) | Q5_0 | 4.66GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K_S.gguf) | Q5_K_S | 4.66GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K.gguf) | Q5_K | 4.78GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q5_1.gguf) | Q5_1 | 5.07GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q6_K.gguf) | Q6_K | 5.54GB | | [mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf/blob/main/mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
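The quant table near the top of this card lists the files but no loading code. As a rough, untested sketch (assuming `llama-cpp-python` and `huggingface_hub` are installed; the Q4_K_M file from the table is just one example), one way to run these GGUF quants from Python is:

```python
# Sketch only: pull one of the quantised files listed in the table above and run it locally.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5-gguf",
    filename="mpg27_mistral7bv3_sft_dpo_beta2e-2_epoch5.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=2048,  # context length; adjust to taste
)
out = llm("Explain DPO fine-tuning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Any other quant from the table can be swapped in via `filename`; smaller quants trade quality for memory.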
AirMannanov/llm-course-hw3-dora
AirMannanov
"2025-04-19T17:31:40Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-12T17:54:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
miuberry999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_wary_marmot
miuberry999
"2025-04-19T17:23:08Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mimic wary marmot", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T17:14:37Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_wary_marmot tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mimic wary marmot - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_wary_marmot This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="miuberry999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_wary_marmot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
abhinavm16104/TinyLlama-1.1B-qlora-mango
abhinavm16104
"2025-04-19T17:21:12Z"
0
0
null
[ "safetensors", "llama", "en", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:mit", "region:us" ]
null
"2025-04-18T22:13:00Z"
--- license: mit datasets: - HuggingFaceH4/ultrachat_200k language: - en metrics: - perplexity base_model: - TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # 🍋 TinyLlama-1.1B-qlora-mango A fine-tuned version of the [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model, tuned with QLoRA on the [Ultrachat200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) prompt-response dataset. --- ## Model Details - **Base Model**: TinyLlama-1.1B-Chat - **Tuning Method**: QLoRA (Quantized Low-Rank Adaptation) - **Use Case**: Instruction-following / Chatbot generation - **Tokenizer**: TinyLlama tokenizer - **Framework**: Hugging Face Transformers --- ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline tokenizer = AutoTokenizer.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango") model = AutoModelForCausalLM.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango") pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) prompt = "<|user|>\nTell me something about mangoes.</s>\n<|assistant|>" print(pipe(prompt)[0]["generated_text"]) ``` ## Example Prompt ```text <|user|> Tell me something about mangoes.</s> <|assistant|> Mangoes are a type of fruit that originated in Southeast Asia and are now grown in many parts of the world... ``` ## Citation If you use TinyLlama-1.1B-qlora-mango in your work, please cite the author: ```bibtex @misc{tinyllama-1.1B-qlora-mango, author = {Abhinav Mangalore}, title = {TinyLlama-1.1B-qlora-mango}, year = {2025}, url = {https://huggingface.co/abhinavm16104/TinyLlama-1.1B-qlora-mango} } ```
saaduddinM/LLama-8BI-J-PRISM
saaduddinM
"2025-04-19T17:20:31Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "region:us" ]
null
"2025-04-19T17:20:20Z"
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.0
Synthcite24/rl_course_vizdoom_health_gathering_supreme
Synthcite24
"2025-04-19T17:19:37Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-19T17:18:44Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 13.84 +/- 3.47 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Synthcite24/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
fadi77/pl-bert
fadi77
"2025-04-19T17:19:24Z"
0
0
null
[ "arxiv:2301.08810", "arxiv:2407.03236", "region:us" ]
null
"2025-03-27T16:24:04Z"
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Arabic PL-BERT Models This model card describes a collection of three Arabic BERT models trained with different objectives and datasets for phoneme-aware language modeling. ## Model Details ### Model Description These models are Arabic adaptations of the PL-BERT (Phoneme-Level BERT) approach introduced in [Li et al. (2023)](https://arxiv.org/pdf/2301.08810). The models incorporate phonemic information to enhance language understanding, with variations in training objectives and data preprocessing. The collection includes three models: - **mlm_p2g_non_diacritics**: Trained with both MLM (Masked Language Modeling) and P2G (Phoneme-to-Grapheme) objectives on non-diacritized Arabic text - **mlm_only_non_diacritics**: Trained with only the MLM objective on non-diacritized Arabic text - **mlm_only_with_diacritics**: Fine-tuned version of mlm_only_non_diacritics on diacritized Arabic text **Developed by:** Fadi (GitHub: Fadi987) **Model type:** Transformer-based language models (BERT variants) **Language:** Arabic ### Model Sources - **Paper (PL-BERT approach):** [Li et al. (2023)](https://arxiv.org/pdf/2301.08810) ## Training Details ### Training Data All models were initially trained on a cleaned version of the Arabic Wikipedia dataset. The dataset is available at [wikipedia.20231101.ar](https://huggingface.co/datasets/wikimedia/wikipedia/tree/main/20231101.ar). For the **mlm_only_with_diacritics** model, a random sample of 200,000 entries (out of approximately 1.2 million) was selected from the Wikipedia Arabic dataset and fully diacritized using the state-of-the-art CATT diacritizer ([Abjad AI, 2024](https://github.com/abjadai/catt)), introduced in [this paper](https://arxiv.org/abs/2407.03236) and licensed under CC BY-NC 4.0. ### Training Procedure #### Model Architecture and Objectives The models follow different training objectives: 1. **mlm_p2g_non_diacritics**: - Trained with dual objectives similar to the original PL-BERT: - Masked Language Modeling (MLM): Standard BERT pre-training objective - Phoneme-to-Grapheme (P2G): Predicting token IDs from phonemic representations - Tokenization was performed using [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2), which uses subword tokenization - Trained for 10 epochs on non-diacritized Wikipedia Arabic 2. **mlm_only_non_diacritics**: - Trained with only the MLM objective - Removes the P2G objective, which according to ablation studies in the PL-BERT paper minimally affected performance - This removal eliminated dependence on tokenization, which: - Reduced the model size considerably (word/subword tokenization has a much larger vocabulary than phoneme vocabulary) - Allowed phonemization of entire sentences at once, resulting in more accurate phonemization - Trained on non-diacritized Wikipedia Arabic 3. **mlm_only_with_diacritics**: - Fine-tuned version of mlm_only_non_diacritics - Trained for 10 epochs on diacritized Arabic text - Uses the same MLM-only objective ## Technical Considerations ### Tokenization Challenges For the **mlm_p2g_non_diacritics** model, a notable limitation was the use of subword tokenization. This approach is not ideal for pronunciation modeling because phonemizing parts of words independently loses the context of the word, which heavily affects pronunciation.
The authors of the original PL-BERT paper used a word-level tokenizer for English, but a comparable high-quality word-level tokenizer was not available for Arabic. This limitation was addressed in the subsequent models by removing the P2G objective. ### Diacritization Arabic text can be written with or without diacritics (short vowel marks). The **mlm_only_with_diacritics** model specifically addresses this by training on fully diacritized text, which provides explicit pronunciation information that is typically absent in standard written Arabic. ## Uses These models can be used for Arabic natural language understanding tasks where phonemic awareness may be beneficial, such as: - Text-to-speech - Speech recognition post-processing - Dialect identification - Pronunciation-sensitive applications For examples on how these models can be used in code, take a look at: https://github.com/Fadi987/StyleTTS2/blob/main/Utils/PLBERT/util.py ## Bias, Risks, and Limitations The models are trained on Wikipedia data, which may not represent all varieties of Arabic equally. The diacritization process, while state-of-the-art, may introduce some errors or biases in the training data. The subword tokenization approach used in the mlm_p2g_non_diacritics model has limitations for phonemic modeling as noted above. ## Citation **BibTeX:** ```bibtex @article{catt2024, title={CATT: Character-based Arabic Tashkeel Transformer}, author={Alasmary, Faris and Zaafarani, Orjuwan and Ghannam, Ahmad}, journal={arXiv preprint arXiv:2407.03236}, year={2024} } @article{plbert2023, title={Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions}, author={Li, Yinghao Aaron and Han, Cong and Jiang, Xilin and Mesgarani, Nima}, journal={arXiv preprint arXiv:2301.08810}, year={2023} } ```
ykarout/phi-4-deepseek-r1-distilled-16bit
ykarout
"2025-04-19T17:19:06Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:57:21Z"
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ykarout - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
falavi/SpeechLMM_v1.1_L_ASR
falavi
"2025-04-19T17:17:11Z"
0
0
null
[ "safetensors", "speechlmm", "license:other", "region:us" ]
null
"2025-04-19T15:22:15Z"
--- license: other license_name: license license_link: https://huggingface.co/meetween/Llama-speechlmm-1.0-l/blob/main/LICENSE ---
fadi77/StyleTTS2-LibriTTS-arabic
fadi77
"2025-04-19T17:17:02Z"
0
0
null
[ "text-to-speech", "tts", "arabic", "styletts2", "pl-bert", "ar", "arxiv:2306.07691", "license:mit", "region:us" ]
text-to-speech
"2025-04-10T20:33:29Z"
--- language: ar tags: - text-to-speech - tts - arabic - styletts2 - pl-bert license: mit hardware: H100 --- # Model Card for Arabic StyleTTS2 This is an Arabic text-to-speech model based on StyleTTS2 architecture, specifically adapted for Arabic language synthesis. The model achieves good quality Arabic speech synthesis, though not yet state-of-the-art, and further experimentation is needed to optimize performance for Arabic language specifically. All training objectives from the original StyleTTS2 were maintained, except for the WavLM objectives which were removed as they were primarily designed for English speech. ## Example Here is an example output from the model: #### Sample 1 <audio controls> <source src="https://huggingface.co/fadi77/StyleTTS2-LibriTTS-arabic/resolve/main/synthesized_audio.wav" type="audio/wav"> Your browser does not support the audio element. </audio> ## Efficiency and Performance A key strength of this model lies in its efficiency and performance characteristics: - **Compact Architecture**: Achieves impressive quality with <100M parameters - **Limited Training Data**: Trained on only 22 hours of single-speaker audio - **Transfer Learning**: Successfully fine-tuned from LibriTTS multi-speaker model to single-speaker Arabic - **Resource Efficient**: Good quality achieved despite limited computational resources Note: According to the StyleTTS2 authors, performance should improve further when training a single-speaker model from scratch rather than fine-tuning. This wasn't attempted in our case due to computational resource constraints, suggesting potential for even better results with more extensive training. ## Model Details ### Model Description This model is a modified version of StyleTTS2, specifically adapted for Arabic text-to-speech synthesis. It incorporates a custom-trained PL-BERT model for Arabic language understanding and removes the WavLM adversarial training component (which was primarily designed for English). - **Developed by:** Fadi (GitHub: Fadi987) - **Model type:** Text-to-Speech (StyleTTS2 architecture) - **Language(s):** Arabic - **Finetuned from model:** [yl4579/StyleTTS2-LibriTTS](https://huggingface.co/yl4579/StyleTTS2-LibriTTS) ### Model Sources - **Repository:** [Fadi987/StyleTTS2](https://github.com/Fadi987/StyleTTS2) - **Paper:** [StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models](https://arxiv.org/abs/2306.07691) - **PL-BERT Model:** [fadi77/pl-bert](https://huggingface.co/fadi77/pl-bert) ## Uses ### Direct Use The model can be used for generating Arabic speech from text. To use the model: 1. Clone the StyleTTS2 repository: ```bash git clone https://github.com/Fadi987/StyleTTS2 cd StyleTTS2 ``` 2. Install `espeak-ng` for phonemization backend: ```bash # For macOS brew install espeak-ng # For Ubuntu/Debian sudo apt-get install espeak-ng # For Windows # Download and install espeak-ng from: https://github.com/espeak-ng/espeak-ng/releases ``` 3. Install Python dependencies: ```bash pip install -r requirements.txt ``` 4. Download the `model.pth` and `config.yml` files from this repository 5. 
Run inference using: ```bash python inference.py --config config.yml --model model.pth --text "الإِتْقَانُ يَحْتَاجُ إِلَى الْعَمَلِ وَالْمُثَابَرَة" ``` Make sure to use properly diacritized Arabic text for best results. ### Out-of-Scope Use The model is specifically designed for Arabic text-to-speech synthesis and may not perform well for: - Other languages - Heavy dialect variations - Non-diacritized Arabic text ## Training Details ### Training Data - Training was performed on approximately 22 hours of Arabic audiobook data - Dataset: [fadi77/arabic-audiobook-dataset-24khz](https://huggingface.co/datasets/fadi77/arabic-audiobook-dataset-24khz) - The PL-BERT component was trained on fully diacritized Wikipedia Arabic text ### Training Hyperparameters - **Number of epochs:** 20 - **Diffusion training:** Started from epoch 5 ### Objectives - **Training objectives:** All original StyleTTS2 objectives maintained, except WavLM adversarial training - **Validation objectives:** Identical to original StyleTTS2 validation process ### Compute Infrastructure - **Hardware Type:** NVIDIA H100 GPU ### Notable Modifications from Original StyleTTS2 in Architecture and Objectives The architecture of the model follows that of StyleTTS2 with the following exceptions: - Removed WavLM adversarial training component - Custom PL-BERT trained for Arabic language ## Citation **BibTeX:** ```bibtex @article{styletts2, title={StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models}, author={Li, Yinghao Aaron and Han, Cong and Raghavan, Vinay S. and Mischler, Gavin and Mesgarani, Nima}, journal={arXiv preprint arXiv:2306.07691}, year={2023} } ``` ## Model Card Contact GitHub: [@Fadi987](https://github.com/Fadi987) Hugging Face: [@fadi77](https://huggingface.co/fadi77)
TareksLab/Dungeons-and-Dragons-V2b-LLaMa-70B
TareksLab
"2025-04-19T17:16:38Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:TareksLab/Doppleganger-V8-LLaMa-70B", "base_model:merge:TareksLab/Doppleganger-V8-LLaMa-70B", "base_model:TareksLab/Dragons-V1-LLaMa-70B", "base_model:merge:TareksLab/Dragons-V1-LLaMa-70B", "base_model:TareksLab/Dungeons-R1-LLaMa-70B", "base_model:merge:TareksLab/Dungeons-R1-LLaMa-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:45:11Z"
--- base_model: - TareksLab/Doppleganger-V8-LLaMa-70B - TareksLab/Dungeons-R1-LLaMa-70B - TareksLab/Dragons-V1-LLaMa-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TareksLab/Doppleganger-V8-LLaMa-70B](https://huggingface.co/TareksLab/Doppleganger-V8-LLaMa-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/Dungeons-R1-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-R1-LLaMa-70B) * [TareksLab/Dragons-V1-LLaMa-70B](https://huggingface.co/TareksLab/Dragons-V1-LLaMa-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Dungeons-R1-LLaMa-70B parameters: weight: 0.35 density: 0.5 select_topk: 0.5 lambda: 1.0 - model: TareksLab/Dragons-V1-LLaMa-70B parameters: weight: 0.35 density: 0.5 select_topk: 0.5 lambda: 1.0 - model: TareksLab/Doppleganger-V8-LLaMa-70B parameters: weight: 0.30 density: 0.5 select_topk: 0.5 lambda: 1.0 base_model: TareksLab/Doppleganger-V8-LLaMa-70B merge_method: sce parameters: normalize: false tokenizer: source: base chat_template: llama3 dtype: bfloat16 ```
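Since the YAML above fully specifies the merge, the same result can in principle be reproduced programmatically. Below is a minimal, untested sketch using mergekit's Python API, assuming the YAML is saved as `config.yml` and that enough disk and RAM are available for three 70B checkpoints; the output directory name is an assumption:

```python
# Sketch only: re-run the SCE merge described above via mergekit's Python API.
# Requires: pip install mergekit
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:  # the YAML from this card
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Dungeons-and-Dragons-V2b-LLaMa-70B",  # output directory (assumed name)
    options=MergeOptions(copy_tokenizer=True),
)
```

The `mergekit-yaml config.yml ./out` CLI is the more common route; the sketch above mirrors what it does.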
fedovtt/354439fc-ae08-4b78-9b1f-8050db23f94d
fedovtt
"2025-04-19T17:12:10Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-04-19T16:58:33Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 354439fc-ae08-4b78-9b1f-8050db23f94d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 530fbab764a83a35_train_data.json ds_type: json format: custom path: /workspace/input_data/530fbab764a83a35_train_data.json type: field_input: intent field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: fedovtt/354439fc-ae08-4b78-9b1f-8050db23f94d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 4 mixed_precision: bf16 mlflow_experiment_name: /tmp/530fbab764a83a35_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9bdb3ced-7990-4e87-b385-102598d8fa24 wandb_project: 01-31 wandb_run: your_name wandb_runid: 9bdb3ced-7990-4e87-b385-102598d8fa24 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 354439fc-ae08-4b78-9b1f-8050db23f94d This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0169 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
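The card stops short of showing inference. A minimal sketch for attaching this LoRA adapter to its base model with PEFT follows; note that the NaN validation loss reported above suggests the adapter may not produce useful outputs, so treat this purely as a loading example:

```python
# Sketch only: load the LoRA adapter on top of its base model with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "fedovtt/354439fc-ae08-4b78-9b1f-8050db23f94d")

inputs = tokenizer("Hello! What can you do?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```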
anhht9824/Reinforce_cartpole_policy_gradient
anhht9824
"2025-04-19T17:11:49Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2025-04-19T17:11:25Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce_cartpole_policy_gradient results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 511.95 +/- 4.40 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Betha/fen_understanding_v1_r16
Betha
"2025-04-19T17:08:30Z"
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it", "base_model:finetune:unsloth/gemma-3-4b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-04-19T01:38:07Z"
--- base_model: unsloth/gemma-3-4b-it tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Betha - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
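No usage snippet is included above. A hedged sketch via the `image-text-to-text` pipeline follows (assuming a transformers release with Gemma 3 support; the image URL and prompt are placeholders, chosen because the repo name suggests chess FEN understanding):

```python
# Sketch only: run this Gemma 3 fine-tune through the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Betha/fen_understanding_v1_r16")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/board.png"},  # placeholder image
            {"type": "text", "text": "Give the FEN string for this chess position."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```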
s1ghhh/Qwen2.5-1.5B-Verilog-2-GRPO
s1ghhh
"2025-04-19T17:04:45Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "verilog", "trl", "grpo", "conversational", "dataset:LLM-EDA/opencores", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:19:28Z"
--- base_model: Qwen/Qwen2.5-1.5B-Instruct datasets: LLM-EDA/opencores library_name: transformers model_name: Qwen2.5-1.5B-Verilog-2-GRPO tags: - generated_from_trainer - open-r1 - verilog - trl - grpo licence: license --- # Model Card for Qwen2.5-1.5B-Verilog-2-GRPO This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [LLM-EDA/opencores](https://huggingface.co/datasets/LLM-EDA/opencores) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="s1ghhh/Qwen2.5-1.5B-Verilog-2-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ywang144-university-of-maryland/huggingface/runs/0ygyapq7) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jairosolare/Rough_Doggy_SDXLv1_LoRa
jairosolare
"2025-04-19T17:01:42Z"
0
0
null
[ "region:us" ]
null
"2025-04-19T16:55:51Z"
Pose LoRA for SDXL 1.0, trained on Caligula XL; also works with Biglust 1.6. Recommended weight: 0.5-0.9. Creates a variety of rough doggy-style positions from a POV / first-person perspective. Trigger words: "rough doggy", "rough_doggy", "rough doggystyle", "pov rough doggystyle hands gripping neck". Credit to the original creator; it's a good one: https://civitai.com/models/801804/rough-doggy
owenpastor21/newLawMistral2
owenpastor21
"2025-04-19T17:00:48Z"
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T16:57:56Z"
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** owenpastor21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF
mradermacher
"2025-04-19T17:00:07Z"
0
0
transformers
[ "transformers", "gguf", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-04-19T11:49:07Z"
--- base_model: ReadyArt/Omega-Darker_The-Final-Transgression-22B language: - en library_name: transformers license: other license_name: mrl quantized_by: mradermacher tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Transgression-22B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 7.8 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* | | 
[GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q4_1.gguf) | i1-Q4_1 | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Transgression-22B.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
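For readers who want a single file rather than the whole repo, here is a small sketch using `huggingface_hub`; the Q4_K_M file marked "recommended" in the table above is used as the example:

```python
# Sketch only: fetch one imatrix quant from this repo.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF",
    filename="Omega-Darker_The-Final-Transgression-22B.i1-Q4_K_M.gguf",
)
print(path)  # local path, ready to pass to any GGUF-capable runtime
```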
birder-project/hiera_abswin_base_mim-intermediate-eu-common
birder-project
"2025-04-19T16:56:49Z"
0
0
birder
[ "birder", "image-classification", "pytorch", "arxiv:2306.00989", "arxiv:2311.05613", "base_model:birder-project/hiera_abswin_base_mim", "base_model:finetune:birder-project/hiera_abswin_base_mim", "license:apache-2.0", "region:us" ]
image-classification
"2025-04-19T16:51:10Z"
--- tags: - image-classification - birder - pytorch library_name: birder license: apache-2.0 base_model: - birder-project/hiera_abswin_base_mim --- # Model Card for hiera_abswin_base_mim-intermediate-eu-common A Hiera image classification model. The model follows a three-stage training process: first, masked image modeling; next, intermediate training on a large-scale dataset containing diverse bird species from around the world; and finally, fine-tuning specifically on the `eu-common` dataset. The species list is derived from the Collins bird guide [^1]. [^1]: Svensson, L., Mullarney, K., & Zetterström, D. (2022). Collins bird guide (3rd ed.). London, England: William Collins. ## Model Details - **Model Type:** Image classification and detection backbone - **Model Stats:** - Params (M): 51.1 - Input image size: 384 x 384 - **Dataset:** eu-common (707 classes) - Intermediate training involved ~6000 species from Asia, Europe, and Africa - **Papers:** - Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles: <https://arxiv.org/abs/2306.00989> - Window Attention is Bugged: How not to Interpolate Position Embeddings: <https://arxiv.org/abs/2311.05613> ## Model Usage ### Image Classification ```python import birder from birder.inference.classification import infer_image (net, model_info) = birder.load_pretrained_model("hiera_abswin_base_mim-intermediate-eu-common", inference=True) # Get the image size the model was trained on size = birder.get_size_from_signature(model_info.signature) # Create an inference transform transform = birder.classification_transform(size, model_info.rgb_stats) image = "path/to/image.jpeg" # or a PIL image, must be loaded in RGB format (out, _) = infer_image(net, image, transform) # out is a NumPy array with shape (1, 707), representing class probabilities.
``` ### Image Embeddings ```python import birder from birder.inference.classification import infer_image (net, model_info) = birder.load_pretrained_model("hiera_abswin_base_mim-intermediate-eu-common", inference=True) # Get the image size the model was trained on size = birder.get_size_from_signature(model_info.signature) # Create an inference transform transform = birder.classification_transform(size, model_info.rgb_stats) image = "path/to/image.jpeg" # or a PIL image (out, embedding) = infer_image(net, image, transform, return_embedding=True) # embedding is a NumPy array with shape of (1, 768) ``` ### Detection Feature Map ```python from PIL import Image import birder (net, model_info) = birder.load_pretrained_model("hiera_abswin_base_mim-intermediate-eu-common", inference=True) # Get the image size the model was trained on size = birder.get_size_from_signature(model_info.signature) # Create an inference transform transform = birder.classification_transform(size, model_info.rgb_stats) image = Image.open("path/to/image.jpeg") features = net.detection_features(transform(image).unsqueeze(0)) # features is a dict (stage name -> torch.Tensor) print([(k, v.size()) for k, v in features.items()]) # Output example: # [('stage1', torch.Size([1, 96, 96, 96])), # ('stage2', torch.Size([1, 192, 48, 48])), # ('stage3', torch.Size([1, 384, 24, 24])), # ('stage4', torch.Size([1, 768, 12, 12]))] ``` ## Citation ```bibtex @misc{ryali2023hierahierarchicalvisiontransformer, title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles}, author={Chaitanya Ryali and Yuan-Ting Hu and Daniel Bolya and Chen Wei and Haoqi Fan and Po-Yao Huang and Vaibhav Aggarwal and Arkabandhu Chowdhury and Omid Poursaeed and Judy Hoffman and Jitendra Malik and Yanghao Li and Christoph Feichtenhofer}, year={2023}, eprint={2306.00989}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2306.00989}, } @misc{bolya2023windowattentionbuggedinterpolate, title={Window Attention is Bugged: How not to Interpolate Position Embeddings}, author={Daniel Bolya and Chaitanya Ryali and Judy Hoffman and Christoph Feichtenhofer}, year={2023}, eprint={2311.05613}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2311.05613}, } ```
coast01/LVSM
coast01
"2025-04-19T16:55:55Z"
0
0
null
[ "arxiv:2410.17242", "region:us" ]
null
"2025-04-02T23:31:03Z"
paper: https://arxiv.org/abs/2410.17242
Io2007/gemma-3-1b-big
Io2007
"2025-04-19T16:54:51Z"
14
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "mergekit", "merge", "conversational", "base_model:NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0", "base_model:merge:NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0", "base_model:google/gemma-3-1b-it", "base_model:merge:google/gemma-3-1b-it", "base_model:google/gemma-3-1b-pt", "base_model:merge:google/gemma-3-1b-pt", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-12T18:10:05Z"
--- base_model: - NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0 - google/gemma-3-1b-it - google/gemma-3-1b-pt library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the Passthrough merge method. ### Models Merged The following models were included in the merge: * [NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0](https://huggingface.co/NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0) * [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) * [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 2] model: google/gemma-3-1b-pt - sources: - layer_range: [3, 25] model: google/gemma-3-1b-it - sources: - layer_range: [10, 14] model: NuclearAi/Nuke_X_Gemma3_1B_Reasoner_v1.0 - sources: - layer_range: [24, 26] model: google/gemma-3-1b-pt ```
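### Usage

For completeness, a minimal loading sketch with 🤗 Transformers (the repo id comes from this listing; this assumes a transformers release with Gemma 3 support, and the prompt and generation settings are purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Io2007/gemma-3-1b-big"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Briefly explain what a passthrough merge is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```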
ykarout/phi-4-deepseek-r1-distilled-v2
ykarout
"2025-04-19T16:54:11Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-19T16:53:56Z"
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ykarout - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Technicalnvj/Gpu
Technicalnvj
"2025-04-19T16:54:03Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-19T16:54:03Z"
--- license: apache-2.0 ---
lex-au/Orpheus-3b-Kaya-Q2_K.gguf
lex-au
"2025-04-19T16:53:33Z"
0
0
null
[ "gguf", "llama", "text-to-speech", "tts", "audio", "speech-synthesis", "orpheus", "unsloth", "en", "dataset:lex-au/Orpheus-3b-Kaya", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-to-speech
"2025-04-19T16:40:57Z"
--- language: en tags: - text-to-speech - tts - audio - speech-synthesis - orpheus - gguf - unsloth license: apache-2.0 datasets: - lex-au/Orpheus-3b-Kaya --- # Orpheus-3b-Kaya-Q2_K This is a **fine-tuned version** of the pretrained model [canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained), trained on a custom voice dataset and quantised to GGUF Q2_K format for fast, efficient inference. --- ## 🔧 Model Details - **Model Type**: Text-to-Speech (TTS) - **Architecture**: Token-to-audio language model - **Parameters**: ~3 billion - **Quantisation**: 2-bit GGUF (Q2_K) - **Sampling Rate**: 24kHz mono - **Training Epochs**: 1 - **Training Dataset**: [lex-au/Orpheus-3b-Kaya](https://huggingface.co/datasets/lex-au/Orpheus-3b-Kaya) - **Languages**: English --- ## 🚀 Quick Usage This model is designed for use with [Orpheus-FastAPI](https://github.com/Lex-au/Orpheus-FastAPI), an OpenAI-compatible inference server for text-to-speech generation. ### Compatible Inference Servers You can load this model into: - [GPUStack](https://github.com/gpustack/gpustack) - [LM Studio](https://lmstudio.ai/) - [llama.cpp](https://github.com/ggerganov/llama.cpp) - Any other GGUF-compatible OpenAI-style server ## 📜 License Apache License 2.0 — free for research and commercial use. --- ## 🙌 Credits - Original model by: [Canopy Labs](https://huggingface.co/canopylabs/orpheus-3b-0.1-pt) - Fine-tuned, quantised, and API-wrapped by: [Lex-au](https://huggingface.co/lex-au) via [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) --- ## 📚 Citation ``` @misc{orpheus-tts-2025, author = {Canopy Labs}, title = {Orpheus-3b-0.1-pt: Pretrained Text-to-Speech Model}, year = {2025}, publisher = {HuggingFace}, howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-pt}} } @misc{orpheus-kaya-2025, author = {Lex-au}, title = {Orpheus-3b-Kaya-Q2_K: Fine-Tuned TTS Model (Quantised)}, note = {Fine-tuned from canopylabs/orpheus-3b-0.1-pt}, year = {2025}, publisher = {HuggingFace}, howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-Kaya-Q2_K}} } ```
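## 🧪 Example Client Request

A minimal client-side sketch for querying an OpenAI-compatible TTS server such as Orpheus-FastAPI once it is running with this model. The host, port, endpoint path, model alias, and voice name below are assumptions for illustration — substitute the values from your server's configuration:

```python
import requests

# Assumed local endpoint — adjust host/port/path to match your server setup
resp = requests.post(
    "http://localhost:5005/v1/audio/speech",
    json={
        "model": "orpheus",                    # assumed model alias on the server
        "input": "Hello from the Kaya voice.",
        "voice": "kaya",                       # assumed voice name for this fine-tune
    },
    timeout=120,
)
resp.raise_for_status()

# Write the returned audio bytes to disk
with open("output.wav", "wb") as f:
    f.write(resp.content)
```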
lex-au/Orpheus-3b-Kaya-Q4_K_M.gguf
lex-au
"2025-04-19T16:52:48Z"
0
0
null
[ "gguf", "llama", "text-to-speech", "tts", "audio", "speech-synthesis", "orpheus", "unsloth", "en", "dataset:lex-au/Orpheus-3b-Kaya", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-to-speech
"2025-04-19T16:38:16Z"
--- language: en tags: - text-to-speech - tts - audio - speech-synthesis - orpheus - gguf - unsloth license: apache-2.0 datasets: - lex-au/Orpheus-3b-Kaya --- # Orpheus-3b-Kaya-Q4_K_M This is a **fine-tuned version** of the pretrained model [canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained), trained on a custom voice dataset and quantised to GGUF Q4_K_M format for fast, efficient inference. --- ## 🔧 Model Details - **Model Type**: Text-to-Speech (TTS) - **Architecture**: Token-to-audio language model - **Parameters**: ~3 billion - **Quantisation**: 4-bit GGUF (Q4_K_M) - **Sampling Rate**: 24kHz mono - **Training Epochs**: 1 - **Training Dataset**: [lex-au/Orpheus-3b-Kaya](https://huggingface.co/datasets/lex-au/Orpheus-3b-Kaya) - **Languages**: English --- ## 🚀 Quick Usage This model is designed for use with [Orpheus-FastAPI](https://github.com/Lex-au/Orpheus-FastAPI), an OpenAI-compatible inference server for text-to-speech generation. ### Compatible Inference Servers You can load this model into: - [GPUStack](https://github.com/gpustack/gpustack) - [LM Studio](https://lmstudio.ai/) - [llama.cpp](https://github.com/ggerganov/llama.cpp) - Any other GGUF-compatible OpenAI-style server ## 📜 License Apache License 2.0 — free for research and commercial use. --- ## 🙌 Credits - Original model by: [Canopy Labs](https://huggingface.co/canopylabs/orpheus-3b-0.1-pt) - Fine-tuned, quantised, and API-wrapped by: [Lex-au](https://huggingface.co/lex-au) via [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) --- ## 📚 Citation ``` @misc{orpheus-tts-2025, author = {Canopy Labs}, title = {Orpheus-3b-0.1-pt: Pretrained Text-to-Speech Model}, year = {2025}, publisher = {HuggingFace}, howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-pt}} } @misc{orpheus-kaya-2025, author = {Lex-au}, title = {Orpheus-3b-Kaya-Q4_K_M: Fine-Tuned TTS Model (Quantised)}, note = {Fine-tuned from canopylabs/orpheus-3b-0.1-pt}, year = {2025}, publisher = {HuggingFace}, howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-Kaya-Q4_K_M}} } ```
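## 🧪 Loading the GGUF in Python

As a sketch of direct GGUF loading, the file can also be opened with llama-cpp-python (the file name is taken from this repo and is assumed to be a local download; note that the raw output is a stream of Orpheus audio tokens, so producing a playable waveform still requires an Orpheus decoding pipeline such as the server linked above):

```python
from llama_cpp import Llama

# Load the quantised checkpoint from a local file
llm = Llama(model_path="Orpheus-3b-Kaya-Q4_K_M.gguf", n_ctx=2048)

# This yields Orpheus audio tokens as text, not a playable waveform
out = llm("Hello from the Kaya voice.", max_tokens=256)
print(out["choices"][0]["text"])
```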
Lewdiculous/Violet_Magcap-12B-GGUF-IQ-Imatrix
Lewdiculous
"2025-04-19T16:52:46Z"
0
2
null
[ "gguf", "sillytavern", "presets", "mistral", "chatml", "roleplay", "conversational", "reasoning", "smart", "en", "base_model:Nitral-AI/Violet_Magcap-12B", "base_model:quantized:Nitral-AI/Violet_Magcap-12B", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
null
"2025-04-19T14:17:46Z"
--- language: - en base_model: - Nitral-AI/Violet_Magcap-12B tags: - sillytavern - presets - mistral - chatml - roleplay - conversational - reasoning - smart license: other --- <!-- > [!WARNING] > **Uploading...** <br> --> ### Hello there. Here I share my personal **GGUF-Imatrix** quants of [**Violet_Magcap-12B**](https://huggingface.co/Nitral-AI/Violet_Magcap-12B). <br> ``` sillytavern, presets, mistral, chatml, roleplay, conversational, reasoning, smart ``` > [!IMPORTANT] > "It will help you solve problems. It will also make you question your existence." <br> **"Use wisely—or don't."** <br> > ![image/webp](violet-magcap.webp) > > *Please check out the original model card as well for added context and model information.* > [!TIP] > **Discussions** <br> > - [General discussion and author feedback.](https://huggingface.co/Lewdiculous/Violet_Magcap-12B-GGUF-IQ-Imatrix/discussions/1) <br> > Feedback is always welcome for potential issues with quants and as a way to guide the author in the future iterations. <br> Your comments for them are appreciated! > [!NOTE] > **SillyTavern** <br> > - [[SillyTavern Presets]](https://huggingface.co/Lewdiculous/Violet_Magcap-12B-GGUF-IQ-Imatrix/tree/main/SillyTavern) <br> > Initially recommended master-import presets and an additional quick-replies set. <details> <summary>[Click Here] [Please Read] - Additional setup. </summary> Reasoning Block + Prefix: ![Reasoning Format](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Gb6aBgJ_PVU0nDMp40wHJ.png) ChatML Format: ![ChatML Format](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/cbb9qyoPs4cRKCzYU7jPN.png) Mistral Format: ![Mistral Format](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/IQO9dhOrAySbSPGyiEfmd.png) </details>
Pongsaky/llama3.2-typhoon2-1b-lora-unfreeze-embedding-phonetic
Pongsaky
"2025-04-19T16:52:37Z"
7
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-14T19:09:18Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheArtOfficialTrainer/cu128Torch128whls
TheArtOfficialTrainer
"2025-04-19T16:52:01Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-18T16:11:51Z"
--- license: apache-2.0 ---
gmongaras/datav3_attempt5_8GPU_SoftFlash_RoPE2d_2AccSteps_140batchsize_stage1
gmongaras
"2025-04-19T16:51:17Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2025-04-11T14:48:55Z"
--- license: mit --- Code for this model can be found at [https://github.com/gmongaras/Stable-Diffusion-3-From-Scratch](https://github.com/gmongaras/Stable-Diffusion-3-From-Scratch) A higher-quality fine-tuned model can be found at [https://huggingface.co/gmongaras/datav3_attempt5_8GPU_SoftFlash_RoPE2d_2AccSteps_40batchsize_stage2](https://huggingface.co/gmongaras/datav3_attempt5_8GPU_SoftFlash_RoPE2d_2AccSteps_40batchsize_stage2) Data can be found at [https://huggingface.co/datasets/gmongaras/CC12M_and_Imagenet21K_Recap_Highqual](https://huggingface.co/datasets/gmongaras/CC12M_and_Imagenet21K_Recap_Highqual) Checkpoints were fine-tuned with: - 8 GPUs - a batch size of 140 with 2 accumulation steps - flash attention - 2D RoPE - a 256x256 max resolution (any multiple of 16 up to 256x256 works)
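As a minimal sketch, the checkpoint files can be fetched with huggingface_hub and then loaded with the training code linked above (the repo's own loading API is not documented here, so only the download step is shown):

```python
from huggingface_hub import snapshot_download

# Download every checkpoint file in this repo to the local HF cache
local_dir = snapshot_download(
    repo_id="gmongaras/datav3_attempt5_8GPU_SoftFlash_RoPE2d_2AccSteps_140batchsize_stage1"
)
print("Checkpoint files downloaded to:", local_dir)
```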
mradermacher/spintax-generation-gemma-3-27b-GGUF
mradermacher
"2025-04-19T16:43:25Z"
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3", "en", "base_model:alecccdd/spintax-generation-gemma-3-27b", "base_model:quantized:alecccdd/spintax-generation-gemma-3-27b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T16:15:55Z"
--- base_model: alecccdd/spintax-generation-gemma-3-27b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - gemma3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/alecccdd/spintax-generation-gemma-3-27b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q5_K_M.gguf) | Q5_K_M | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/spintax-generation-gemma-3-27b-GGUF/resolve/main/spintax-generation-gemma-3-27b.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
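## Scripted Download

If you prefer to fetch a single quant programmatically rather than via the table links, a minimal sketch with huggingface_hub (the file name is copied from the Q4_K_M row above — swap in whichever quant you need):

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo into the local HF cache
path = hf_hub_download(
    repo_id="mradermacher/spintax-generation-gemma-3-27b-GGUF",
    filename="spintax-generation-gemma-3-27b.Q4_K_M.gguf",
)
print("GGUF file at:", path)
```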
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF
mradermacher
"2025-04-19T16:43:25Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:QomSSLab/Gemma3-Rhino-27B-SFTv1", "base_model:quantized:QomSSLab/Gemma3-Rhino-27B-SFTv1", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T16:16:28Z"
--- base_model: QomSSLab/Gemma3-Rhino-27B-SFTv1 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/QomSSLab/Gemma3-Rhino-27B-SFTv1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q5_K_M.gguf) | Q5_K_M | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gemma3-Rhino-27B-SFTv1-GGUF/resolve/main/Gemma3-Rhino-27B-SFTv1.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
pristinawang/tableQA-GRPO-Llama-3.2-1B-Instruct-20250417211115-step1600
pristinawang
"2025-04-19T16:35:49Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-19T16:35:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CarlosElArtista/vizdoom_health_gathering_supreme
CarlosElArtista
"2025-04-19T16:35:29Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-19T16:35:09Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.37 +/- 5.40 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r CarlosElArtista/vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note that you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
atharvanighot/Hindi-Llama-1
atharvanighot
"2025-04-19T16:33:18Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:atharvanighot/Tinyllama-Hindi", "base_model:finetune:atharvanighot/Tinyllama-Hindi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:30:11Z"
--- base_model: atharvanighot/Tinyllama-Hindi tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** atharvanighot - **License:** apache-2.0 - **Finetuned from model :** atharvanighot/Tinyllama-Hindi This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
genki10/BERT_V8_sp10_lw20_ex100_lo50_k3_k3_fold4
genki10
"2025-04-19T16:32:25Z"
0
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-19T16:16:29Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_V8_sp10_lw20_ex100_lo50_k3_k3_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_V8_sp10_lw20_ex100_lo50_k3_k3_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5829 - Qwk: 0.5289 - Mse: 0.5829 - Rmse: 0.7635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 2 | 9.2168 | 0.0018 | 9.2168 | 3.0359 | | No log | 2.0 | 4 | 7.7948 | 0.0 | 7.7948 | 2.7919 | | No log | 3.0 | 6 | 6.2371 | 0.0001 | 6.2371 | 2.4974 | | No log | 4.0 | 8 | 4.8456 | 0.0128 | 4.8456 | 2.2013 | | No log | 5.0 | 10 | 3.8436 | 0.0040 | 3.8436 | 1.9605 | | No log | 6.0 | 12 | 3.2774 | 0.0040 | 3.2774 | 1.8104 | | No log | 7.0 | 14 | 2.4375 | 0.0240 | 2.4375 | 1.5613 | | No log | 8.0 | 16 | 1.9757 | 0.0382 | 1.9757 | 1.4056 | | No log | 9.0 | 18 | 1.5611 | 0.0344 | 1.5611 | 1.2494 | | No log | 10.0 | 20 | 1.2785 | 0.0239 | 1.2785 | 1.1307 | | No log | 11.0 | 22 | 1.0030 | 0.0107 | 1.0030 | 1.0015 | | No log | 12.0 | 24 | 0.8342 | 0.3686 | 0.8342 | 0.9133 | | No log | 13.0 | 26 | 0.7308 | 0.3918 | 0.7308 | 0.8549 | | No log | 14.0 | 28 | 0.6769 | 0.4163 | 0.6769 | 0.8227 | | No log | 15.0 | 30 | 0.6143 | 0.3849 | 0.6143 | 0.7838 | | No log | 16.0 | 32 | 0.7527 | 0.3646 | 0.7527 | 0.8676 | | No log | 17.0 | 34 | 0.5235 | 0.4894 | 0.5235 | 0.7235 | | No log | 18.0 | 36 | 0.6185 | 0.3904 | 0.6185 | 0.7864 | | No log | 19.0 | 38 | 0.6081 | 0.5475 | 0.6081 | 0.7798 | | No log | 20.0 | 40 | 1.5078 | 0.3065 | 1.5078 | 1.2279 | | No log | 21.0 | 42 | 0.7970 | 0.4881 | 0.7970 | 0.8928 | | No log | 22.0 | 44 | 0.5119 | 0.6126 | 0.5119 | 0.7155 | | No log | 23.0 | 46 | 0.5050 | 0.5946 | 0.5050 | 0.7107 | | No log | 24.0 | 48 | 0.8772 | 0.4344 | 0.8772 | 0.9366 | | No log | 25.0 | 50 | 0.6363 | 0.5687 | 0.6363 | 0.7977 | | No log | 26.0 | 52 | 0.5126 | 0.5538 | 0.5126 | 0.7160 | | No log | 27.0 | 54 | 0.4978 | 0.5673 | 0.4978 | 0.7056 | | No log | 28.0 | 56 | 0.6470 | 0.5588 | 0.6470 | 0.8043 | | No log | 29.0 | 58 | 0.5346 | 0.5788 | 0.5346 | 0.7312 | | No log | 30.0 | 60 | 0.5112 | 0.5778 | 0.5112 | 0.7150 | | No log | 31.0 | 62 | 0.6089 | 0.5578 | 0.6089 | 0.7803 | | No log | 32.0 | 64 | 0.5124 | 0.6022 | 0.5124 | 0.7159 | | No log | 33.0 | 66 | 0.5126 | 0.5940 | 0.5126 | 0.7160 | | No log | 34.0 | 68 | 0.5423 | 0.5938 | 0.5423 | 0.7364 | | No log | 35.0 | 70 | 0.5101 | 0.6293 | 0.5101 | 0.7142 | | No log | 36.0 | 72 | 0.6849 | 0.5343 | 0.6849 | 0.8276 | | No log | 37.0 | 74 | 0.6398 | 0.5590 | 0.6398 | 0.7999 | | No log | 38.0 | 76 | 0.5121 | 0.6422 | 0.5121 | 
0.7156 | | No log | 39.0 | 78 | 0.5360 | 0.6097 | 0.5360 | 0.7321 | | No log | 40.0 | 80 | 0.7576 | 0.4934 | 0.7576 | 0.8704 | | No log | 41.0 | 82 | 0.6019 | 0.5423 | 0.6019 | 0.7758 | | No log | 42.0 | 84 | 0.5172 | 0.5893 | 0.5172 | 0.7192 | | No log | 43.0 | 86 | 0.5164 | 0.6104 | 0.5164 | 0.7186 | | No log | 44.0 | 88 | 0.6642 | 0.5370 | 0.6642 | 0.8150 | | No log | 45.0 | 90 | 0.5592 | 0.5779 | 0.5592 | 0.7478 | | No log | 46.0 | 92 | 0.5079 | 0.5891 | 0.5079 | 0.7127 | | No log | 47.0 | 94 | 0.5080 | 0.6023 | 0.5080 | 0.7127 | | No log | 48.0 | 96 | 0.6314 | 0.5094 | 0.6314 | 0.7946 | | No log | 49.0 | 98 | 0.5644 | 0.5379 | 0.5644 | 0.7513 | | No log | 50.0 | 100 | 0.5075 | 0.5974 | 0.5075 | 0.7124 | | No log | 51.0 | 102 | 0.5118 | 0.6115 | 0.5118 | 0.7154 | | No log | 52.0 | 104 | 0.5883 | 0.5441 | 0.5883 | 0.7670 | | No log | 53.0 | 106 | 0.5236 | 0.5995 | 0.5236 | 0.7236 | | No log | 54.0 | 108 | 0.5074 | 0.5957 | 0.5074 | 0.7123 | | No log | 55.0 | 110 | 0.5719 | 0.5428 | 0.5719 | 0.7562 | | No log | 56.0 | 112 | 0.5571 | 0.5418 | 0.5571 | 0.7464 | | No log | 57.0 | 114 | 0.5129 | 0.5845 | 0.5129 | 0.7161 | | No log | 58.0 | 116 | 0.5504 | 0.5675 | 0.5504 | 0.7419 | | No log | 59.0 | 118 | 0.6398 | 0.5196 | 0.6398 | 0.7999 | | No log | 60.0 | 120 | 0.5937 | 0.5500 | 0.5937 | 0.7705 | | No log | 61.0 | 122 | 0.5181 | 0.6172 | 0.5181 | 0.7198 | | No log | 62.0 | 124 | 0.5416 | 0.5688 | 0.5416 | 0.7359 | | No log | 63.0 | 126 | 0.5611 | 0.5415 | 0.5611 | 0.7491 | | No log | 64.0 | 128 | 0.5551 | 0.5372 | 0.5551 | 0.7450 | | No log | 65.0 | 130 | 0.5061 | 0.5724 | 0.5061 | 0.7114 | | No log | 66.0 | 132 | 0.5062 | 0.5827 | 0.5062 | 0.7115 | | No log | 67.0 | 134 | 0.5595 | 0.5500 | 0.5595 | 0.7480 | | No log | 68.0 | 136 | 0.5829 | 0.5289 | 0.5829 | 0.7635 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
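### Inference example

A minimal loading sketch, assuming the checkpoint was saved as a standard 🤗 Transformers sequence-classification model (the repo tags indicate text-classification; interpret the output score in light of the Qwk/MSE training setup above):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "genki10/BERT_V8_sp10_lw20_ex100_lo50_k3_k3_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single input text
inputs = tokenizer("An example essay to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```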
PROFESSORSDR/gpu
PROFESSORSDR
"2025-04-19T16:30:46Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2025-04-19T16:30:46Z"
--- license: creativeml-openrail-m ---
cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF
cs2764
"2025-04-19T16:29:52Z"
0
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "base_model:huihui-ai/GLM-4-32B-0414-abliterated", "base_model:quantized:huihui-ai/GLM-4-32B-0414-abliterated", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-19T16:27:13Z"
--- base_model: huihui-ai/GLM-4-32B-0414-abliterated language: - zh - en library_name: transformers license: mit pipeline_tag: text-generation tags: - abliterated - uncensored - llama-cpp - gguf-my-repo --- # cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/GLM-4-32B-0414-abliterated`](https://huggingface.co/huihui-ai/GLM-4-32B-0414-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/GLM-4-32B-0414-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF --hf-file glm-4-32b-0414-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF --hf-file glm-4-32b-0414-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF --hf-file glm-4-32b-0414-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo cs2764/GLM-4-32B-0414-abliterated-Q8_0-GGUF --hf-file glm-4-32b-0414-abliterated-q8_0.gguf -c 2048 ```
xw17/Qwen2-1.5B-Instruct_finetuned_4_optimized_lora_activity_origin
xw17
"2025-04-19T16:27:54Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-19T16:27:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
middaycatch0p/video.de.la.chama
middaycatch0p
"2025-04-19T16:27:32Z"
0
0
null
[ "region:us" ]
null
"2025-04-19T16:25:30Z"
<a href="https://skyhighway.sbs/tuhydb"> 🌐 Click Here To link (video de la chama la chama chiquita video NUYU MIEYA VIRAL) 🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://skyhighway.sbs/tuhydb"> 🌐 video de la chama la chama chiquita video NUYU MIEYA VIRAL
SweUmaVarsh/hi-sa-m2m100
SweUmaVarsh
"2025-04-19T16:25:11Z"
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-04-19T16:24:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuchenxie/ArlowGPT-400M-Base
yuchenxie
"2025-04-19T16:23:39Z"
0
0
transformers
[ "transformers", "safetensors", "arlow", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:15:08Z"
--- license: apache-2.0 language: - en library_name: transformers ---
Speedsy/ytu-turkish-bert-tiny-uncased21-4000
Speedsy
"2025-04-19T16:21:04Z"
0
0
PyLate
[ "PyLate", "safetensors", "bert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:798036", "loss:Distillation", "en", "dataset:Speedsy/ms-marco-tr-bge", "arxiv:1908.10084", "base_model:ytu-ce-cosmos/turkish-tiny-bert-uncased", "base_model:finetune:ytu-ce-cosmos/turkish-tiny-bert-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-04-19T16:21:02Z"
--- language: - en tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:798036 - loss:Distillation base_model: ytu-ce-cosmos/turkish-tiny-bert-uncased datasets: - Speedsy/ms-marco-tr-bge pipeline_tag: sentence-similarity library_name: PyLate --- # PyLate model based on ytu-ce-cosmos/turkish-tiny-bert-uncased This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) on the [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) <!-- at revision 208794046047fc7445f7a4179636423802691268 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 dimensions - **Similarity Function:** MaxSim - **Training Dataset:** - [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture ``` ColBERT( (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel (1): Dense({'in_features': 128, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage First install the PyLate library: ```bash pip install -U pylate ``` ### Retrieval PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents: ```python from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( model_name_or_path=pylate_model_id, ) # Step 2: Initialize the Voyager index index = indexes.Voyager( index_folder="pylate-index", index_name="index", override=True, # This overwrites the existing index if any ) # Step 3: Encode the documents documents_ids = ["1", "2", "3"] documents = ["document 1 text", "document 2 text", "document 3 text"] documents_embeddings = model.encode( documents, batch_size=32, is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries show_progress_bar=True, ) # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids index.add_documents( documents_ids=documents_ids, documents_embeddings=documents_embeddings, ) ``` Note that you do not have to recreate the index and encode the documents every time. 
Once you have created an index and added the documents, you can re-use the index later by loading it: ```python # To load an index, simply instantiate it with the correct folder/name and without overriding it index = indexes.Voyager( index_folder="pylate-index", index_name="index", ) ``` #### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches: ```python # Step 1: Initialize the ColBERT retriever retriever = retrieve.ColBERT(index=index) # Step 2: Encode the queries queries_embeddings = model.encode( ["query for document 3", "query for document 1"], batch_size=32, is_query=True, # Ensure that it is set to True to indicate that these are queries show_progress_bar=True, ) # Step 3: Retrieve top-k documents scores = retriever.retrieve( queries_embeddings=queries_embeddings, k=10, # Retrieve the top 10 matches for each query ) ``` ### Reranking If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank: ```python from pylate import rank, models queries = [ "query A", "query B", ] documents = [ ["document A", "document B"], ["document 1", "document C", "document B"], ] documents_ids = [ [1, 2], [1, 3, 2], ] model = models.ColBERT( model_name_or_path=pylate_model_id, ) queries_embeddings = model.encode( queries, is_query=True, ) documents_embeddings = model.encode( documents, is_query=False, ) reranked_documents = rank.rerank( documents_ids=documents_ids, queries_embeddings=queries_embeddings, documents_embeddings=documents_embeddings, ) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) at [b9b0f7f](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge/tree/b9b0f7fd13c3ce3b632a3a1cd37f6ddbf8a040f5) * Size: 798,036 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 4 tokens</li><li>mean: 6.18 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:---------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------| | <code>817836</code> | <code>['2716076', '6741935', '2681109', '5562684', '3507339', ...]</code> | <code>[1.0, 0.7059561610221863, 0.21702419221401215, 0.38270196318626404, 0.20812414586544037, ...]</code> | | <code>1045170</code> | <code>['5088671', '2953295', '8783471', '4268439', '6339935', ...]</code> | <code>[1.0, 0.6493034362792969, 0.0692221149802208, 0.17963139712810516, 0.6697239875793457, ...]</code> | | <code>1154488</code> | <code>['6498614', '3770829', '1060712', '2590533', '7672044', ...]</code> | <code>[0.9497447609901428, 0.6662212610244751, 0.7423420548439026, 1.0, 0.6580896973609924, ...]</code> | * Loss: <code>pylate.losses.distillation.Distillation</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: 
False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0100 | 500 | 0.046 | | 0.0200 | 1000 | 0.0443 | | 0.0301 | 1500 | 0.0441 | | 0.0401 | 2000 | 0.0439 | | 0.0501 | 2500 | 0.0435 | | 0.0601 | 3000 | 0.0427 | | 0.0702 | 3500 | 0.0425 | | 0.0802 | 4000 | 0.0424 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
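The card above scores query–document pairs with the MaxSim operator but never spells it out. As a rough illustration only (the function below is ours, not part of PyLate's API), late-interaction scoring takes, for each query token, its best similarity against the document's tokens and sums those maxima over the query:

```python
import torch

def maxsim_score(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    """Illustrative MaxSim between one query and one document.

    query_embeddings:    (num_query_tokens, 128)
    document_embeddings: (num_doc_tokens, 128)
    """
    # Similarity of every query token against every document token
    similarities = query_embeddings @ document_embeddings.T
    # Keep the best-matching document token per query token, then sum over the query
    return similarities.max(dim=1).values.sum()
```

With normalized token embeddings these dot products behave like cosine similarities, which is why a single well-matched document token can contribute strongly to the overall score.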
Speedsy/ytu-turkish-bert-tiny-uncased21-2500
Speedsy
"2025-04-19T16:20:53Z"
0
0
PyLate
[ "PyLate", "safetensors", "bert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:798036", "loss:Distillation", "en", "dataset:Speedsy/ms-marco-tr-bge", "arxiv:1908.10084", "base_model:ytu-ce-cosmos/turkish-tiny-bert-uncased", "base_model:finetune:ytu-ce-cosmos/turkish-tiny-bert-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-04-19T16:20:50Z"
--- language: - tr tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:798036 - loss:Distillation base_model: ytu-ce-cosmos/turkish-tiny-bert-uncased datasets: - Speedsy/ms-marco-tr-bge pipeline_tag: sentence-similarity library_name: PyLate --- # PyLate model based on ytu-ce-cosmos/turkish-tiny-bert-uncased This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) on the [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) <!-- at revision 208794046047fc7445f7a4179636423802691268 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 dimensions per token - **Similarity Function:** MaxSim - **Training Dataset:** - [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) - **Language:** tr <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture ``` ColBERT( (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel (1): Dense({'in_features': 128, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage First install the PyLate library: ```bash pip install -U pylate ``` ### Retrieval PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents: ```python from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( model_name_or_path=pylate_model_id, ) # Step 2: Initialize the Voyager index index = indexes.Voyager( index_folder="pylate-index", index_name="index", override=True, # This overwrites the existing index if any ) # Step 3: Encode the documents documents_ids = ["1", "2", "3"] documents = ["document 1 text", "document 2 text", "document 3 text"] documents_embeddings = model.encode( documents, batch_size=32, is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries show_progress_bar=True, ) # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids index.add_documents( documents_ids=documents_ids, documents_embeddings=documents_embeddings, ) ``` Note that you do not have to recreate the index and encode the documents every time.
Once you have created an index and added the documents, you can re-use the index later by loading it: ```python # To load an index, simply instantiate it with the correct folder/name and without overriding it index = indexes.Voyager( index_folder="pylate-index", index_name="index", ) ``` #### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches: ```python # Step 1: Initialize the ColBERT retriever retriever = retrieve.ColBERT(index=index) # Step 2: Encode the queries queries_embeddings = model.encode( ["query for document 3", "query for document 1"], batch_size=32, is_query=True, # Ensure that it is set to True to indicate that these are queries, not documents show_progress_bar=True, ) # Step 3: Retrieve top-k documents scores = retriever.retrieve( queries_embeddings=queries_embeddings, k=10, # Retrieve the top 10 matches for each query ) ``` ### Reranking If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank: ```python from pylate import rank, models queries = [ "query A", "query B", ] documents = [ ["document A", "document B"], ["document 1", "document C", "document B"], ] documents_ids = [ [1, 2], [1, 3, 2], ] model = models.ColBERT( model_name_or_path=pylate_model_id, ) queries_embeddings = model.encode( queries, is_query=True, ) documents_embeddings = model.encode( documents, is_query=False, ) reranked_documents = rank.rerank( documents_ids=documents_ids, queries_embeddings=queries_embeddings, documents_embeddings=documents_embeddings, ) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) at [b9b0f7f](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge/tree/b9b0f7fd13c3ce3b632a3a1cd37f6ddbf8a040f5) * Size: 798,036 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 4 tokens</li><li>mean: 6.18 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:---------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------| | <code>817836</code> | <code>['2716076', '6741935', '2681109', '5562684', '3507339', ...]</code> | <code>[1.0, 0.7059561610221863, 0.21702419221401215, 0.38270196318626404, 0.20812414586544037, ...]</code> | | <code>1045170</code> | <code>['5088671', '2953295', '8783471', '4268439', '6339935', ...]</code> | <code>[1.0, 0.6493034362792969, 0.0692221149802208, 0.17963139712810516, 0.6697239875793457, ...]</code> | | <code>1154488</code> | <code>['6498614', '3770829', '1060712', '2590533', '7672044', ...]</code> | <code>[0.9497447609901428, 0.6662212610244751, 0.7423420548439026, 1.0, 0.6580896973609924, ...]</code> | * Loss: <code>pylate.losses.distillation.Distillation</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: 
False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0100 | 500 | 0.046 | | 0.0200 | 1000 | 0.0443 | | 0.0301 | 1500 | 0.0441 | | 0.0401 | 2000 | 0.0439 | | 0.0501 | 2500 | 0.0435 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
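For context on the `loss:Distillation` tag in the card above: each training sample pairs a query with 32 candidate documents and 32 teacher relevance scores, and the student model is pushed toward the teacher's score distribution. The sketch below is only a minimal rendering of that idea; PyLate's actual `Distillation` class may differ in details such as score normalization:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor) -> torch.Tensor:
    """student_scores, teacher_scores: (batch_size, 32) — one row of
    scores per query over its candidate documents."""
    student_log_probs = F.log_softmax(student_scores, dim=-1)
    teacher_probs = F.softmax(teacher_scores, dim=-1)
    # KL divergence pulls the student's ranking distribution toward the teacher's
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```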
Speedsy/ytu-turkish-bert-tiny-uncased21-1500
Speedsy
"2025-04-19T16:20:46Z"
0
0
PyLate
[ "PyLate", "safetensors", "bert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:798036", "loss:Distillation", "en", "dataset:Speedsy/ms-marco-tr-bge", "arxiv:1908.10084", "base_model:ytu-ce-cosmos/turkish-tiny-bert-uncased", "base_model:finetune:ytu-ce-cosmos/turkish-tiny-bert-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-04-19T16:20:43Z"
--- language: - tr tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:798036 - loss:Distillation base_model: ytu-ce-cosmos/turkish-tiny-bert-uncased datasets: - Speedsy/ms-marco-tr-bge pipeline_tag: sentence-similarity library_name: PyLate --- # PyLate model based on ytu-ce-cosmos/turkish-tiny-bert-uncased This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) on the [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) <!-- at revision 208794046047fc7445f7a4179636423802691268 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 dimensions per token - **Similarity Function:** MaxSim - **Training Dataset:** - [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) - **Language:** tr <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture ``` ColBERT( (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel (1): Dense({'in_features': 128, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage First install the PyLate library: ```bash pip install -U pylate ``` ### Retrieval PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents: ```python from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( model_name_or_path=pylate_model_id, ) # Step 2: Initialize the Voyager index index = indexes.Voyager( index_folder="pylate-index", index_name="index", override=True, # This overwrites the existing index if any ) # Step 3: Encode the documents documents_ids = ["1", "2", "3"] documents = ["document 1 text", "document 2 text", "document 3 text"] documents_embeddings = model.encode( documents, batch_size=32, is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries show_progress_bar=True, ) # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids index.add_documents( documents_ids=documents_ids, documents_embeddings=documents_embeddings, ) ``` Note that you do not have to recreate the index and encode the documents every time.
Once you have created an index and added the documents, you can re-use the index later by loading it: ```python # To load an index, simply instantiate it with the correct folder/name and without overriding it index = indexes.Voyager( index_folder="pylate-index", index_name="index", ) ``` #### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches: ```python # Step 1: Initialize the ColBERT retriever retriever = retrieve.ColBERT(index=index) # Step 2: Encode the queries queries_embeddings = model.encode( ["query for document 3", "query for document 1"], batch_size=32, is_query=True, # Ensure that it is set to True to indicate that these are queries, not documents show_progress_bar=True, ) # Step 3: Retrieve top-k documents scores = retriever.retrieve( queries_embeddings=queries_embeddings, k=10, # Retrieve the top 10 matches for each query ) ``` ### Reranking If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank: ```python from pylate import rank, models queries = [ "query A", "query B", ] documents = [ ["document A", "document B"], ["document 1", "document C", "document B"], ] documents_ids = [ [1, 2], [1, 3, 2], ] model = models.ColBERT( model_name_or_path=pylate_model_id, ) queries_embeddings = model.encode( queries, is_query=True, ) documents_embeddings = model.encode( documents, is_query=False, ) reranked_documents = rank.rerank( documents_ids=documents_ids, queries_embeddings=queries_embeddings, documents_embeddings=documents_embeddings, ) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) at [b9b0f7f](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge/tree/b9b0f7fd13c3ce3b632a3a1cd37f6ddbf8a040f5) * Size: 798,036 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 4 tokens</li><li>mean: 6.18 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:---------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------| | <code>817836</code> | <code>['2716076', '6741935', '2681109', '5562684', '3507339', ...]</code> | <code>[1.0, 0.7059561610221863, 0.21702419221401215, 0.38270196318626404, 0.20812414586544037, ...]</code> | | <code>1045170</code> | <code>['5088671', '2953295', '8783471', '4268439', '6339935', ...]</code> | <code>[1.0, 0.6493034362792969, 0.0692221149802208, 0.17963139712810516, 0.6697239875793457, ...]</code> | | <code>1154488</code> | <code>['6498614', '3770829', '1060712', '2590533', '7672044', ...]</code> | <code>[0.9497447609901428, 0.6662212610244751, 0.7423420548439026, 1.0, 0.6580896973609924, ...]</code> | * Loss: <code>pylate.losses.distillation.Distillation</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: 
False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0100 | 500 | 0.046 | | 0.0200 | 1000 | 0.0443 | | 0.0301 | 1500 | 0.0441 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Speedsy/ytu-turkish-bert-tiny-uncased21-500
Speedsy
"2025-04-19T16:20:39Z"
0
0
PyLate
[ "PyLate", "safetensors", "bert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:798036", "loss:Distillation", "en", "dataset:Speedsy/ms-marco-tr-bge", "arxiv:1908.10084", "base_model:ytu-ce-cosmos/turkish-tiny-bert-uncased", "base_model:finetune:ytu-ce-cosmos/turkish-tiny-bert-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-04-19T16:20:36Z"
--- language: - tr tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:798036 - loss:Distillation base_model: ytu-ce-cosmos/turkish-tiny-bert-uncased datasets: - Speedsy/ms-marco-tr-bge pipeline_tag: sentence-similarity library_name: PyLate --- # PyLate model based on ytu-ce-cosmos/turkish-tiny-bert-uncased This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) on the [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [ytu-ce-cosmos/turkish-tiny-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased) <!-- at revision 208794046047fc7445f7a4179636423802691268 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 dimensions per token - **Similarity Function:** MaxSim - **Training Dataset:** - [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) - **Language:** tr <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture ``` ColBERT( (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel (1): Dense({'in_features': 128, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage First install the PyLate library: ```bash pip install -U pylate ``` ### Retrieval PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents: ```python from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( model_name_or_path=pylate_model_id, ) # Step 2: Initialize the Voyager index index = indexes.Voyager( index_folder="pylate-index", index_name="index", override=True, # This overwrites the existing index if any ) # Step 3: Encode the documents documents_ids = ["1", "2", "3"] documents = ["document 1 text", "document 2 text", "document 3 text"] documents_embeddings = model.encode( documents, batch_size=32, is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries show_progress_bar=True, ) # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids index.add_documents( documents_ids=documents_ids, documents_embeddings=documents_embeddings, ) ``` Note that you do not have to recreate the index and encode the documents every time.
Once you have created an index and added the documents, you can re-use the index later by loading it: ```python # To load an index, simply instantiate it with the correct folder/name and without overriding it index = indexes.Voyager( index_folder="pylate-index", index_name="index", ) ``` #### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches: ```python # Step 1: Initialize the ColBERT retriever retriever = retrieve.ColBERT(index=index) # Step 2: Encode the queries queries_embeddings = model.encode( ["query for document 3", "query for document 1"], batch_size=32, is_query=True, # Ensure that it is set to True to indicate that these are queries, not documents show_progress_bar=True, ) # Step 3: Retrieve top-k documents scores = retriever.retrieve( queries_embeddings=queries_embeddings, k=10, # Retrieve the top 10 matches for each query ) ``` ### Reranking If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank: ```python from pylate import rank, models queries = [ "query A", "query B", ] documents = [ ["document A", "document B"], ["document 1", "document C", "document B"], ] documents_ids = [ [1, 2], [1, 3, 2], ] model = models.ColBERT( model_name_or_path=pylate_model_id, ) queries_embeddings = model.encode( queries, is_query=True, ) documents_embeddings = model.encode( documents, is_query=False, ) reranked_documents = rank.rerank( documents_ids=documents_ids, queries_embeddings=queries_embeddings, documents_embeddings=documents_embeddings, ) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) at [b9b0f7f](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge/tree/b9b0f7fd13c3ce3b632a3a1cd37f6ddbf8a040f5) * Size: 798,036 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 4 tokens</li><li>mean: 6.18 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:---------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------| | <code>817836</code> | <code>['2716076', '6741935', '2681109', '5562684', '3507339', ...]</code> | <code>[1.0, 0.7059561610221863, 0.21702419221401215, 0.38270196318626404, 0.20812414586544037, ...]</code> | | <code>1045170</code> | <code>['5088671', '2953295', '8783471', '4268439', '6339935', ...]</code> | <code>[1.0, 0.6493034362792969, 0.0692221149802208, 0.17963139712810516, 0.6697239875793457, ...]</code> | | <code>1154488</code> | <code>['6498614', '3770829', '1060712', '2590533', '7672044', ...]</code> | <code>[0.9497447609901428, 0.6662212610244751, 0.7423420548439026, 1.0, 0.6580896973609924, ...]</code> | * Loss: <code>pylate.losses.distillation.Distillation</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: 
False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0100 | 500 | 0.046 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
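If you only need similarity scores for a handful of query–document pairs, you can combine the encode calls shown in the card with MaxSim directly, without an index or reranker. A hypothetical sketch, assuming encode returns per-token embedding arrays as in the snippets above:

```python
import numpy as np
from pylate import models

model = models.ColBERT(model_name_or_path="Speedsy/ytu-turkish-bert-tiny-uncased21-500")

query_tokens = np.asarray(model.encode(["ankara nerede"], is_query=True)[0])
doc_tokens = np.asarray(model.encode(["Ankara, Türkiye'nin başkentidir."], is_query=False)[0])

# MaxSim: best document token per query token, summed over the query tokens
score = (query_tokens @ doc_tokens.T).max(axis=1).sum()
print(score)
```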
netcat420/MFANN3bv2.0-BETA1
netcat420
"2025-04-19T16:20:33Z"
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:17:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
2568wisitsak/gemma-3-12b-it-unsloth-bnb-4bit
2568wisitsak
"2025-04-19T16:20:30Z"
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T16:07:36Z"
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** 2568wisitsak - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
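The card stops at training provenance, so here is a minimal inference sketch. It assumes a transformers version with Gemma 3 support and `bitsandbytes` installed for the 4-bit weights; treat it as a starting point rather than a verified recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "2568wisitsak/gemma-3-12b-it-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```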
cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF
cs2764
"2025-04-19T16:18:44Z"
0
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "base_model:huihui-ai/GLM-Z1-9B-0414-abliterated", "base_model:quantized:huihui-ai/GLM-Z1-9B-0414-abliterated", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-19T16:17:59Z"
--- base_model: huihui-ai/GLM-Z1-9B-0414-abliterated language: - zh - en library_name: transformers license: mit pipeline_tag: text-generation tags: - abliterated - uncensored - llama-cpp - gguf-my-repo --- # cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/GLM-Z1-9B-0414-abliterated`](https://huggingface.co/huihui-ai/GLM-Z1-9B-0414-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/GLM-Z1-9B-0414-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF --hf-file glm-z1-9b-0414-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF --hf-file glm-z1-9b-0414-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF --hf-file glm-z1-9b-0414-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo cs2764/GLM-Z1-9B-0414-abliterated-Q8_0-GGUF --hf-file glm-z1-9b-0414-abliterated-q8_0.gguf -c 2048 ```
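Beyond the CLI examples above, recent llama.cpp builds of `llama-server` also expose an OpenAI-compatible HTTP API. A minimal Python client sketch (8080 is the server's default port; adjust it if you started the server differently):

```python
import requests

# Assumes the llama-server command from the card is already running locally
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello! What can you do?"}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```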
dgambettaphd/M_gmm2_gen10_run0_W_doc1000_synt64_MPP
dgambettaphd
"2025-04-19T16:15:34Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-19T16:15:12Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Speedsy/ytu-turkish-bert-tiny-uncased-2000
Speedsy
"2025-04-19T16:15:06Z"
0
0
PyLate
[ "PyLate", "safetensors", "bert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:798036", "loss:Distillation", "en", "dataset:Speedsy/ms-marco-tr-bge", "arxiv:1908.10084", "base_model:ytu-ce-cosmos/turkish-base-bert-uncased", "base_model:finetune:ytu-ce-cosmos/turkish-base-bert-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-04-19T16:14:39Z"
---
language:
- en
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:798036
- loss:Distillation
base_model: ytu-ce-cosmos/turkish-base-bert-uncased
datasets:
- Speedsy/ms-marco-tr-bge
pipeline_tag: sentence-similarity
library_name: PyLate
---

# PyLate model based on ytu-ce-cosmos/turkish-base-bert-uncased

This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [ytu-ce-cosmos/turkish-base-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-base-bert-uncased) on the [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.

## Model Details

### Model Description
- **Model Type:** PyLate model
- **Base model:** [ytu-ce-cosmos/turkish-base-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-base-bert-uncased) <!-- at revision 11cac7fa187691c8518e587699c7939a55133e8f -->
- **Document Length:** 180 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** MaxSim
- **Training Dataset:**
    - [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)

### Full Model Architecture

```
ColBERT(
  (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```

## Usage
First install the PyLate library:

```bash
pip install -U pylate
```

### Retrieval

PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.

#### Indexing documents

First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:

```python
from pylate import indexes, models, retrieve

# The model id of this repository
pylate_model_id = "Speedsy/ytu-turkish-bert-tiny-uncased-2000"

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path=pylate_model_id,
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```

Note that you do not have to recreate the index and encode the documents every time.
Once you have created an index and added the documents, you can re-use the index later by loading it:

```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)
```

#### Retrieving top-k documents for queries

Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the top matches ids and relevance scores:

```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```

### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank:

```python
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path=pylate_model_id,
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```

<!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> -->

<!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge) at [b9b0f7f](https://huggingface.co/datasets/Speedsy/ms-marco-tr-bge/tree/b9b0f7fd13c3ce3b632a3a1cd37f6ddbf8a040f5) * Size: 798,036 training samples * Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code> * Approximate statistics based on the first 1000 samples: | | query_id | document_ids | scores | |:--------|:--------------------------------------------------------------------------------|:------------------------------------|:------------------------------------| | type | string | list | list | | details | <ul><li>min: 4 tokens</li><li>mean: 6.18 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> | * Samples: | query_id | document_ids | scores | |:---------------------|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------| | <code>817836</code> | <code>['2716076', '6741935', '2681109', '5562684', '3507339', ...]</code> | <code>[1.0, 0.7059561610221863, 0.21702419221401215, 0.38270196318626404, 0.20812414586544037, ...]</code> | | <code>1045170</code> | <code>['5088671', '2953295', '8783471', '4268439', '6339935', ...]</code> | <code>[1.0, 0.6493034362792969, 0.0692221149802208, 0.17963139712810516, 0.6697239875793457, ...]</code> | | <code>1154488</code> | <code>['6498614', '3770829', '1060712', '2590533', '7672044', ...]</code> | <code>[0.9497447609901428, 0.6662212610244751, 0.7423420548439026, 1.0, 0.6580896973609924, ...]</code> | * Loss: <code>pylate.losses.distillation.Distillation</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `learning_rate`: 3e-05 - `num_train_epochs`: 1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: 
False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0100 | 500 | 0.0412 | | 0.0200 | 1000 | 0.0393 | | 0.0301 | 1500 | 0.0389 | | 0.0401 | 2000 | 0.0382 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
mradermacher/s1.1-limo-multilingual-4-14B-GGUF
mradermacher
"2025-04-19T16:13:49Z"
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "sft", "en", "base_model:shanchen/s1.1-limo-multilingual-4-14B", "base_model:quantized:shanchen/s1.1-limo-multilingual-4-14B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-19T15:54:49Z"
--- base_model: shanchen/s1.1-limo-multilingual-4-14B language: - en library_name: transformers model_name: s1.1-limo-multilingual-20250418_165139 quantized_by: mradermacher tags: - generated_from_trainer - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shanchen/s1.1-limo-multilingual-4-14B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/s1.1-limo-multilingual-4-14B-GGUF/resolve/main/s1.1-limo-multilingual-4-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
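As a quick, hedged sketch of the usage described above: one way to run one of the quants listed in the table is through the llama-cpp-python bindings, which can pull a GGUF file straight from this repo (assumes `llama-cpp-python` and `huggingface_hub` are installed; the prompt and context size are placeholders):

```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant from the table above
llm = Llama.from_pretrained(
    repo_id="mradermacher/s1.1-limo-multilingual-4-14B-GGUF",
    filename="s1.1-limo-multilingual-4-14B.Q4_K_M.gguf",
    n_ctx=2048,  # context window; adjust for your use case
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in three languages."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```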
Htrixie/TRIXIE
Htrixie
"2025-04-19T16:13:22Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-19T15:41:12Z"
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: TRIXIE
---

# Trixie

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words
You should use `TRIXIE` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "TRIXIE",
    "lora_weights": "https://huggingface.co/Htrixie/TRIXIE/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Htrixie/TRIXIE', weight_name='lora.safetensors')
image = pipeline('TRIXIE').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Htrixie/TRIXIE/discussions) to add images that show off what you’ve made with this LoRA.
evgenyz/Reinforce-CartPole-v1
evgenyz
"2025-04-19T16:11:37Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2025-04-19T11:26:16Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Leonel-Maia/whisper-small-splitted
Leonel-Maia
"2025-04-19T16:10:49Z"
3
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:Leonel-Maia/fongbe-splitted", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-04-18T14:01:35Z"
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- Leonel-Maia/fongbe-splitted
metrics:
- wer
model-index:
- name: whisper-small-splitted
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Leonel-Maia/fongbe-splitted
      type: Leonel-Maia/fongbe-splitted
    metrics:
    - name: Wer
      type: wer
      value: 0.11257288805358778
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-small-splitted

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Leonel-Maia/fongbe-splitted dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1084
- Wer: 0.1126

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.1324        | 2.1094  | 500  | 0.1909          | 0.3307 |
| 0.0383        | 4.2187  | 1000 | 0.1153          | 0.1131 |
| 0.0184        | 6.3281  | 1500 | 0.1084          | 0.1126 |
| 0.0098        | 8.4374  | 2000 | 0.1122          | 0.1014 |
| 0.0076        | 10.5468 | 2500 | 0.1101          | 0.0966 |
| 0.0075        | 12.6562 | 3000 | 0.1156          | 0.0992 |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
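The card above leaves usage blank; a minimal hedged sketch with the standard transformers ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Fongbe ASR checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="Leonel-Maia/whisper-small-splitted",
)

# Transcribe a local audio file; "sample.wav" is an illustrative path
print(asr("sample.wav")["text"])
```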
nJavo/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
nJavo
"2025-04-19T16:09:43Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-14T23:46:28Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** nJavo - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
asm3515/llama3-agnews-full
asm3515
"2025-04-19T16:09:06Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-19T16:06:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
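The template above is blank, but the repo tags mark this as a text-classification model; a hedged sketch of loading it with the standard pipeline (the example headline and any AG News label set are assumptions based on the repo name):

```python
from transformers import pipeline

# Load the classifier; labels are whatever the checkpoint's config defines
clf = pipeline("text-classification", model="asm3515/llama3-agnews-full")

print(clf("Stocks rallied after the central bank held rates steady."))
```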
mbort1/f0758bc5-d41e-4a1c-b3ab-05c4f263897f
mbort1
"2025-04-19T16:08:55Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-04-19T12:16:41Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f0758bc5-d41e-4a1c-b3ab-05c4f263897f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 487444ad7c629297_train_data.json ds_type: json format: custom path: /workspace/input_data/487444ad7c629297_train_data.json type: field_instruction: Article field_output: Headline format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 8 eval_max_new_tokens: 128 eval_steps: 250 evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: mbort1/f0758bc5-d41e-4a1c-b3ab-05c4f263897f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 750 micro_batch_size: 8 mlflow_experiment_name: /tmp/487444ad7c629297_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 250 saves_per_epoch: null seed: 1 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ac00572d-71c9-447d-a8a6-0f1e3ac64035 wandb_project: mb1 wandb_run: your_name wandb_runid: ac00572d-71c9-447d-a8a6-0f1e3ac64035 warmup_steps: 30 weight_decay: 0.1 xformers_attention: null ``` </details><br> # f0758bc5-d41e-4a1c-b3ab-05c4f263897f This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 750 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 3.3763 | | 1.2889 | 0.0708 | 250 | 1.3106 | | 1.2256 | 0.1415 | 500 | 1.1869 | | 1.1805 | 0.2123 | 750 | 1.1558 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
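Given the axolotl config above (instruction = Article, output = Headline), a hedged loading sketch for the adapter with peft and transformers (the article text is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "mbort1/f0758bc5-d41e-4a1c-b3ab-05c4f263897f")

# The adapter was trained to produce a headline from an article body
inputs = tokenizer("Article text goes here.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```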
nqdhocai/eduQ-Me5-4cls
nqdhocai
"2025-04-19T16:07:05Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-19T15:37:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> **This model** is a fine-tuned version of [`intfloat/multilingual-e5-base`](https://huggingface.co/intfloat/multilingual-e5-base), specifically adapted for classification tasks on **education-related question types**. It predicts one of four categories for a given question: #### Label Definitions - `0` – **YES/NO/UNCERTAIN** - `1` – **MULTIPLE CHOICE** - `2` – **NUMBER** - `3` – **NUMBER & YES/NO/UNCERTAIN** #### Model Details - **Developed by:** `nqdhocai` - **Model type:** Sentence Embedding + Classification - **Languages:** Multilingual (e.g., English, Vietnamese, etc.) - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Finetuned from:** [`intfloat/multilingual-e5-base`](https://huggingface.co/intfloat/multilingual-e5-base) ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
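To make the label scheme above concrete, a hedged usage sketch with the standard text-classification pipeline (the question and the `LABEL_n` naming are assumptions; check the checkpoint's `id2label` config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nqdhocai/eduQ-Me5-4cls")

# Mapping from the label definitions in the card
label_names = {
    "LABEL_0": "YES/NO/UNCERTAIN",
    "LABEL_1": "MULTIPLE CHOICE",
    "LABEL_2": "NUMBER",
    "LABEL_3": "NUMBER & YES/NO/UNCERTAIN",
}

pred = classifier("How many students passed the exam?")[0]
print(label_names.get(pred["label"], pred["label"]), pred["score"])
```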
bekors/bab
bekors
"2025-04-19T16:04:52Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-19T16:04:52Z"
--- license: apache-2.0 ---
cshim-cmu/es_fi_quz
cshim-cmu
"2025-04-19T16:04:42Z"
754
0
null
[ "pytorch", "marian", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
"2025-04-18T00:32:38Z"
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: es_fi_quz
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# es_fi_quz

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-fi](https://huggingface.co/Helsinki-NLP/opus-mt-es-fi) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8410
- Bleu: 0.802
- Chrf: 11.5747
- Gen Len: 13.6378

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Chrf    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|
| 8.9357        | 0.09  | 1000 | 9.4845          | 0.9353 | 11.8318 | 12.8913 |
| 8.5265        | 0.18  | 2000 | 9.0739          | 0.9747 | 11.792  | 13.0161 |
| 8.2076        | 0.27  | 3000 | 8.7285          | 0.9526 | 11.7495 | 13.1338 |
| 7.8835        | 0.36  | 4000 | 8.4139          | 0.9341 | 11.6651 | 13.2606 |
| 7.6147        | 0.45  | 5000 | 8.1205          | 0.9309 | 11.6082 | 13.3592 |
| 7.3691        | 0.54  | 6000 | 7.8410          | 0.802  | 11.5747 | 13.6378 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
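A hedged usage sketch for this checkpoint (it is a Marian model fine-tuned from opus-mt-es-fi; judging by the repo name the target side is Quechua (quz), so the Spanish example below is an assumption):

```python
from transformers import MarianMTModel, MarianTokenizer

repo = "cshim-cmu/es_fi_quz"
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

# Translate one illustrative Spanish sentence
batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```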
Greenf1re/Qwen2.5-0.5B-nietzsche-alpaca
Greenf1re
"2025-04-19T16:04:18Z"
2
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-18T18:43:41Z"
--- license: apache-2.0 ---
ZeroHang/dummy-model
ZeroHang
"2025-04-19T16:04:10Z"
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-04-19T16:03:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
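The template above is empty, but the tags mark this as a CamemBERT fill-mask checkpoint; a hedged sketch (the French example sentence and the `<mask>` token are assumptions based on the architecture):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ZeroHang/dummy-model")

# CamemBERT-style models use <mask> as the mask token
for pred in unmasker("Le camembert est <mask> !"):
    print(pred["token_str"], round(pred["score"], 3))
```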
mradermacher/mega_blend_model-i1-GGUF
mradermacher
"2025-04-19T16:01:53Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Pedro13543/mega_blend_model", "base_model:quantized:Pedro13543/mega_blend_model", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-04-19T14:12:46Z"
--- base_model: Pedro13543/mega_blend_model language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Pedro13543/mega_blend_model <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/mega_blend_model-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-IQ4_NL.gguf) | i1-IQ4_NL 
| 4.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/mega_blend_model-i1-GGUF/resolve/main/mega_blend_model.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
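As with the static quants, a hedged sketch of running one of the imatrix quants above via the llama-cpp-python bindings (the file name is taken from the table; the prompt is a placeholder):

```python
from llama_cpp import Llama

# Fetch and load the i1-Q4_K_M quant recommended in the table
llm = Llama.from_pretrained(
    repo_id="mradermacher/mega_blend_model-i1-GGUF",
    filename="mega_blend_model.i1-Q4_K_M.gguf",
)

# Simple completion call; prompt is illustrative
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```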
StevenMD/legal-summarizer5
StevenMD
"2025-04-19T15:57:29Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-04-19T10:46:23Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
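Since the quick-start above is still a blank template, here is a minimal, hedged sketch inferred only from this record's tags (`transformers`, `longt5`, `text2text-generation`); the actual model id is not visible in this excerpt, so `MODEL_ID` below is a hypothetical placeholder you must replace:

```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MODEL_ID"  # hypothetical placeholder: substitute the repo's actual id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# LongT5 checkpoints are seq2seq models, so a task-prefixed input is typical.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```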
YungRox/Beyazz
YungRox
"2025-04-19T15:56:34Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-19T15:42:52Z"
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: Beyazz
---

# Beyazz

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Beyazz` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "Beyazz",
    "lora_weights": "https://huggingface.co/YungRox/Beyazz/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# The run returns one file-like object per generated image.
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('YungRox/Beyazz', weight_name='lora.safetensors')
image = pipeline('Beyazz').images[0]
image.save("beyazz.png")  # added for completeness; the filename is arbitrary
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/YungRox/Beyazz/discussions) to add images that show off what you’ve made with this LoRA.
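As a hedged aside that goes beyond the original card: the diffusers documentation linked above also lets you rescale a loaded LoRA before inference via its PEFT integration. A minimal sketch, assuming the current `load_lora_weights`/`set_adapters` API (the adapter name `beyazz` is our own label, not something defined by this repo):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')

# Register the LoRA under an explicit adapter name so it can be rescaled later.
pipeline.load_lora_weights('YungRox/Beyazz', weight_name='lora.safetensors',
                           adapter_name='beyazz')
pipeline.set_adapters('beyazz', adapter_weights=0.8)  # <1.0 softens the trained style

image = pipeline('Beyazz').images[0]
```

Weights below 1.0 dilute the LoRA's effect and weights above 1.0 exaggerate it; 0.8 here is an arbitrary illustration, not a recommendation from the trainer.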