Dataset Viewer
Auto-converted to Parquet

| Column | Type | Range / Values |
|:--------------|:---------|:-------------------------------------------|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-13 18:27:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 425 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-13 18:24:29 |
| card | string | length 11 to 1.01M |
shukdevdatta123/Dreaddit_DistillBert_Stress_Model
shukdevdatta123
"2024-11-17T18:24:33"
18
0
null
[ "tf", "distilbert", "license:apache-2.0", "region:us" ]
null
"2024-11-17T18:23:28"
--- license: apache-2.0 ---
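The card above documents nothing beyond the license, but the row's tags mark this as a TensorFlow DistilBERT checkpoint, so loading plausibly follows the standard pattern below; the tokenizer availability and the meaning of the predicted label index are assumptions, not documented by the author.

```python
# Hedged sketch: assumes a standard TF DistilBERT sequence-classification
# checkpoint; the label mapping is undocumented, so reading index 1 as
# "stressed" is an assumption.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "shukdevdatta123/Dreaddit_DistillBert_Stress_Model"
tokenizer = AutoTokenizer.from_pretrained(repo)  # if the repo ships no tokenizer, try "distilbert-base-uncased"
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Deadlines are piling up and I can't sleep.", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```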
mradermacher/kannada-QA-0.1-GGUF
mradermacher
"2024-12-29T04:31:42"
46
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:projectbaraat/kannada-QA-0.1", "base_model:quantized:projectbaraat/kannada-QA-0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-12-29T04:11:28"
--- base_model: projectbaraat/kannada-QA-0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/projectbaraat/kannada-QA-0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q2_K.gguf) | Q2_K | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q3_K_L.gguf) | Q3_K_L | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.IQ4_XS.gguf) | IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q6_K.gguf) | Q6_K | 5.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/kannada-QA-0.1-GGUF/resolve/main/kannada-QA-0.1.f16.gguf) | f16 | 13.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. 
<!-- end -->
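The Usage section above defers to TheBloke's READMEs; for a concrete starting point, one of the listed quants can be run with llama-cpp-python roughly as follows (the quant choice and context size are illustrative, not recommendations from the card).

```python
# Hedged sketch with llama-cpp-python; Q4_K_M is one of the files listed in
# the card's quant table, and n_ctx is an illustrative choice.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/kannada-QA-0.1-GGUF",
    filename="kannada-QA-0.1.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Question: What is the capital of Karnataka?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```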
distillslm/alpaca_supervised_kd_sft_Qwen2.5-3B-Instruct_from_Qwen2.5-7B-Instruct
distillslm
"2025-03-10T16:32:35"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "gkd", "conversational", "arxiv:2306.13649", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-10T06:21:22"
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: alpaca_supervised_kd_sft_Qwen2.5-3B-Instruct_from_Qwen2.5-7B-Instruct tags: - generated_from_trainer - trl - gkd licence: license --- # Model Card for alpaca_supervised_kd_sft_Qwen2.5-3B-Instruct_from_Qwen2.5-7B-Instruct This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="distillslm/alpaca_supervised_kd_sft_Qwen2.5-3B-Instruct_from_Qwen2.5-7B-Instruct", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rucnyz/huggingface/runs/k0mfdxqd) This model was trained with GKD, a method introduced in [On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes](https://huggingface.co/papers/2306.13649). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.6.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite GKD as: ```bibtex @inproceedings{agarwal2024on-policy, title = {{On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes}}, author = {Rishabh Agarwal and Nino Vieillard and Yongchao Zhou and Piotr Stanczyk and Sabela Ramos Garea and Matthieu Geist and Olivier Bachem}, year = 2024, booktitle = {The Twelfth International Conference on Learning Representations, {ICLR} 2024, Vienna, Austria, May 7-11, 2024}, publisher = {OpenReview.net}, url = {https://openreview.net/forum?id=3zKtaqxLhW}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
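The card names GKD as the training method but shows only inference; the sketch below reconstructs roughly how such a student/teacher run is wired up with TRL's GKDTrainer. The dataset preparation and every hyperparameter here are illustrative assumptions, not the authors' recipe.

```python
# Hedged sketch of a GKD run with TRL's GKDTrainer; the dataset mapping and
# hyperparameters are illustrative assumptions, not the authors' script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import GKDConfig, GKDTrainer

student_id = "Qwen/Qwen2.5-3B-Instruct"  # base model from the card
teacher_id = "Qwen/Qwen2.5-7B-Instruct"  # teacher implied by the repo name

tokenizer = AutoTokenizer.from_pretrained(student_id)
student = AutoModelForCausalLM.from_pretrained(student_id)
teacher = AutoModelForCausalLM.from_pretrained(teacher_id)

# GKDTrainer expects conversational examples under a "messages" key; mapping
# Alpaca into that shape mirrors the "alpaca_supervised_kd" repo name.
raw = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

def to_messages(ex):
    prompt = ex["instruction"] + ("\n" + ex["input"] if ex["input"] else "")
    return {"messages": [{"role": "user", "content": prompt},
                         {"role": "assistant", "content": ex["output"]}]}

train_dataset = raw.map(to_messages, remove_columns=raw.column_names)

# lmbda=0 keeps training fully on dataset outputs (supervised KD) and beta=0
# uses forward KL -- both chosen to match the "supervised_kd" naming.
args = GKDConfig(output_dir="gkd-student", lmbda=0.0, beta=0.0,
                 per_device_train_batch_size=1)
trainer = GKDTrainer(model=student, teacher_model=teacher, args=args,
                     processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```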
iceman2434/xlm-roberta-base-ft-udpos213-top9lang-lr4.5e-5
iceman2434
"2024-04-17T13:15:10"
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "tl", "dataset:universal_dependencies", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-04-17T13:13:05"
--- datasets: - universal_dependencies language: - tl metrics: - f1 pipeline_tag: token-classification --- ## Model Specification - Model: XLM-RoBERTa (base-sized model) - Training Data: Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, Czech, Persian, and Faroese corpora (top 9 languages) - Training Details: Base configuration with a minor adjustment to the learning rate (4.5e-5) ## Evaluation - Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (testing set) - Evaluated in a zero-shot cross-lingual setting on the Ugnayan testing set, reaching 75.98% accuracy ## POS Tags - ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
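The card lists the tag inventory but no loading code; inference presumably works like any token-classification checkpoint (the Tagalog example sentence is illustrative, not from the card).

```python
# Hedged sketch: standard token-classification inference for this checkpoint.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top9lang-lr4.5e-5",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)
for token in tagger("Kumain ang bata ng mansanas."):
    print(token["word"], token["entity_group"])
```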
xiuyul/mamba-2.8b-zephyr
xiuyul
"2025-01-12T20:38:57"
22,599
18
transformers
[ "transformers", "pytorch", "safetensors", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:xiuyul/mamba-2.8b-ultrachat", "base_model:finetune:xiuyul/mamba-2.8b-ultrachat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-12-28T17:36:20"
--- license: apache-2.0 base_model: xiuyul/mamba-2.8b-ultrachat datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: mamba-2.8b-zephyr results: [] --- # mamba-2.8b-zephyr This model is a fine-tuned version of [xiuyul/mamba-2.8b-ultrachat](https://huggingface.co/xiuyul/mamba-2.8b-ultrachat) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset trained using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). The base model, [xiuyul/mamba-2.8b-ultrachat](https://huggingface.co/xiuyul/mamba-2.8b-ultrachat), was instruction-tuned from [state-spaces/mamba-2.8b-slimpj](https://huggingface.co/state-spaces/mamba-2.8b-slimpj) on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset. It achieves the following results on the evaluation set: - Loss: 0.4996 - Rewards/chosen: -0.4523 - Rewards/rejected: -1.6105 - Rewards/accuracies: 0.7857 - Rewards/margins: 1.1582 - Logps/rejected: -290.1885 - Logps/chosen: -359.0926 - Logits/rejected: 23.0423 - Logits/chosen: 23.1861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6639 | 0.1 | 100 | 0.6593 | 0.1762 | 0.0957 | 0.6151 | 0.0805 | -273.1268 | -352.8086 | 23.5852 | 23.8356 | | 0.5804 | 0.21 | 200 | 0.5836 | 0.0780 | -0.3396 | 0.6508 | 0.4176 | -277.4798 | -353.7904 | 23.5872 | 23.8302 | | 0.5815 | 0.31 | 300 | 0.5510 | -0.1923 | -0.7857 | 0.7421 | 0.5934 | -281.9403 | -356.4929 | 23.5224 | 23.7498 | | 0.5526 | 0.41 | 400 | 0.5361 | -0.1953 | -0.8928 | 0.7341 | 0.6975 | -283.0119 | -356.5235 | 23.5033 | 23.7264 | | 0.5225 | 0.52 | 500 | 0.5262 | -0.1041 | -0.8809 | 0.7540 | 0.7768 | -282.8929 | -355.6114 | 23.4578 | 23.6718 | | 0.5577 | 0.62 | 600 | 0.5156 | -0.1946 | -1.0285 | 0.7659 | 0.8339 | -284.3683 | -356.5158 | 23.4466 | 23.6618 | | 0.5515 | 0.72 | 700 | 0.5163 | 0.0648 | -0.7650 | 0.7659 | 0.8298 | -281.7334 | -353.9220 | 23.4243 | 23.6343 | | 0.5159 | 0.83 | 800 | 0.5113 | -0.1400 | -1.0595 | 0.7778 | 0.9195 | -284.6783 | -355.9698 | 23.4095 | 23.6179 | | 0.5242 | 0.93 | 900 | 0.5089 | -0.0383 | -0.9148 | 0.7659 | 0.8766 | -283.2318 | -354.9529 | 23.4035 | 23.6145 | | 0.4618 | 1.03 | 1000 | 0.5077 | -0.1223 | -1.0201 | 0.7778 | 0.8978 | -284.2841 | -355.7929 | 23.3805 | 23.5856 | | 0.4484 | 1.14 | 1100 | 0.5019 | -0.3311 | -1.3299 | 0.7778 | 0.9989 | -287.3827 | -357.8807 | 23.3427 | 23.5381 | | 0.4228 | 1.24 | 1200 | 0.5034 | -0.0617 | -1.0989 | 0.7619 | 1.0372 | -285.0726 | -355.1871 | 23.3191 | 23.5101 | | 0.4306 | 1.34 | 1300 | 
0.5032 | -0.1585 | -1.1849 | 0.7698 | 1.0264 | -285.9320 | -356.1549 | 23.2889 | 23.4787 | | 0.4678 | 1.45 | 1400 | 0.5030 | -0.2351 | -1.1601 | 0.7817 | 0.9250 | -285.6841 | -356.9207 | 23.2661 | 23.4551 | | 0.4317 | 1.55 | 1500 | 0.4997 | -0.1401 | -1.1458 | 0.7619 | 1.0057 | -285.5417 | -355.9716 | 23.2621 | 23.4524 | | 0.4363 | 1.65 | 1600 | 0.5010 | -0.3313 | -1.3592 | 0.7738 | 1.0279 | -287.6752 | -357.8830 | 23.2320 | 23.4178 | | 0.408 | 1.76 | 1700 | 0.4989 | -0.2456 | -1.3073 | 0.7778 | 1.0617 | -287.1568 | -357.0265 | 23.2135 | 23.3950 | | 0.4076 | 1.86 | 1800 | 0.4996 | -0.3904 | -1.4365 | 0.7659 | 1.0461 | -288.4482 | -358.4738 | 23.1866 | 23.3617 | | 0.4547 | 1.96 | 1900 | 0.5008 | -0.2516 | -1.2648 | 0.7857 | 1.0133 | -286.7317 | -357.0858 | 23.1605 | 23.3298 | | 0.3469 | 2.07 | 2000 | 0.4977 | -0.2868 | -1.3916 | 0.7778 | 1.1048 | -287.9999 | -357.4383 | 23.1361 | 23.2990 | | 0.3547 | 2.17 | 2100 | 0.4987 | -0.4251 | -1.5510 | 0.7619 | 1.1259 | -289.5935 | -358.8210 | 23.1142 | 23.2730 | | 0.3468 | 2.27 | 2200 | 0.4979 | -0.2674 | -1.3945 | 0.7778 | 1.1271 | -288.0285 | -357.2443 | 23.0998 | 23.2561 | | 0.3432 | 2.37 | 2300 | 0.5026 | -0.3792 | -1.4630 | 0.7738 | 1.0838 | -288.7130 | -358.3621 | 23.0726 | 23.2233 | | 0.324 | 2.48 | 2400 | 0.5022 | -0.4892 | -1.6090 | 0.7698 | 1.1198 | -290.1737 | -359.4620 | 23.0543 | 23.2006 | | 0.3556 | 2.58 | 2500 | 0.5010 | -0.5270 | -1.6576 | 0.7817 | 1.1306 | -290.6595 | -359.8404 | 23.0520 | 23.1981 | | 0.3277 | 2.68 | 2600 | 0.4990 | -0.5401 | -1.6816 | 0.7778 | 1.1415 | -290.8996 | -359.9708 | 23.0449 | 23.1901 | | 0.3262 | 2.79 | 2700 | 0.4993 | -0.4952 | -1.6410 | 0.7778 | 1.1458 | -290.4932 | -359.5220 | 23.0439 | 23.1878 | | 0.3566 | 2.89 | 2800 | 0.4985 | -0.4474 | -1.5918 | 0.7778 | 1.1443 | -290.0010 | -359.0445 | 23.0433 | 23.1871 | | 0.3386 | 2.99 | 2900 | 0.4983 | -0.4598 | -1.6040 | 0.7817 | 1.1442 | -290.1235 | -359.1679 | 23.0427 | 23.1866 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.1+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
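The card records DPO hyperparameters and per-step metrics but not the training harness itself; a generic TRL sketch of DPO on the named dataset looks roughly like the following. A small instruct model stands in for the Mamba base, since running the actual mamba-2.8b checkpoints needs Mamba-specific modeling code that the card does not show.

```python
# Hedged sketch of DPO fine-tuning with TRL on the dataset named in the card;
# the stand-in model and most settings are assumptions, and only the values
# marked "from the card" come from the hyperparameter list above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in; the card's base is xiuyul/mamba-2.8b-ultrachat
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="dpo-out",
    learning_rate=5e-7,             # from the card
    per_device_train_batch_size=4,  # from the card
    gradient_accumulation_steps=2,  # from the card
    num_train_epochs=3,             # from the card
)
trainer = DPOTrainer(model=model, args=args, processing_class=tokenizer,
                     train_dataset=dataset)
trainer.train()
```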
mwalmsley/zoobot-encoder-maxvit_rmlp_small_rw_224
mwalmsley
"2024-04-11T20:15:45"
19
0
timm
[ "timm", "pytorch", "image-classification", "license:apache-2.0", "region:us" ]
image-classification
"2024-03-19T21:09:10"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 --- # Model card for zoobot-encoder-maxvit_rmlp_small_rw_224 Please see the [Zoobot docs](https://zoobot.readthedocs.io/en/latest/pretrained_models.html) for loading and finetuning instructions. But minimally, you can use this like any timm encoder: ```python import timm encoder = timm.create_model('hf_hub:mwalmsley/zoobot-encoder-some-name', pretrained=True, num_classes=0) ```
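Continuing the card's snippet with the concrete repo name from this row, the encoder yields pooled feature vectors ready for a downstream head (the dummy input is illustrative; the feature dimension depends on the MaxViT variant).

```python
# Hedged follow-on to the card's snippet: feature extraction with the actual
# repo name from this row; the random tensor stands in for a galaxy image.
import timm
import torch

encoder = timm.create_model(
    "hf_hub:mwalmsley/zoobot-encoder-maxvit_rmlp_small_rw_224",
    pretrained=True,
    num_classes=0,  # strip the classification head, keep pooled features
)
encoder.eval()
with torch.no_grad():
    feats = encoder(torch.randn(1, 3, 224, 224))
print(feats.shape)  # (1, feature_dim); the dim depends on the architecture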
PrunaAI/vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed
PrunaAI
"2024-11-13T13:22:49"
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
"2024-03-14T11:32:07"
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial on running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` package, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed huggingface-cli download PrunaAI/vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed --local-dir vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_small_patch16_224.augreg_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, vit_small_patch16_224.augreg_in1k, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
common-canvas/CommonCanvas-XL-NC
common-canvas
"2024-05-16T18:47:08"
61
9
diffusers
[ "diffusers", "onnx", "safetensors", "common-canvas", "stable-diffusion", "sdxl", "en", "dataset:common-canvas/commoncatalog-cc-by-sa", "dataset:common-canvas/commoncatalog-cc-by", "dataset:common-canvas/commoncatalog-cc-by-nc-sa", "dataset:common-canvas/commoncatalog-cc-by-nc", "arxiv:2310.16825", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-29T16:21:32"
--- license: cc-by-nc-sa-4.0 tags: - common-canvas - stable-diffusion - sdxl datasets: - common-canvas/commoncatalog-cc-by-sa - common-canvas/commoncatalog-cc-by - common-canvas/commoncatalog-cc-by-nc-sa - common-canvas/commoncatalog-cc-by-nc language: - en --- # CommonCanvas-XL-NC ## Summary CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based on Stable Diffusion XL. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog dataset (see Data Card), a large dataset of Creative Commons licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model. **Input:** CommonCatalog text captions **Output:** CommonCatalog images **Architecture:** Stable Diffusion XL **Version Number:** 0.1 The goal of this project is to produce a model that is competitive with Stable Diffusion XL, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the Creative Commons works used to train it. The exact training recipe of the model can be found in the paper: https://arxiv.org/abs/2310.16825 ## Performance Limitations CommonCanvas under-performs in several categories, including faces, general photography, and paintings (see paper, Figure 8). These datasets all originated from the Conceptual Captions dataset, which relies on web-scraped data. These web-sourced captions, while abundant, may not always align with human-generated language nuances. Transitioning to synthetic captions introduces certain performance challenges; however, the drop in performance is not as dramatic as one might assume. ## Training Dataset Limitations The model is trained on 10-year-old YFCC data and may not have modern concepts or recent events in its training corpus. Performance on this model will be worse on certain proper nouns or specific celebrities, but this is a feature, not a bug. The model may not generate known artwork, individual celebrities, or specific locations due to the autogenerated nature of the caption data. Note: the non-commercial variants of this model are explicitly not intended to be used commercially. * It is trained on data derived from the Flickr100M dataset. The information is dated and known to have a bias towards internet-connected Western countries. Some areas such as the Global South lack representation. ## Associated Risks * Text in images produced by the model will likely be difficult to read. * The model struggles with more complex tasks that require compositional understanding. * It may not accurately generate faces or representations of specific people. * The model primarily learned from English descriptions and may not perform as effectively in other languages. * The autoencoder aspect of the model introduces some information loss. * It may be possible to guide the model to generate objectionable content, i.e. nudity or other NSFW material. ## Intended Uses * Using the model for generative AI research * Safe deployment of models which have the potential to generate harmful content. * Probing and understanding the limitations and biases of generative models. * Generation of artworks and use in design and other artistic processes. * Applications in educational or creative tools. * Research on generative models. ## Unintended Uses * Commercial uses ## Usage We recommend using the MosaicML Diffusion Repo to finetune / train the model: https://github.com/mosaicml/diffusion. Example finetuning code coming soon. ### Spaces demo Try the model demo on [Hugging Face Spaces](https://huggingface.co/spaces/common-canvas/CommonCanvas) ### Inference with 🧨 diffusers ```py import torch from diffusers import StableDiffusionXLPipeline device = "cuda" # added so the snippet runs as written pipe = StableDiffusionXLPipeline.from_pretrained( "common-canvas/CommonCanvas-XL-NC", custom_pipeline="multimodalart/sdxl_perturbed_attention_guidance", #read more at https://huggingface.co/multimodalart/sdxl_perturbed_attention_guidance torch_dtype=torch.float16 ).to(device) prompt = "a cat sitting in a car seat" image = pipe(prompt, num_inference_steps=25).images[0] ``` ### Inference with ComfyUI / AUTOMATIC1111 [Download safetensors ⬇️](https://huggingface.co/common-canvas/CommonCanvas-XLNC/resolve/main/commoncanvas_xl_nc.safetensors?download=true) ## Evaluation/Validation We validated the model against Stability AI’s SD2 model and compared the two in a human user study. ## Acknowledgements We thank @multimodalart, @Wauplin, and @lhoestq at Hugging Face for helping us host the dataset and model weights. ## Citation ``` @article{gokaslan2023commoncanvas, title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images}, author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr}, journal={arXiv preprint arXiv:2310.16825}, year={2023} } ```
TOMFORD79/TCCS9080_CS3
TOMFORD79
"2025-02-25T17:10:28"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-25T16:45:37"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
duyphu/097f0465-7545-4bd7-975a-c64f5c657c07
duyphu
"2025-01-15T13:07:55"
9
0
peft
[ "peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "custom_code", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
"2025-01-15T12:57:27"
--- library_name: peft license: apache-2.0 base_model: tiiuae/falcon-7b tags: - axolotl - generated_from_trainer model-index: - name: 097f0465-7545-4bd7-975a-c64f5c657c07 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: tiiuae/falcon-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f99bc2b131bc3ad4_train_data.json ds_type: json format: custom path: /workspace/input_data/f99bc2b131bc3ad4_train_data.json type: field_input: context field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: duyphu/097f0465-7545-4bd7-975a-c64f5c657c07 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/f99bc2b131bc3ad4_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c0192787-2382-452a-bd70-d92d11e6d747 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c0192787-2382-452a-bd70-d92d11e6d747 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 097f0465-7545-4bd7-975a-c64f5c657c07 This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 1.6710 | | 7.0404 | 0.0044 | 10 | 1.6212 | | 6.3266 | 0.0089 | 20 | 1.4813 | | 6.0375 | 0.0133 | 30 | 1.4421 | | 5.7293 | 0.0177 | 40 | 1.4200 | | 6.0671 | 0.0221 | 50 | 1.4167 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
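The axolotl config above defines a LoRA adapter on tiiuae/falcon-7b, but the card omits inference; attaching the adapter presumably follows the usual PEFT pattern (dtype, device map, and generation settings here are illustrative assumptions).

```python
# Hedged sketch: attach the LoRA adapter to its falcon-7b base for inference;
# dtype, device_map, and generation settings are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b"
adapter_id = "duyphu/097f0465-7545-4bd7-975a-c64f5c657c07"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```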
OverlordGreyrat/q-Taxi-v3
OverlordGreyrat
"2025-03-16T16:51:52"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2025-03-16T16:51:48"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="OverlordGreyrat/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ``` A hedged reconstruction of the `load_from_hub` helper used here follows below.
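`load_from_hub` in the card's snippet is not a library import; in the Hugging Face Deep RL course it is a small helper around `hf_hub_download` plus pickle. A hedged reconstruction (the contents of the pickle, beyond the `env_id` key the snippet reads, are assumptions):

```python
# Hedged reconstruction of the course-style helper the card's snippet calls;
# the pickle is assumed to hold a dict with at least an "env_id" key.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-table bundle from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="OverlordGreyrat/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```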
lilyyellow/lora_model
lilyyellow
"2024-04-08T08:23:58"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-08T08:21:11"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhanjun520/ppo-LunarLander-v2
zhanjun520
"2024-01-28T13:03:51"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-28T12:59:06"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.37 +/- 16.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption, since the card left this section as a TODO): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="zhanjun520/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") # filename assumed model = PPO.load(checkpoint) ```
SEBIS/legal_t5_small_multitask_cs_es
SEBIS
"2021-06-23T10:51:58"
5
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Cszech Spanish model", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:04"
--- language: Cszech Spanish tags: - translation Cszech Spanish model datasets: - dcep europarl jrc-acquis widget: - text: "Antonio Tajani (místopředseda Komise) ." --- # legal_t5_small_multitask_cs_es model Model for translating legal text from Czech to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora (jrc-acquis, europarl, and dcep) covering 42 language pairs, together with an unsupervised masked-language-modeling prediction task. ## Model description No separate pretraining is involved for the legal_t5_small_multitask_cs_es model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario. ## Intended uses & limitations The model can be used for translation of legal texts from Czech to Spanish. ### How to use Here is how to use this model to translate legal text from Czech to Spanish in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_cs_es"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_cs_es", do_lower_case=False, skip_special_tokens=True), device=0 ) cs_text = "Antonio Tajani (místopředseda Komise) ." pipeline([cs_text], max_length=512) ``` ## Training data The legal_t5_small_multitask_cs_es model (the supervised task involved only the corresponding language pair, while the unsupervised task drew on the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule. ### Preprocessing A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte-pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used on the translation test dataset, it achieves the following results: Test results: | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_multitask_cs_es | 48.559 | ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
StaAhmed/QA_prompt_C
StaAhmed
"2024-03-10T09:44:12"
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:StaAhmed/QA_prompt_C", "base_model:finetune:StaAhmed/QA_prompt_C", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-10T08:14:59"
--- license: mit base_model: StaAhmed/QA_prompt_C tags: - generated_from_trainer model-index: - name: QA_prompt_C results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QA_prompt_C This model is a fine-tuned version of [StaAhmed/QA_prompt_C](https://huggingface.co/StaAhmed/QA_prompt_C) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
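The card stops at hyperparameters; as a GPT-2 text-generation checkpoint it can at least be exercised generically (the question/answer prompt shape below is an assumption, since the expected format is undocumented).

```python
# Hedged sketch: generic generation; the prompt format is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="StaAhmed/QA_prompt_C")
out = generator("Question: What is this model for?\nAnswer:", max_new_tokens=40)
print(out[0]["generated_text"])
```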
Gikubu/joe_roberta
Gikubu
"2023-07-22T21:55:46"
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-07-22T19:49:08"
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: joe_roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # joe_roberta This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5302 - Rmse: 0.5886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6724 | 4.0 | 500 | 0.5302 | 0.5886 | | 0.2745 | 8.0 | 1000 | 0.7656 | 0.6029 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
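The card reports loss and RMSE but no inference snippet; generic text-classification inference would look like the following (the labels and their semantics are not documented).

```python
# Hedged sketch: generic sequence-classification inference; the label names
# and their meaning are undocumented in the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="Gikubu/joe_roberta")
print(classifier("This is a test sentence."))
```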
robiulawaldev/129650dd-f325-4cd0-9d96-ef26ec8c4563
robiulawaldev
"2025-02-02T03:53:35"
8
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "license:apache-2.0", "region:us" ]
null
"2025-02-02T01:34:49"
--- library_name: peft license: apache-2.0 base_model: beomi/polyglot-ko-12.8b-safetensors tags: - axolotl - generated_from_trainer model-index: - name: 129650dd-f325-4cd0-9d96-ef26ec8c4563 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: beomi/polyglot-ko-12.8b-safetensors bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c144d6f191283d08_train_data.json ds_type: json format: custom path: /workspace/input_data/c144d6f191283d08_train_data.json type: field_input: task field_instruction: input field_output: label format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/129650dd-f325-4cd0-9d96-ef26ec8c4563 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/c144d6f191283d08_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 061b2dbd-12bf-4498-970d-9e283abf6f06 wandb_project: Birthday-SN56-35-Gradients-On-Demand wandb_run: your_name wandb_runid: 061b2dbd-12bf-4498-970d-9e283abf6f06 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 129650dd-f325-4cd0-9d96-ef26ec8c4563 This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 0.3309 | | 0.0754 | 0.0005 | 50 | 0.0309 | | 0.0696 | 0.0009 | 100 | 0.0237 | | 0.0522 | 0.0014 | 150 | 0.0226 | | 0.0403 | 0.0019 | 200 | 0.0209 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
martimfasantos/tinyllama-1.1b-sum-sft-full_LR1e-5
martimfasantos
"2024-06-30T20:22:42"
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:martimfasantos/openai-summarize-tldr", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-30T18:29:22"
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - martimfasantos/openai-summarize-tldr model-index: - name: tinyllama-1.1b-sum-sft-full_LR1e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-1.1b-sum-sft-full_LR1e-5 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the martimfasantos/openai-summarize-tldr dataset. It achieves the following results on the evaluation set: - Loss: 2.1608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1562 | 0.9997 | 1476 | 2.1608 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.20.0 - Tokenizers 0.19.1
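The card covers the SFT run but not inference; given the openai-summarize-tldr training data, a generation sketch might look like this (the `TL;DR:` prompt convention is an assumption based on the dataset, not the card).

```python
# Hedged sketch: TL;DR-style generation; the prompt format is an assumption
# based on the dataset convention, not documented in the card.
from transformers import pipeline

summarizer = pipeline("text-generation",
                      model="martimfasantos/tinyllama-1.1b-sum-sft-full_LR1e-5")
post = "I spent three weeks refactoring our billing code, only to find the bug was a timezone issue all along."
out = summarizer(f"{post}\nTL;DR:", max_new_tokens=48, return_full_text=False)
print(out[0]["generated_text"])
```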
bmehrba/Llama-2-13b-chat-hf-fine-tuned-adapters_Aleatoric_Llama13b_0.6_Seed104
bmehrba
"2024-03-14T20:20:20"
2
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:adapter:meta-llama/Llama-2-13b-chat-hf", "region:us" ]
null
"2024-03-14T20:20:12"
--- library_name: peft base_model: meta-llama/Llama-2-13b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.7.0.dev0
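The "How to Get Started" section is empty, but the card does record the exact `bitsandbytes` settings used in training. A hedged sketch that mirrors that config (nf4, double quantization, bfloat16 compute) and attaches this adapter follows; it assumes access to the gated Llama-2 base weights.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the quantization config recorded above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",  # gated; requires accepted license
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

# Adapter repo id taken from this dataset row
model = PeftModel.from_pretrained(
    base,
    "bmehrba/Llama-2-13b-chat-hf-fine-tuned-adapters_Aleatoric_Llama13b_0.6_Seed104",
)
```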
classla/bcms-bertic-frenk-hate
classla
"2023-06-23T06:30:55"
138
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "hate-speech", "hr", "arxiv:1906.02045", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05"
---
language: "hr"
license: "cc-by-sa-4.0"
tags:
- text-classification
- hate-speech
widget:
- text: "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni'."
---

# bcms-bertic-frenk-hate

Text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433) comprising LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).

## Fine-tuning hyperparameters

Fine-tuning was performed with `simpletransformers`. Beforehand, a brief hyperparameter optimisation was performed, and the presumed optimal hyperparameters are:

```python
model_args = {
    "num_train_epochs": 12,
    "learning_rate": 1e-5,
    "train_batch_size": 74}
```

## Performance

The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed post festum.

| model                      | average accuracy | average macro F1 |
|----------------------------|------------------|------------------|
| bcms-bertic-frenk-hate     | 0.8313           | 0.8219           |
| EMBEDDIA/crosloengual-bert | 0.8054           | 0.796            |
| xlm-roberta-base           | 0.7175           | 0.7049           |
| fasttext                   | 0.771            | 0.754            |

From the recorded accuracies and macro F1 scores, p-values were also calculated:

Comparison with `crosloengual-bert`:

| test           | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon       | 0.00781          | 0.00781          |
| Mann-Whitney   | 0.00108          | 0.00108          |
| Student t-test | 2.43e-10         | 1.27e-10         |

Comparison with `xlm-roberta-base`:

| test           | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon       | 0.00781          | 0.00781          |
| Mann-Whitney   | 0.00107          | 0.00108          |
| Student t-test | 4.83e-11         | 5.61e-11         |

## Use examples

```python
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    "bert", "5roop/bcms-bertic-frenk-hate", use_cuda=True,
)

predictions, logit_output = model.predict(['Ne odbacujem da će RH primiti još migranata iz Afganistana, no neće biti novog vala',
                                           "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni' "])

predictions
### Output:
### array([0, 0])
```

## Citation

If you use the model, please cite the following paper on which the original model is based:

```
@inproceedings{ljubesic-lauc-2021-bertic,
    title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
    author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
    pages = "37--42",
}
```

and the dataset used for fine-tuning:

```
@misc{ljubešić2019frenk,
    title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
    author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
    year={2019},
    eprint={1906.02045},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/1906.02045}
}
```
Dereklehmkuhler07/rl_course_vizdoom_health_gathering_supreme
Dereklehmkuhler07
"2024-05-27T15:44:18"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-05-27T03:15:29"
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 3.84 +/- 0.00
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Dereklehmkuhler07/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The original card contained a Colab-local launcher path here; the module below
# is the standard ViZDoom enjoy script shipped with sample-factory.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
# Same substitution as above, using sample-factory's ViZDoom train script.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
datasciathlete/mdeberta-v3-base-open-ner-aihub
datasciathlete
"2024-02-23T02:43:49"
5
0
transformers
[ "transformers", "safetensors", "deberta-v2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-02-23T02:42:45"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
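The template leaves the getting-started code unfilled; given this row's `token-classification` pipeline tag, a minimal, unverified sketch would be:

```python
from transformers import pipeline

# Pipeline task taken from this row's pipeline_tag; the label set and language
# coverage are not documented in the card, so treat outputs accordingly.
ner = pipeline(
    "token-classification",
    model="datasciathlete/mdeberta-v3-base-open-ner-aihub",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face was founded in New York."))
```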
MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
"2024-01-28T14:24:30"
58
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
"2024-01-28T14:13:28"
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
- dataset:Open-Orca/SlimOrca
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1)

## Description
[MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1).

## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

### Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
TheLongTran/Dialogue-For-ChatBot
TheLongTran
"2025-02-21T14:42:43"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-21T13:56:17"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
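The card is an empty template; judging only from the row's tags (`gpt2`, `conversational`, `text-generation`), a hedged smoke test might look like this. The expected dialogue format is undocumented, so the plain `User:`/`Bot:` prompt is an assumption.

```python
from transformers import pipeline

# Model id and task come from this dataset row; the dialogue format is a guess.
chat = pipeline("text-generation", model="TheLongTran/Dialogue-For-ChatBot")
print(chat("User: Hi, how are you?\nBot:", max_new_tokens=40)[0]["generated_text"])
```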
catrabbitbear/taxi-v3-attempt2
catrabbitbear
"2023-06-20T15:54:57"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-20T15:54:55"
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-attempt2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # the course notebooks use gym/gymnasium for environments

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="catrabbitbear/taxi-v3-attempt2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
minhtrannnn/9cf7b0fb-d200-4025-9b5a-6e54183ec18a
minhtrannnn
"2025-02-01T10:42:37"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-01T10:28:19"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 9cf7b0fb-d200-4025-9b5a-6e54183ec18a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 22587293b779bc55_train_data.json ds_type: json format: custom path: /workspace/input_data/22587293b779bc55_train_data.json type: field_input: content field_instruction: title field_output: summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhtrannnn/9cf7b0fb-d200-4025-9b5a-6e54183ec18a hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/22587293b779bc55_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6863ca7d-dba1-4f20-86fd-f4e741cc8950 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6863ca7d-dba1-4f20-86fd-f4e741cc8950 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9cf7b0fb-d200-4025-9b5a-6e54183ec18a This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.7204

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5683        | 0.6809 | 200  | 0.7204          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Asmamalica/my_awesome_model
Asmamalica
"2024-01-19T16:08:03"
4
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-19T14:30:38"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: Asmamalica/my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Asmamalica/my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0025 - Validation Loss: 0.0345 - Train Accuracy: 0.9935 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9910, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.0494 | 0.0275 | 0.991 | 0 | | 0.0091 | 0.0282 | 0.993 | 1 | | 0.0025 | 0.0345 | 0.9935 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
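The card omits inference code; since this row carries a `tf` tag and training used Keras, a TensorFlow sketch follows. The label mapping is not documented, so the snippet reads it from the model config rather than assuming names.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Asmamalica/my_awesome_model")
model = TFAutoModelForSequenceClassification.from_pretrained("Asmamalica/my_awesome_model")

inputs = tokenizer("This movie was great!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])

# The card does not document the label set; id2label from the config is the
# only source of truth here.
print(model.config.id2label[pred])
```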
JuanMa360/Reinforce-Cartpole8
JuanMa360
"2023-12-09T07:54:41"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-12-09T07:51:14"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole8 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 240.40 +/- 13.46 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
nomsgadded/Segments
nomsgadded
"2023-08-30T01:49:48"
210
0
transformers
[ "transformers", "safetensors", "segformer", "image-segmentation", "vision", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2023-08-24T01:04:00"
--- license: other base_model: nvidia/mit-b0 tags: - image-segmentation - vision - generated_from_trainer model-index: - name: Segments results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Segments This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 ### Training results ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
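No usage snippet accompanies the card; given the `image-segmentation` pipeline tag, one plausible sketch is below. The input file name is a placeholder, not an asset referenced by the card.

```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="nomsgadded/Segments")

# Any local path or URL to a street-scene photo works here; this file name is a
# placeholder assumption.
results = segmenter("sidewalk_example.jpg")
for r in results:
    print(r["label"])  # each entry also carries a PIL mask under r["mask"]
```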
kyujinpy/ko-platypus-kiwi-13B
kyujinpy
"2023-11-23T04:09:32"
2,288
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-Orca-Platypus-kiwi", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-14T12:10:14"
---
language:
- ko
datasets:
- kyujinpy/KOR-Orca-Platypus-kiwi
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of Media Group Human & Forest Co., Ltd. and Markr Co., Ltd.**

**The license is `cc-by-nc-sa-4.0`.**

# **KOR-Orca-Platypus-kiwi🥝**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Model Architecture**
ko-platypus-kiwi-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I used [kyujinpy/KOR-Orca-Platypus-kiwi](https://huggingface.co/datasets/kyujinpy/KOR-Orca-Platypus-kiwi).

# Model comparisons

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **ko-platypus-kiwi-13B🥝** | 48.97 | 42.41 | 54.29 | 41.98 | 40.05 | **66.12** |

# Implementation Code

```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/ko-platypus-kiwi-13B"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
Nisk36/SFT_both_lr5
Nisk36
"2025-01-30T13:17:03"
39
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-30T13:13:01"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
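Another empty template; the row's `conversational` tag suggests a chat-template call, sketched below under the (unverified) assumption that the tokenizer ships a chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Nisk36/SFT_both_lr5")
model = AutoModelForCausalLM.from_pretrained("Nisk36/SFT_both_lr5", device_map="auto")

messages = [{"role": "user", "content": "Summarize what SFT means in one sentence."}]
# Assumes the tokenizer defines a chat template; fall back to a plain prompt if not.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```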
lesso04/5474cf30-2148-4447-b605-c5caa1105425
lesso04
"2025-03-16T18:52:35"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "region:us" ]
null
"2025-03-16T16:59:56"
--- library_name: peft base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B tags: - axolotl - generated_from_trainer model-index: - name: 5474cf30-2148-4447-b605-c5caa1105425 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d88646bdc2f5ed23_train_data.json ds_type: json format: custom path: /workspace/input_data/d88646bdc2f5ed23_train_data.json type: field_input: init_response field_instruction: init_prompt field_output: critic_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso04/5474cf30-2148-4447-b605-c5caa1105425 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000204 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/d88646bdc2f5ed23_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 40 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 006af62d-6365-41f8-8647-7e1d4e655660 wandb_project: 04a wandb_run: your_name wandb_runid: 006af62d-6365-41f8-8647-7e1d4e655660 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5474cf30-2148-4447-b605-c5caa1105425 This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.1869

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: fused AdamW (PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0008 | 1    | 2.5413          |
| 0.1901        | 0.3823 | 500  | 0.1869          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32
ABrinkmann
"2022-04-13T15:45:07"
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-04-13T13:54:18"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 32 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 251 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 26, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 16, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 256, 'out_features': 32, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
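The card notes the embeddings suit clustering and semantic search; here is a short follow-up using the standard sentence-transformers similarity utility (nothing model-specific is assumed beyond the architecture shown above).

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32")

# Note the model truncates at max_seq_length=16 tokens (see the architecture above),
# so it is best suited to short strings such as titles or attribute values.
emb = model.encode(["cheap usb cable", "inexpensive USB cord", "leather office chair"])
print(util.cos_sim(emb[0], emb[1]).item())  # high similarity expected
print(util.cos_sim(emb[0], emb[2]).item())  # low similarity expected
```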
AdnaneIsMe/oas_lora_model_v4
AdnaneIsMe
"2025-04-11T15:08:26"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-11T15:08:23"
PavanDeepak/ppo-Huggy
PavanDeepak
"2023-03-18T23:00:47"
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2023-03-18T23:00:40"
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: PavanDeepak/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-GPT3_BASEL_FULL-notrescaled_70000
DragosGorduza
"2024-04-09T10:35:50"
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-09T10:34:30"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 51857 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
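As a small follow-up to the HuggingFace Transformers snippet above, one way to compare the two example sentences is to L2-normalize the pooled embeddings and take their dot product (equivalent to cosine similarity); this continues directly from the variables defined there:

```python
import torch.nn.functional as F

# L2-normalize so the dot product equals cosine similarity
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
similarity = (sentence_embeddings[0] @ sentence_embeddings[1]).item()
print(f"Cosine similarity: {similarity:.4f}")
```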
jlbaker361/kmeans-test-ddpo
jlbaker361
"2024-03-22T15:19:30"
0
0
null
[ "region:us" ]
null
"2024-03-02T03:37:15"
--- {} --- # DDPO trained model num_epochs=3 train_gradient_accumulation_steps=1 sample_num_steps=2 sample_batch_size=2 train_batch_size=2 sample_num_batches_per_epoch=2 based on stabilityai/stable-diffusion-2-base and then trained from None
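A minimal inference sketch, assuming the repository stores a complete Stable Diffusion pipeline (if only fine-tuned components were pushed, they would need to be loaded into the stabilityai/stable-diffusion-2-base pipeline instead); the prompt is invented for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes a full pipeline was pushed to the repo
pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/kmeans-test-ddpo", torch_dtype=torch.float16
).to("cuda")

image = pipe("a landscape painting").images[0]  # illustrative prompt
image.save("sample.png")
```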
JulianVelandia/Llama-3.2-1B-unal-instruct-ft-gguf
JulianVelandia
"2025-03-12T04:54:45"
75
2
null
[ "safetensors", "gguf", "llama", "nlp", "instruct", "fine-tuning", "unal", "causal-lm", "es", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-07T05:13:21"
--- license: apache-2.0 language: - es tags: - nlp - instruct - llama - fine-tuning - unal - causal-lm pretty_name: Llama 3.2-1B UNAL Instruct FT size_categories: - 1B<n<10B task_categories: - text-generation - causal-lm --- # **Llama 3.2-1B UNAL Instruct FT** ## **Description** A model based on **Meta-Llama-3.2-1B**, fine-tuned with **LoRA** for text-generation tasks in Spanish. It was trained on the [Grade Works UNAL Dataset Instruct](https://huggingface.co/datasets/JulianVelandia/unal-repository-dataset-instruct) dataset, which contains questions and answers derived from undergraduate theses at the Universidad Nacional de Colombia. Training was run on **Google Colab Free** with a GPU, in **deferred** sessions, and took approximately **7 hours**. ## **Notebook** https://github.com/julianVelandia/FinetuningLLMGradeWorksUNALDatasetInstruct ## **Features** - **Base model**: Meta-Llama-3.2-1B. - **Fine-tuning technique**: LoRA (Low-Rank Adaptation). - **Training format**: Instruction tuning on question-answer pairs. - **Supported languages**: Spanish. - **Size**: 1B parameters. ## **Dataset used** The model was trained on the [Grade Works UNAL Dataset Instruct](https://huggingface.co/datasets/JulianVelandia/unal-repository-dataset-instruct) dataset, which contains: - **16,700 records** of question-answer pairs. - **Source**: Undergraduate theses from the Universidad Nacional de Colombia. - **Format**: `prompt` (question), `completion` (answer), `fragment` (source text). ## **License** Apache 2.0
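A minimal usage sketch with llama-cpp-python, assuming one of the repo's GGUF files is used; the filename below is a glob placeholder, so check the repository's file list for the actual quant name:

```python
from llama_cpp import Llama

# filename is a glob placeholder; pick a concrete quant from the repo's files
llm = Llama.from_pretrained(
    repo_id="JulianVelandia/Llama-3.2-1B-unal-instruct-ft-gguf",
    filename="*.gguf",
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "¿Qué temas cubren los trabajos de grado?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```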
jamesjunyuguo/llama-3-1-8b-math-orca-qlora-10k-ep1
jamesjunyuguo
"2025-04-09T21:54:14"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "endpoints_compatible", "region:us" ]
null
"2025-02-28T05:13:23"
--- base_model: Meta-Llama/Meta-Llama-3.1-8B library_name: transformers model_name: llama-3-1-8b-math-orca-qlora-10k-ep1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama-3-1-8b-math-orca-qlora-10k-ep1 This model is a fine-tuned version of [Meta-Llama/Meta-Llama-3.1-8B](https://huggingface.co/Meta-Llama/Meta-Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jamesjunyuguo/llama-3-1-8b-math-orca-qlora-10k-ep1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.1 - Transformers: 4.46.3 - Pytorch: 2.4.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sang-kyung/bottle
sang-kyung
"2023-07-04T06:54:36"
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-02T08:05:05"
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base instance_prompt: a photo of sks bottle tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - sang-kyung/bottle This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks bottle using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: True.
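A minimal inference sketch with diffusers, using the instance prompt the weights were trained on; the scene description is invented for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sang-kyung/bottle", torch_dtype=torch.float16
).to("cuda")

# "sks bottle" is the trained instance token; the rest of the prompt is illustrative
image = pipe("a photo of sks bottle on a wooden table").images[0]
image.save("sks_bottle.png")
```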
ducnt2406/simpletuner-lora
ducnt2406
"2025-03-30T15:23:10"
0
0
null
[ "region:us" ]
null
"2025-03-30T15:23:10"
tuanio/1-epochs167.0-char-based-freeze_cnn-dropout0.1
tuanio
"2023-10-29T11:59:42"
157
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-10-29T08:07:38"
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer metrics: - wer model-index: - name: 1-epochs167.0-char-based-freeze_cnn-dropout0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 1-epochs167.0-char-based-freeze_cnn-dropout0.1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 40 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 167.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:---:| | 17.348 | 13.89 | 2500 | 26.1164 | 1.0 | | 0.0 | 27.78 | 5000 | nan | 1.0 | | 0.0 | 41.67 | 7500 | nan | 1.0 | | 0.0 | 55.56 | 10000 | nan | 1.0 | | 0.0 | 69.44 | 12500 | nan | 1.0 | | 0.0 | 83.33 | 15000 | nan | 1.0 | | 0.0 | 97.22 | 17500 | nan | 1.0 | | 0.0 | 111.11 | 20000 | nan | 1.0 | | 0.0 | 125.0 | 22500 | nan | 1.0 | | 0.0 | 138.89 | 25000 | nan | 1.0 | | 0.0 | 152.78 | 27500 | nan | 1.0 | | 0.0 | 166.67 | 30000 | nan | 1.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.14.1
scriptmoney/Qwen-Qwen1.5-0.5B-1717088456
scriptmoney
"2024-05-30T17:01:43"
152
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-30T17:00:57"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OddTheGreat/Gaijin_12B-Q8_0-GGUF
OddTheGreat
"2025-02-15T08:51:33"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "RP", "roleplay", "ERP", "Creative", "dark", "llama-cpp", "gguf-my-repo", "en", "ru", "base_model:OddTheGreat/Gaijin_12B", "base_model:quantized:OddTheGreat/Gaijin_12B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-15T08:50:37"
--- base_model: OddTheGreat/Gaijin_12B library_name: transformers tags: - mergekit - merge - RP - roleplay - ERP - Creative - dark - llama-cpp - gguf-my-repo language: - en - ru --- # OddTheGreat/Gaijin_12B-Q8_0-GGUF This model was converted to GGUF format from [`OddTheGreat/Gaijin_12B`](https://huggingface.co/OddTheGreat/Gaijin_12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OddTheGreat/Gaijin_12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo OddTheGreat/Gaijin_12B-Q8_0-GGUF --hf-file gaijin_12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo OddTheGreat/Gaijin_12B-Q8_0-GGUF --hf-file gaijin_12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo OddTheGreat/Gaijin_12B-Q8_0-GGUF --hf-file gaijin_12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo OddTheGreat/Gaijin_12B-Q8_0-GGUF --hf-file gaijin_12b-q8_0.gguf -c 2048 ```
memevis/SG12
memevis
"2025-03-03T14:34:08"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-03T14:28:44"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GiKAGraphy/ProductLlama_V2
GiKAGraphy
"2024-11-07T06:22:18"
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-30T00:29:33"
--- license: apache-2.0 language: - en base_model: - meta-llama/Llama-3.1-8B-Instruct pipeline_tag: text-generation tags: - text-generation-inference - transformers - unsloth - llama ---
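Since the card gives no usage snippet, here is a minimal text-generation sketch, assuming the model follows its Llama-3.1-8B-Instruct base chat template; the example prompt is invented:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="GiKAGraphy/ProductLlama_V2", device_map="auto")

messages = [{"role": "user", "content": "Describe wireless earbuds in one sentence."}]  # illustrative
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # last turn is the model's reply
```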
dalopeza98/intel_image_classification_fastai
dalopeza98
"2024-03-09T16:49:04"
0
0
fastai
[ "fastai", "region:us" ]
null
"2024-03-09T16:49:00"
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
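A minimal loading sketch using the fastai integration in huggingface_hub; the image path is a hypothetical placeholder:

```python
from huggingface_hub import from_pretrained_fastai

# Load the learner directly from the Hub
learn = from_pretrained_fastai("dalopeza98/intel_image_classification_fastai")

# "scene.jpg" is a placeholder path to a local image
pred_class, pred_idx, probs = learn.predict("scene.jpg")
print(pred_class, float(probs[pred_idx]))
```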
mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF
mradermacher
"2025-01-04T11:17:10"
25
0
transformers
[ "transformers", "gguf", "Safetensors", "text-generation-inference", "merge", "en", "base_model:MaziyarPanahi/YamshadowStrangemerges_32_Experiment28Experiment26", "base_model:quantized:MaziyarPanahi/YamshadowStrangemerges_32_Experiment28Experiment26", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-01-04T11:09:07"
--- base_model: MaziyarPanahi/YamshadowStrangemerges_32_Experiment28Experiment26 language: - en library_name: transformers license: apache-2.0 model_creator: MaziyarPanahi model_name: YamshadowStrangemerges_32_Experiment28Experiment26 quantized_by: mradermacher tags: - Safetensors - text-generation-inference - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Experiment28Experiment26 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF/resolve/main/YamshadowStrangemerges_32_Experiment28Experiment26.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
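As a complement to the GGUF usage pointer above, a minimal sketch with llama-cpp-python, using the Q4_K_M file from the table; the prompt is illustrative:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/YamshadowStrangemerges_32_Experiment28Experiment26-GGUF",
    filename="YamshadowStrangemerges_32_Experiment28Experiment26.Q4_K_M.gguf",
)

out = llm("The capital of France is", max_tokens=16)  # illustrative prompt
print(out["choices"][0]["text"])
```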
onnx-community/BiRefNet_512x512-ONNX
onnx-community
"2025-03-07T11:34:08"
19
0
transformers.js
[ "transformers.js", "onnx", "birefnet", "background-removal", "mask-generation", "Dichotomous Image Segmentation", "Camouflaged Object Detection", "Salient Object Detection", "image-segmentation", "base_model:ZhengPeng7/BiRefNet_512x512", "base_model:quantized:ZhengPeng7/BiRefNet_512x512", "license:mit", "region:us" ]
image-segmentation
"2025-03-02T17:20:16"
--- library_name: transformers.js tags: - background-removal - mask-generation - Dichotomous Image Segmentation - Camouflaged Object Detection - Salient Object Detection repo_url: https://github.com/ZhengPeng7/BiRefNet pipeline_tag: image-segmentation license: mit base_model: - ZhengPeng7/BiRefNet_512x512 --- <h1 align="center">Bilateral Reference for High-Resolution Dichotomous Image Segmentation</h1> <div align='center'> <a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=0uPb8MMAAAAJ' target='_blank'><strong>Dehong Gao</strong></a><sup> 2</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=kakwJ5QAAAAJ' target='_blank'><strong>Deng-Ping Fan</strong></a><sup> 1*</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=9cMQrVsAAAAJ' target='_blank'><strong>Li Liu</strong></a><sup> 3</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=qQP6WXIAAAAJ' target='_blank'><strong>Jorma Laaksonen</strong></a><sup> 4</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=pw_0Z_UAAAAJ' target='_blank'><strong>Wanli Ouyang</strong></a><sup> 5</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=stFCYOAAAAAJ' target='_blank'><strong>Nicu Sebe</strong></a><sup> 6</sup> </div> <div align='center'> <sup>1 </sup>Nankai University&ensp; <sup>2 </sup>Northwestern Polytechnical University&ensp; <sup>3 </sup>National University of Defense Technology&ensp; <sup>4 </sup>Aalto University&ensp; <sup>5 </sup>Shanghai AI Laboratory&ensp; <sup>6 </sup>University of Trento&ensp; </div> | *DIS-Sample_1* | *DIS-Sample_2* | | :------------------------------: | :-------------------------------: | | <img src="https://drive.google.com/thumbnail?id=1ItXaA26iYnE8XQ_GgNLy71MOWePoS2-g&sz=w400" /> | <img src="https://drive.google.com/thumbnail?id=1Z-esCujQF_uEa_YJjkibc3NUrW4aR_d4&sz=w400" /> | For more information, check out the official [repository](https://github.com/ZhengPeng7/BiRefNet). 
## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @huggingface/transformers ``` You can then use the model for image matting, as follows: ```js import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers'; // Load model and processor const model_id = 'onnx-community/BiRefNet_512x512-ONNX'; const model = await AutoModel.from_pretrained(model_id, { dtype: 'fp32' }); const processor = await AutoProcessor.from_pretrained(model_id); // Load image from URL const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024'; const image = await RawImage.fromURL(url); // Pre-process image const { pixel_values } = await processor(image); // Predict alpha matte const { output_image } = await model({ input_image: pixel_values }); // Save output mask const mask = await RawImage.fromTensor(output_image[0].sigmoid().mul(255).to('uint8')).resize(image.width, image.height); mask.save('mask.png'); ``` | Input image | Output mask | |--------|--------| | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cRw4xmlhgkCZ72qJckrps.png) | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/pcUeTxkZKPRVfT5oDn0Un.png) | ## Citation ``` @article{BiRefNet, title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation}, author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu}, journal={CAAI Artificial Intelligence Research}, year={2024} } ``` --- Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
Chan2chan1/solar_test240517_4bit
Chan2chan1
"2024-05-17T02:38:26"
79
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-05-16T07:02:09"
--- license: cc-by-nc-nd-4.0 ---
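The card itself carries only a license, so here is a minimal loading sketch, assuming the checkpoint was serialized with its bitsandbytes 4-bit config (as the repo tags suggest); the prompt is invented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Chan2chan1/solar_test240517_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Pre-quantized bitsandbytes weights load without an extra quantization config
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)  # illustrative
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```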
HerbertAIHug/NLP_Capstone
HerbertAIHug
"2023-10-30T13:03:06"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:huawei-noah/TinyBERT_General_4L_312D", "base_model:finetune:huawei-noah/TinyBERT_General_4L_312D", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-10-26T13:03:45"
--- base_model: huawei-noah/TinyBERT_General_4L_312D tags: - generated_from_trainer metrics: - accuracy model-index: - name: NLP_Capstone results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP_Capstone This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3176 - Accuracy: 0.8671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5286 | 0.2 | 500 | 0.4169 | 0.8251 | | 0.4299 | 0.4 | 1000 | 0.4137 | 0.8332 | | 0.3856 | 0.6 | 1500 | 0.3714 | 0.8512 | | 0.3692 | 0.8 | 2000 | 0.3176 | 0.8671 | | 0.3604 | 1.0 | 2500 | 0.3869 | 0.8635 | | 0.3457 | 1.2 | 3000 | 0.4126 | 0.8631 | | 0.3291 | 1.41 | 3500 | 0.4272 | 0.8675 | | 0.3481 | 1.61 | 4000 | 0.3754 | 0.8775 | | 0.3253 | 1.81 | 4500 | 0.4293 | 0.8649 | | 0.3306 | 2.01 | 5000 | 0.3807 | 0.8789 | | 0.2849 | 2.21 | 5500 | 0.4291 | 0.8803 | | 0.2824 | 2.41 | 6000 | 0.4058 | 0.8797 | | 0.279 | 2.61 | 6500 | 0.4521 | 0.8761 | | 0.2944 | 2.81 | 7000 | 0.4986 | 0.8747 | | 0.3347 | 3.01 | 7500 | 0.4364 | 0.8815 | | 0.2622 | 3.21 | 8000 | 0.5368 | 0.8703 | | 0.2494 | 3.41 | 8500 | 0.4795 | 0.8854 | | 0.2645 | 3.61 | 9000 | 0.4795 | 0.8864 | | 0.243 | 3.81 | 9500 | 0.4570 | 0.8874 | | 0.2399 | 4.01 | 10000 | 0.5219 | 0.8795 | | 0.2103 | 4.22 | 10500 | 0.5325 | 0.8775 | | 0.2196 | 4.42 | 11000 | 0.5629 | 0.8729 | | 0.2494 | 4.62 | 11500 | 0.5087 | 0.8826 | | 0.1968 | 4.82 | 12000 | 0.5332 | 0.8779 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
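A minimal inference sketch for the fine-tuned classifier; the label set is not documented in the card, so the input sentence and returned labels are illustrative only:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HerbertAIHug/NLP_Capstone")
print(classifier("This is an example sentence to classify."))  # e.g. [{'label': ..., 'score': ...}]
```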
GirishKumarBK/dialogue_Summary
GirishKumarBK
"2024-03-24T10:45:52"
106
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-03-24T10:44:10"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lif31up/contextual-chat-bot
lif31up
"2025-02-11T16:08:08"
0
0
null
[ "region:us" ]
null
"2024-08-11T09:04:18"
`nltk` `torch` `yaml` `tqdm` * **task**: classifying the context of the input string; the model then responds based on it. * **dataset**: integrated with `yaml` format documents. ## Contextual Understanding Chatbot using Bag-of-Words (BoW) This project is a contextual understanding chatbot that uses Bag-of-Words (BoW) for text processing. The chatbot is designed to understand and respond to user inputs by converting text data into numerical representations, allowing the model to process and match patterns in conversations. The chatbot leverages the Bag-of-Words (BoW) technique to represent user inputs as word-frequency vectors. The model is trained to respond contextually based on pre-defined intents or keywords. This approach focuses on understanding the user's intent and matching it to an appropriate response. ### Data Preprocessing (BoW) * Tokenization: Splitting the text into individual words. * Lowercasing: Converting all text to lowercase for uniformity. * Stop-word Removal: Removing common words (e.g., "the", "and", "is") that do not contribute to meaningful context. * Stemming: Reducing words to their root form (e.g., "running" to "run"). * Bag-of-Words (BoW): Converting text into a fixed-length vector, where each element represents the frequency of a particular word from a vocabulary. --- ## Instructions ### Evaluate Model Use this command to evaluate your trained model on a specified dataset. ``` python run.py --path <path> ``` * `<path>`: Path to the model or dataset you want to evaluate. ### Train Model Train your model on a specified training dataset and set the number of iterations for training. ``` python run.py train --path <trainset_path> --save-to <model_path> --iters <number_iterations> ``` * `<trainset_path>`: Path to your training data file (e.g., train.json or CSV). * `<model_path>`: Path where the trained model will be saved. * `<number_iterations>`: Number of training iterations to run. This controls how many times the model will learn from the data. ### Chat with Model This command allows you to chat with the trained model. The chatbot will respond to your input based on its training. ``` python run.py chat --path <model_path> --response <responses_path> ``` * `<model_path>`: Path to the trained model you wish to interact with. * `<responses_path>`: Path to the responses file that contains predefined responses associated with various intents.
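To make the preprocessing steps above concrete, here is a minimal sketch of the described BoW pipeline using nltk; it is an illustration of the technique, not the project's actual code:

```python
import numpy as np
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# One-time setup: nltk.download('punkt'); nltk.download('stopwords')
stemmer = PorterStemmer()
stop_words = set(stopwords.words('english'))

def preprocess(text):
    # Tokenize, lowercase, drop stop words and punctuation, then stem
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

def bag_of_words(tokens, vocabulary):
    # Fixed-length word-frequency vector over the vocabulary
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    for token in tokens:
        if token in vocabulary:
            vec[vocabulary.index(token)] += 1
    return vec

# Tiny illustrative "intent" corpus
patterns = ["hello there", "how are you running"]
vocabulary = sorted({t for p in patterns for t in preprocess(p)})
print(bag_of_words(preprocess("Hello, how is the running going?"), vocabulary))
```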
ngeg2015/wav2vec2-base-finetuned-ks
ngeg2015
"2023-01-08T01:38:49"
160
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
"2022-12-31T13:58:47"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.1020 - Accuracy: 0.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7887 | 1.0 | 399 | 0.7190 | 0.7682 | | 0.3784 | 2.0 | 798 | 0.2387 | 0.9737 | | 0.2159 | 3.0 | 1197 | 0.1335 | 0.9785 | | 0.1809 | 4.0 | 1596 | 0.1088 | 0.9798 | | 0.1527 | 5.0 | 1995 | 0.1020 | 0.9815 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
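A minimal inference sketch for the keyword-spotting classifier; "speech.wav" is a placeholder path for a short spoken-command clip:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ngeg2015/wav2vec2-base-finetuned-ks")
print(classifier("speech.wav", top_k=3))  # top-3 predicted keywords with scores
```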
lesso01/3774ac68-3edd-462b-9f77-75410fc76dc2
lesso01
"2025-04-11T15:20:04"
0
0
null
[ "region:us" ]
null
"2025-04-11T14:51:29"

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. The dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help users who want to work with a large number of Model Cards from the Hub. We hope it will support research into Model Cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.
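As a quick start, the dataset can be loaded with the datasets library. A minimal sketch, assuming the single split described below is named "train":

```python
# Minimal sketch: load the full dump with the datasets library.
from datasets import load_dataset

cards = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
print(cards.column_names)   # inspect the available metadata fields
print(cards[0]["modelId"])  # assumes a "modelId" column, as in the preview rows
```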

Dataset Details

Uses

There are a number of potential uses for this dataset, including the following (a small text-mining sketch appears after the list):

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
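As a hedged illustration of the first item, here is a small text-mining sketch using scikit-learn; it assumes the dataset loads as shown earlier and that the card body lives in a "card" column:

```python
# Hedged sketch: surface the most frequent terms across a sample of cards.
from datasets import load_dataset
from sklearn.feature_extraction.text import CountVectorizer

cards = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
texts = [row["card"] for row in cards.select(range(1000)) if row["card"]]

vectorizer = CountVectorizer(stop_words="english", max_features=20)
vectorizer.fit(texts)
print(vectorizer.get_feature_names_out())  # the 20 most frequent terms in the sample
```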

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people working with model cards, and in particular to support research into model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly; that option may be preferable if you have a very specific use case or require a different format.
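For that direct route, the huggingface_hub client library can fetch and parse an individual card. A small sketch; the model id here is just an arbitrary public example:

```python
# Sketch: fetch one card directly instead of using this dataset.
from huggingface_hub import ModelCard

card = ModelCard.load("facebook/wav2vec2-base")  # any public model id works
print(card.data)        # parsed YAML metadata (license, tags, datasets, ...)
print(card.text[:500])  # the markdown body of the card
```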

Source Data

The source data consists of the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be present in the model card directory.

Data Collection and Processing

The data is downloaded by a cron job that runs daily.
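The exact collection pipeline is not published; as a rough illustration only, a daily job could be approximated with the huggingface_hub client library like this:

```python
# Hedged sketch of what a daily collection job might do; the real pipeline
# may differ. Uses only public huggingface_hub APIs.
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
for model in api.list_models(limit=10):  # a real job would paginate over all models
    try:
        readme_path = hf_hub_download(repo_id=model.id, filename="README.md")
        # ...parse the YAML front matter and append a row to the dataset...
    except Exception:
        continue  # many repositories have no README.md
```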

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This covers a broad range of the community, from large companies to individual researchers. This repository does not record who created each model card, although that information can be gathered from the Hugging Face Hub API.
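For example, the author of any individual repository can be looked up via the Hub API (a minimal sketch; the model id is an arbitrary public example):

```python
# Sketch: recover the creator of a repository, which this dataset omits.
from huggingface_hub import HfApi

info = HfApi().model_info("facebook/wav2vec2-base")
print(info.author)  # e.g. "facebook"
```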

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, some may do so. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community and we have no control over their content. We do not review the content of the model cards, and we make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by giving examples of bias in either the training data or the model's responses; as a result, this dataset may itself contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
