Dataset columns (type and observed range):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 18:27:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 500 values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 18:23:41 |
| card | string | length 11 to 1.01M |
TheBloke/orca_mini_3B-GGML
TheBloke
2023-06-25T10:25:04Z
0
59
transformers
[ "transformers", "en", "dataset:psmathur/alpaca_orca", "dataset:psmathur/dolly-v2_orca", "dataset:psmathur/WizardLM_Orca", "arxiv:2306.02707", "license:mit", "region:us" ]
null
2023-06-24T22:33:56Z
--- inference: false license: mit language: - en library_name: transformers datasets: - psmathur/alpaca_orca - psmathur/dolly-v2_orca - psmathur/WizardLM_Orca --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Pankaj Mathur's Orca Mini 3B GGML These files are GGML format model files for [Pankaj Mathur's Orca Mini 3B](https://huggingface.co/psmathur/orca_mini_3b). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b) ## Prompt template: ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Response: ``` or ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Input: input ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatible with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These cannot be provided with Open Llama 3B models at this time, due to an issue in llama.cpp. This is being worked on in the llama.cpp repo. More issues here: https://github.com/ggerganov/llama.cpp/issues/1919 Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | orca-mini-3b.ggmlv3.q4_0.bin | q4_0 | 4 | 1.93 GB | 4.43 GB | Original llama.cpp quant method, 4-bit. | | orca-mini-3b.ggmlv3.q4_1.bin | q4_1 | 4 | 2.14 GB | 4.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | orca-mini-3b.ggmlv3.q5_0.bin | q5_0 | 5 | 2.36 GB | 4.86 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | orca-mini-3b.ggmlv3.q5_1.bin | q5_1 | 5 | 2.57 GB | 5.07 GB | Original llama.cpp quant method, 5-bit. 
Even higher accuracy, resource usage and slower inference. | | orca-mini-3b.ggmlv3.q8_0.bin | q8_0 | 8 | 3.64 GB | 6.14 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m orca-mini-3b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n" ``` If you're able to use full GPU offloading, you should use `-t 1` to get best performance. If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. Thank you to all my generous patrons and donaters! 
<!-- footer end --> # Original model card: Pankaj Mathur's Orca Mini 3B # orca_mini_3b An [OpenLLaMa-3B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset construction approaches. # Dataset We built explain-tuned versions of the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707). We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets. This helps the student model (i.e. this model) learn the ***thought*** process from the teacher model, ChatGPT (gpt-3.5-turbo-0301). Please see the example usage below for how the **System** prompt is added before each **instruction**. # Training The training configuration is provided in the table below. Training ran on 8x A100 (80 GB) GPUs and took around 4 hours at a cost of $48 using [Lambda Labs](https://lambdalabs.com). We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the excellent [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca). Here are some of the parameters used during training: ||| |:-------------:|:-------------:| |*batch_size*|64| |*train_micro_batch_size_per_gpu*|4| |*gradient_accumulation_steps*|2| |*Learning rate*|2e-5| |*Max length*|1024| |*Epochs*|3| |*Optimizer*|AdamW| # Example Usage Below is an example of how to use this model: ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer # Hugging Face model_path model_path = 'psmathur/orca_mini_3b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) # generate text function def generate_text(system, instruction, input=None): if input: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n" else: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n" tokens = tokenizer.encode(prompt) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to('cuda') instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50} length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length+instance['generate_len'], use_cache=True, do_sample=True, top_p=instance['top_p'], temperature=instance['temperature'], top_k=instance['top_k'] ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) return f'[!] Response: {string}' # Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.' instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project' print(generate_text(system, instruction)) ``` ``` [!] 
Response: Dear Sam Altman, I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way. While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools. Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly. I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future. Thank you for your consideration. Sincerely, [Your Name] ``` **P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at [email protected]** Next Goals: 1) Try more data like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions) 2) Provide more options for Text generation UI. (maybe https://github.com/oobabooga/text-generation-webui) 3) Provide 4bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here) Limitations & Biases: This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Disclaimer: The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. Citation: If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or applications, please kindly cite using the following BibTeX: ``` @misc{wizardlm_alpaca_dolly_orca_open_llama_3b, author = {Pankaj Mathur}, title = {wizardlm_alpaca_dolly_orca_open_llama_3b: An explain tuned OpenLLaMA-3b model on custom wizardlm, alpaca, & dolly datasets}, year = {2023}, publisher = {GitHub, HuggingFace}, journal = {GitHub repository, HuggingFace repository}, howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_3b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_3b}}, } ``` ``` @software{openlm2023openllama, author = {Xinyang Geng and Hao Liu}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @misc{openalpaca, author = {Yixuan Su and Tian Lan and Deng Cai}, title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}}, } ``` ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. 
Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ```
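A minimal local-inference sketch for the GGML files listed in the card above, assuming the `orca-mini-3b.ggmlv3.q5_0.bin` file has been downloaded and that an older llama-cpp-python release with GGML (pre-GGUF) support is installed:

```python
from llama_cpp import Llama

# Assumes a llama-cpp-python version that still reads GGML v3 files (pre-GGUF).
llm = Llama(model_path="orca-mini-3b.ggmlv3.q5_0.bin", n_ctx=2048, n_threads=8)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```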
kbondar17/test-trainer
kbondar17
2023-06-25T10:12:41Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-25T10:06:32Z
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: test-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4009 - F1: 0.6363 - Roc Auc: 0.7682 - Accuracy: 0.6079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.0 | 125 | 0.2975 | 0.5710 | 0.7129 | 0.4693 | | No log | 2.0 | 250 | 0.3742 | 0.6226 | 0.7621 | 0.6013 | | No log | 3.0 | 375 | 0.4009 | 0.6363 | 0.7682 | 0.6079 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
dhruvil237/userutterance_classification_verplus
dhruvil237
2023-06-25T10:05:26Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "doi:10.57967/hf/0811", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-05T12:20:52Z
--- license: mit tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: userutterance_classification_verplus results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9619354838709677 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # userutterance_classification_verplus This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2270 - Accuracy: 0.9619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.0219 | 0.21 | 200 | 4.9813 | 0.0077 | | 4.8915 | 0.42 | 400 | 4.5741 | 0.1155 | | 4.2736 | 0.63 | 600 | 3.5359 | 0.4719 | | 3.2701 | 0.84 | 800 | 2.4291 | 0.7429 | | 2.3578 | 1.05 | 1000 | 1.5793 | 0.8413 | | 1.5695 | 1.26 | 1200 | 1.0029 | 0.8994 | | 1.0412 | 1.47 | 1400 | 0.6475 | 0.9187 | | 0.7034 | 1.68 | 1600 | 0.4439 | 0.9303 | | 0.501 | 1.89 | 1800 | 0.3400 | 0.9381 | | 0.3187 | 2.1 | 2000 | 0.2793 | 0.9439 | | 0.2185 | 2.31 | 2200 | 0.2538 | 0.9490 | | 0.1669 | 2.52 | 2400 | 0.2210 | 0.9523 | | 0.1081 | 2.73 | 2600 | 0.2225 | 0.9519 | | 0.1004 | 2.94 | 2800 | 0.2136 | 0.9555 | | 0.0665 | 3.14 | 3000 | 0.2078 | 0.9561 | | 0.0509 | 3.35 | 3200 | 0.2155 | 0.9568 | | 0.05 | 3.56 | 3400 | 0.2107 | 0.9581 | | 0.0527 | 3.77 | 3600 | 0.2171 | 0.9568 | | 0.0447 | 3.98 | 3800 | 0.2128 | 0.9590 | | 0.0259 | 4.19 | 4000 | 0.2099 | 0.9587 | | 0.0279 | 4.4 | 4200 | 0.2179 | 0.9577 | | 0.0176 | 4.61 | 4400 | 0.2191 | 0.9574 | | 0.0288 | 4.82 | 4600 | 0.2216 | 0.9590 | | 0.0328 | 5.03 | 4800 | 0.2237 | 0.9606 | | 0.0154 | 5.24 | 5000 | 0.2241 | 0.9616 | | 0.0157 | 5.45 | 5200 | 0.2265 | 0.9603 | | 0.023 | 5.66 | 5400 | 0.2276 | 0.9613 | | 0.0178 | 5.87 | 5600 | 0.2270 | 0.9619 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
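A minimal inference sketch for the intent classifier above, assuming the standard Transformers text-classification pipeline; the example utterance is illustrative only:

```python
from transformers import pipeline

# Loads the fine-tuned DeBERTa-v3 intent classifier from the Hub.
classifier = pipeline("text-classification", model="dhruvil237/userutterance_classification_verplus")

# Prints the predicted clinc_oos intent label and its score.
print(classifier("what is the balance on my savings account"))
```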
joohwan/2222333l-gd
joohwan
2023-06-25T10:05:13Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-25T08:10:32Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: 2222333l-gd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2222333l-gd This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0984 - Wer: 13.1908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0206 | 0.18 | 500 | 0.1634 | 17.8738 | | 0.0496 | 0.36 | 1000 | 0.1403 | 12.4680 | | 0.0516 | 0.54 | 1500 | 0.1123 | 10.2394 | | 0.0755 | 0.72 | 2000 | 0.0984 | 13.1908 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
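A minimal transcription sketch for the fine-tuned Whisper checkpoint above, assuming a local audio file `sample.wav`; the card does not state the target language of this fine-tune:

```python
from transformers import pipeline

# Whisper fine-tune loaded as an automatic-speech-recognition pipeline (requires ffmpeg for file decoding).
asr = pipeline("automatic-speech-recognition", model="joohwan/2222333l-gd")
print(asr("sample.wav")["text"])
```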
GarbageCollector/MusicVOX
GarbageCollector
2023-06-25T10:02:39Z
0
2
null
[ "stable-diffusion", "safetensors", "text-to-image", "license:unknown", "region:us" ]
text-to-image
2023-06-01T14:48:41Z
--- tags: - stable-diffusion - safetensors pipeline_tag: text-to-image license: unknown --- <p>this place is my garbage collection.<br> some models are not better than others.</p> <p>___SAMPLES___</p> <p>Area_51<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Area_51.jpg"/> </p> <p>Cat_Landmine<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Cat_Landmine.jpg"/> </p> <p>GREEN_sky<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_GREEN_sky.jpg"/> </p> <p>Hocus_Pocus<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Hocus_Pocus.jpg"/> </p> <p>Mixture_LivingDEAD<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Mixture_LivingDEAD.jpg"/> </p> <p>RAINBOW_sky<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_RAINBOW_sky.jpg"/> </p> <p>blackend_sky<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_blackend_sky.jpg"/> </p> <p>leaving_tomorrow<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_leaving_tomorrow.jpg"/> </p> <p>NEUROMANCE<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_NEUROMANCE.jpg"/> </p> <p>Karnevil<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Karnevil.jpg"/> </p> <p>Sinktank<br> <img src="https://huggingface.co/GarbageCollector/MusicVOX/resolve/main/samples/sample_Sinktank.jpg"/> </p>
mrizalf7/xlm-r-qa-squad-retrained
mrizalf7
2023-06-25T09:57:29Z
102
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-06-13T19:17:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2 This model is a fine-tuned version of [mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4](https://huggingface.co/mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.4152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 6.1326 | | No log | 2.0 | 2 | 6.1326 | | No log | 3.0 | 3 | 5.4152 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
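A minimal extractive-QA sketch for the XLM-RoBERTa checkpoint above (fine-tuned on Indonesian SQuAD-style data); the question/context pair is illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/xlm-r-qa-squad-retrained")
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Indonesia dan menjabat dari tahun 1945 hingga 1967.",
)
# The pipeline returns the answer span, its score, and character offsets.
print(result["answer"], result["score"])
```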
VilohitT/question_answering_majorproject
VilohitT
2023-06-25T09:46:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-25T09:46:01Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
bogdancazan/t5-small-newsela-biendata-with-domain-adaptation
bogdancazan
2023-06-25T09:45:44Z
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-19T11:56:49Z
training_args = TrainingArguments(
    output_dir='t5-small-newsela-biendata-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)

| Step | Training Loss |
|:----:|:-------------:|
| 500 | 35.466600 |
| 1000 | 25.795400 |
| 1500 | 10.923200 |
| 2000 | 4.515500 |

TrainOutput(global_step=2320, training_loss=16.92537920721646, metrics={'train_runtime': 628.0033, 'train_samples_per_second': 472.418, 'train_steps_per_second': 3.694, 'total_flos': 0.0, 'train_loss': 16.92537920721646, 'epoch': 20.0})
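The card above only records training arguments and losses; here is a minimal, hypothetical inference sketch for this T5 checkpoint using the generic text2text pipeline (the expected input format is not documented in the card):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="bogdancazan/t5-small-newsela-biendata-with-domain-adaptation")

# Input format is an assumption; the card does not document a task prefix.
print(generator("The committee deliberated extensively before reaching a consensus.", max_length=64))
```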
lucasbertola/ppo-LunarLander-v2
lucasbertola
2023-06-25T09:29:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-19T11:40:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 295.14 +/- 14.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption based on the usual huggingface_sb3 naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumption; check the repository's files for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="lucasbertola/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
sd-concepts-library/pokemon-raichu-sd-model
sd-concepts-library
2023-06-25T09:26:29Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:mit", "region:us" ]
null
2023-06-25T09:26:28Z
--- license: mit base_model: stabilityai/stable-diffusion-2 --- ### Pokemon Raichu - SD model on Stable Diffusion This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cat-toy> 0](https://huggingface.co/sd-concepts-library/pokemon-raichu-sd-model/resolve/main/concept_images/0.jpeg) ![<cat-toy> 1](https://huggingface.co/sd-concepts-library/pokemon-raichu-sd-model/resolve/main/concept_images/1.jpeg) ![<cat-toy> 2](https://huggingface.co/sd-concepts-library/pokemon-raichu-sd-model/resolve/main/concept_images/2.jpeg) ![<cat-toy> 3](https://huggingface.co/sd-concepts-library/pokemon-raichu-sd-model/resolve/main/concept_images/3.jpeg)
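A sketch of loading this textual-inversion concept with diffusers, assuming the repository contains a standard `learned_embeds` file and that the placeholder token matches the `<cat-toy>` shown in the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model taken from the card's metadata; the placeholder token below follows the card's example.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/pokemon-raichu-sd-model")

image = pipe("a drawing of a <cat-toy> on a skateboard").images[0]
image.save("raichu.png")
```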
ahishamm/vit-base-HAM-10000-sharpened
ahishamm
2023-06-25T09:17:26Z
190
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-25T08:42:48Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-base-HAM-10000-sharpened results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-HAM-10000-sharpened This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/HAM_db_sharpened dataset. It achieves the following results on the evaluation set: - Loss: 0.4392 - Accuracy: 0.8529 - Recall: 0.8529 - F1: 0.8529 - Precision: 0.8529 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.7303 | 0.2 | 100 | 0.7828 | 0.7197 | 0.7197 | 0.7197 | 0.7197 | | 0.7198 | 0.4 | 200 | 0.7519 | 0.7377 | 0.7377 | 0.7377 | 0.7377 | | 0.7519 | 0.6 | 300 | 0.7125 | 0.7541 | 0.7541 | 0.7541 | 0.7541 | | 0.6657 | 0.8 | 400 | 0.6623 | 0.7571 | 0.7571 | 0.7571 | 0.7571 | | 0.5896 | 1.0 | 500 | 0.5964 | 0.7835 | 0.7835 | 0.7835 | 0.7835 | | 0.515 | 1.2 | 600 | 0.5745 | 0.8015 | 0.8015 | 0.8015 | 0.8015 | | 0.4318 | 1.4 | 700 | 0.5061 | 0.8200 | 0.8200 | 0.8200 | 0.8200 | | 0.4299 | 1.6 | 800 | 0.5239 | 0.8075 | 0.8075 | 0.8075 | 0.8075 | | 0.4793 | 1.8 | 900 | 0.5366 | 0.8125 | 0.8125 | 0.8125 | 0.8125 | | 0.4202 | 2.0 | 1000 | 0.4882 | 0.8244 | 0.8244 | 0.8244 | 0.8244 | | 0.2105 | 2.2 | 1100 | 0.5330 | 0.8234 | 0.8234 | 0.8234 | 0.8234 | | 0.2597 | 2.4 | 1200 | 0.4604 | 0.8369 | 0.8369 | 0.8369 | 0.8369 | | 0.2261 | 2.59 | 1300 | 0.4893 | 0.8409 | 0.8409 | 0.8409 | 0.8409 | | 0.1853 | 2.79 | 1400 | 0.4793 | 0.8494 | 0.8494 | 0.8494 | 0.8494 | | 0.1739 | 2.99 | 1500 | 0.4392 | 0.8529 | 0.8529 | 0.8529 | 0.8529 | | 0.0629 | 3.19 | 1600 | 0.4941 | 0.8584 | 0.8584 | 0.8584 | 0.8584 | | 0.0802 | 3.39 | 1700 | 0.4974 | 0.8613 | 0.8613 | 0.8613 | 0.8613 | | 0.0712 | 3.59 | 1800 | 0.5416 | 0.8594 | 0.8594 | 0.8594 | 0.8594 | | 0.0365 | 3.79 | 1900 | 0.5318 | 0.8574 | 0.8574 | 0.8574 | 0.8574 | | 0.0591 | 3.99 | 2000 | 0.5344 | 0.8574 | 0.8574 | 0.8574 | 0.8574 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
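A minimal inference sketch for the skin-lesion classifier above, assuming a local dermatoscopic image `lesion.jpg`:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-base-HAM-10000-sharpened")

# Returns the top-3 predicted HAM10000 lesion classes with scores.
print(classifier("lesion.jpg", top_k=3))
```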
melowhy/whyde
melowhy
2023-06-25T09:15:10Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-06-25T09:15:10Z
--- license: bigscience-openrail-m ---
thinkamconnect/ThinkSites
thinkamconnect
2023-06-25T09:06:30Z
0
0
null
[ "text-to-video", "license:mit", "region:us" ]
text-to-video
2023-06-25T09:03:27Z
--- license: mit pipeline_tag: text-to-video ---
tnvmadhav/food_classifier
tnvmadhav
2023-06-25T09:06:06Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-25T08:32:22Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tnvmadhav/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tnvmadhav/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4025 - Validation Loss: 0.3368 - Train Accuracy: 0.91 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8090 | 1.6205 | 0.817 | 0 | | 1.2350 | 0.8021 | 0.879 | 1 | | 0.7254 | 0.5466 | 0.899 | 2 | | 0.5023 | 0.3927 | 0.914 | 3 | | 0.4025 | 0.3368 | 0.91 | 4 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
BaoKien/my_awesome_qa_model
BaoKien
2023-06-25T08:36:36Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-22T15:57:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.9515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.889 | 1.0 | 4380 | 0.8729 | | 0.6755 | 2.0 | 8760 | 0.8529 | | 0.4681 | 3.0 | 13140 | 0.9515 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
joohwan/whisper-small-gd
joohwan
2023-06-25T08:10:27Z
79
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-25T05:51:56Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-gd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-gd This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1180 - Wer: 14.2298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0723 | 0.09 | 250 | 0.2013 | 22.6924 | | 0.044 | 0.18 | 500 | 0.1826 | 27.3905 | | 0.1209 | 0.27 | 750 | 0.1705 | 27.2700 | | 0.0973 | 0.36 | 1000 | 0.1462 | 15.1182 | | 0.0941 | 0.45 | 1250 | 0.1322 | 15.6603 | | 0.076 | 0.54 | 1500 | 0.1258 | 18.3557 | | 0.0967 | 0.63 | 1750 | 0.1203 | 14.8020 | | 0.0757 | 0.72 | 2000 | 0.1180 | 14.2298 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Lajonbot/polish-alpaca7B-lora
Lajonbot
2023-06-25T07:41:13Z
0
0
null
[ "tensorboard", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:openrail", "region:us" ]
null
2023-05-01T07:08:31Z
--- license: openrail datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish language: - pl ---
Lajonbot/Amazon-LightGPT-pl-qlora
Lajonbot
2023-06-25T07:40:56Z
0
0
null
[ "tensorboard", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:openrail", "region:us" ]
null
2023-05-29T06:22:37Z
--- license: openrail datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish language: - pl ---
Lajonbot/falcon-7b-instruct-pl-lora
Lajonbot
2023-06-25T07:38:22Z
0
0
null
[ "tensorboard", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:openrail", "region:us" ]
null
2023-06-12T06:13:24Z
--- license: openrail datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish language: - pl ---
Lajonbot/stablelm-base-alpha-3b-instruct-pl-lora
Lajonbot
2023-06-25T07:37:23Z
0
0
null
[ "tensorboard", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:openrail", "region:us" ]
null
2023-06-15T06:13:44Z
--- license: openrail datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish language: - pl ---
Lajonbot/polish-gpt2-small-instruct
Lajonbot
2023-06-25T07:36:40Z
114
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-20T19:33:30Z
--- license: openrail datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish language: - pl ---
Davlan/bert-base-multilingual-cased-finetuned-swahili
Davlan
2023-06-25T07:32:51Z
568
3
transformers
[ "transformers", "pytorch", "tf", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: sw datasets: --- # bert-base-multilingual-cased-finetuned-swahili ## Model description **bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Swahili-language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Swahili corpus. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili') >>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba \"hakuna uhalifu ulitendwa\"") [{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa', 'score': 0.31642526388168335, 'token': 10728, 'token_str': 'Paris'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa', 'score': 0.15753623843193054, 'token': 57557, 'token_str': 'Rwanda'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa', 'score': 0.07211585342884064, 'token': 57824, 'token_str': 'Burundi'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa', 'score': 0.029844321310520172, 'token': 10688, 'token_str': 'France'}, {'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa', 'score': 0.0265930388122797, 'token': 38052, 'token_str': 'Senegal'}] ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | sw_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 | 89.36 ### BibTeX entry and citation info By David Adelani ``` ```
Davlan/bert-base-multilingual-cased-finetuned-igbo
Davlan
2023-06-25T07:32:13Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: ig datasets: --- # bert-base-multilingual-cased-finetuned-igbo ## Model description **bert-base-multilingual-cased-finetuned-igbo** is an **Igbo BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Igbo-language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an Igbo corpus. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-igbo') >>> unmasker("Reno Omokri na Gọọmentị [MASK] enweghị ihe ha ga-eji hiwe ya bụ mmachi.") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) + [Igbo CC-100](http://data.statmt.org/cc-100/) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | ig_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 85.11 | 86.75 ### BibTeX entry and citation info By David Adelani ``` ```
boleshirish/Marathi_GPT2_Pretrained
boleshirish
2023-06-25T07:29:30Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "mr", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-17T06:29:25Z
--- license: mit tags: - generated_from_trainer model-index: - name: Marathi_GPT2_Pretrained results: [] language: - mr metrics: - accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Marathi_GPT2_Pretrained - Loss: 1.8264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8893 | 0.27 | 500 | 2.5366 | | 2.3286 | 0.53 | 1000 | 2.1366 | | 2.005 | 0.8 | 1500 | 1.8264 | ### Framework versions - Transformers 4.18.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.12.1
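A minimal generation sketch for the Marathi GPT-2 checkpoint above; the Marathi prompt is illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="boleshirish/Marathi_GPT2_Pretrained")

# Prompt roughly means "Today in Pune"; sampling settings are arbitrary.
print(generator("आज पुण्यामध्ये", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```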
AdShenoy/Bart-samsum-fastai
AdShenoy
2023-06-25T07:20:59Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-06-24T06:53:20Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Davlan/xlm-roberta-large-masakhaner
Davlan
2023-06-25T07:12:21Z
135
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "token-classification", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - amh - hau - ibo - kin - lug - luo - pcm - swa - wol - yor - multilingual datasets: - masakhaner --- # xlm-roberta-large-masakhaner ## Model description **xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is an *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner") model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria" ner_results = nlp(example) print(ner_results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location ## Training procedure This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on the MasakhaNER corpus. 
## Eval results on Test set (F-score) language|F1-score -|- amh |75.76 hau |91.75 ibo |86.26 kin |76.38 lug |84.64 luo |80.65 pcm |89.55 swa |89.48 wol |70.70 yor |82.05 ### BibTeX entry and citation info ``` @article{adelani21tacl, title = {Masakha{NER}: Named Entity Recognition for African Languages}, author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei}, journal = {Transactions of the Association for Computational Linguistics (TACL)}, month = {}, url = {https://arxiv.org/abs/2103.11811}, year = {2021} } ```
ssj1989/open_llama_001
ssj1989
2023-06-25T07:02:13Z
0
0
null
[ "code", "en", "license:mit", "region:us" ]
null
2023-06-25T07:00:59Z
--- license: mit language: - en tags: - code ---
psymon/QLoRa-polyglot-5.8b-translate
psymon
2023-06-25T06:53:47Z
1
0
peft
[ "peft", "region:us" ]
null
2023-06-25T02:54:27Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
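A sketch of loading this adapter for inference with the quantization settings listed above; the base model id is read from the adapter config rather than hard-coded, since the card does not name it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

adapter_id = "psymon/QLoRa-polyglot-5.8b-translate"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Mirrors the bitsandbytes settings recorded in the card (4-bit, nf4, double quantization, bf16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
```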
teoha/openai-whisper-medium-PeftType.LORA-colab
teoha
2023-06-25T06:51:18Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-25T06:51:11Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
zanafi/sentiment_model
zanafi
2023-06-25T06:31:04Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-23T06:53:10Z
--- license: mit tags: - generated_from_trainer datasets: - indonlu metrics: - accuracy - precision - recall - f1 model-index: - name: sentiment_model results: - task: name: Text Classification type: text-classification dataset: name: indonlu type: indonlu config: emot split: validation args: emot metrics: - name: Accuracy type: accuracy value: 0.7363636363636363 - name: Precision type: precision value: 0.7397155596092384 - name: Recall type: recall value: 0.7459489407651173 - name: F1 type: f1 value: 0.741920437379511 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_model This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.7788 - Accuracy: 0.7364 - Precision: 0.7397 - Recall: 0.7459 - F1: 0.7419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.1939 | 1.0 | 221 | 0.8261 | 0.6932 | 0.7203 | 0.7034 | 0.7056 | | 0.6866 | 2.0 | 442 | 0.7925 | 0.725 | 0.7378 | 0.7377 | 0.7346 | | 0.4791 | 3.0 | 663 | 0.7788 | 0.7364 | 0.7397 | 0.7459 | 0.7419 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
sukantan/all-mpnet-base-v2-ftlegal-v3
sukantan
2023-06-25T06:20:52Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "dataset:sukantan/nyaya-st-training", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-06-25T06:20:46Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity datasets: - sukantan/nyaya-st-training --- # sukantan/all-mpnet-base-v2-ftlegal-v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sukantan/all-mpnet-base-v2-ftlegal-v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sukantan/all-mpnet-base-v2-ftlegal-v3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 391 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 391, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
nolanaatama/mlycrsrvc750pchsvrs
nolanaatama
2023-06-25T05:19:58Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-25T04:47:27Z
--- license: creativeml-openrail-m ---
blackmount8/open-llama-13b-open-instruct-ct2-int8_float16
blackmount8
2023-06-25T05:06:24Z
1
0
transformers
[ "transformers", "text-generation", "en", "dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf", "license:cc", "region:us" ]
text-generation
2023-06-24T17:32:32Z
--- inference: false license: cc datasets: - VMware/open-instruct-v1-oasst-dolly-hhrlhf language: - en library_name: transformers pipeline_tag: text-generation --- # blackmount8/open-llama-13B-open-instruct-ct2-int8_float16 Int8_float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2. ## VMware/open-llama-13B-open-instruct Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for `<b>`COMMERCIAL USE `</b>`. `<br>` `<b>` NOTE `</b>` : The model was trained using the Alpaca prompt template `<b>` NOTE `</b>` : Fast tokenizer results in incorrect encoding, set the ``use_fast = False`` parameter, when instantiating the tokenizer ## License - `<b>`Commercially Viable `</b>` - Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0 - Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0 ## Nomenclature - Model : Open-llama - Model Size: 13B parameters - Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf) ## Use in CTranslate2 ``` import ctranslate2 from transformers import AutoTokenizer model_name = "blackmount8/open-llama-13b-open-instruct-ct2-int8_float16" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left") model = ctranslate2.Generator(model_name, device="auto", compute_type="int8_float16") input_text = ["What is the meaning of stonehenge?", "Hello mate!"] input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids] outputs = model.generate_batch(input_tokens, max_length=128) output_tokens = [ ele.sequences_ids[0] for ele in outputs ] output = tokenizer.batch_decode(output_tokens) print(output) ```
Gayathri142214002/t5_qg_1
Gayathri142214002
2023-06-25T04:58:01Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-25T04:53:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5_qg_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_qg_1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.658 | 0.69 | 10 | 1.9854 | | 1.7442 | 1.38 | 20 | 1.6146 | | 1.3456 | 2.07 | 30 | 1.3937 | | 0.9931 | 2.76 | 40 | 1.2447 | | 0.9253 | 3.45 | 50 | 1.1519 | | 0.7154 | 4.14 | 60 | 1.0958 | | 0.6624 | 4.83 | 70 | 1.0645 | | 0.6384 | 5.52 | 80 | 1.0412 | | 0.4889 | 6.21 | 90 | 1.0323 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
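The card does not document the input format expected by this question-generation fine-tune, so the sketch below simply feeds a plain context passage; that prompt layout is an assumption and may need to match whatever template was used during fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Gayathri142214002/t5_qg_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: a raw context passage (the training prompt template is not documented).
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```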
pandalla/datatager_E-Commerce_review_extraction
pandalla
2023-06-25T04:26:25Z
103
27
transformers
[ "transformers", "pytorch", "chatglm", "feature-extraction", "text-classification", "custom_code", "zh", "license:apache-2.0", "region:us" ]
text-classification
2023-06-25T03:26:30Z
---
license: apache-2.0
language:
- zh
pipeline_tag: text-classification
---

# CommentOpinionExtract

This model extracts keywords and core opinions from e-commerce review data.

# Dataset

The model was trained on 5,000 Taobao reviews: GPT-4 was first prompted to extract keywords from the data, the results were cleaned, and ChatGLM was then fine-tuned on them.

# Result

| Original review | Extracted keywords |
| ------------------------------------------------------------ | ---------------------------------------------------------- |
| 好用!!! | 好用、值得推荐、性价比高 |
| 这是第二瓶,我都怀疑是不是买了个假货,包装也都换了,换的质感挺low,用完油油腻腻,第一瓶时候挺清爽,所以续购,没想到第二瓶跟第一瓶完全不一样,用完还闷痘,油腻!不管真假不会回购了 | 假货、包装质感low、油腻腻、闷痘、不回购 |
| 买了两个50的套餐,一个好点的挖勺都不送一个??? | 价格贵、无语 |
| 包装品质:不错 商品气味:普通香 使用效果:一般 同价位不如腊梅精华水…… 。。。。哈哈哈哈哈哈我真的好喜欢这个节目的呢我真的好喜欢这个节目真的是太给力了哟我们的综艺节目都是这么给力的吗。 | 包装品质不错、商品气味普通香、使用效果一般、不如腊梅精华水 |
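Since the underlying model is a ChatGLM fine-tune loaded via `custom_code`, the sketch below assumes the repository's remote code exposes the standard ChatGLM `chat()` helper; if it does not, fall back to plain `generate()`. The example review is illustrative.

```python
from transformers import AutoTokenizer, AutoModel

model_id = "pandalla/datatager_E-Commerce_review_extraction"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda().eval()

review = "这是第二瓶,用完油油腻腻,还闷痘,不会回购了"  # a Chinese e-commerce review
# Assumes the ChatGLM-style chat() interface provided by the custom code.
response, history = model.chat(tokenizer, review, history=[])
print(response)  # expected to be the extracted keywords / opinions
```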
andrewromitti/alzheimer_model_aug_deit5
andrewromitti
2023-06-25T03:58:45Z
193
1
transformers
[ "transformers", "pytorch", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-25T02:14:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: alzheimer_model_aug_deit5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9996875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # alzheimer_model_aug_deit5 This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0012 - Accuracy: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1234 - gradient_accumulation_steps: 10 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5045 | 1.0 | 212 | 0.1414 | 0.9522 | | 0.0779 | 2.0 | 424 | 0.0222 | 0.9961 | | 0.0156 | 3.0 | 637 | 0.0164 | 0.9941 | | 0.0032 | 4.0 | 849 | 0.0044 | 0.9983 | | 0.0004 | 4.99 | 1060 | 0.0012 | 0.9997 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
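A minimal inference sketch for this image classifier follows; the file name is a placeholder for a brain-scan image, and the label names come from whatever classes were present in the (undocumented) image folder used for fine-tuning.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="andrewromitti/alzheimer_model_aug_deit5",
)

# "mri_slice.png" is a placeholder path to a brain-scan image.
for pred in classifier("mri_slice.png"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```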
ardhies/dev
ardhies
2023-06-25T03:55:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-25T03:51:43Z
--- license: creativeml-openrail-m ---
blackmount8/open-llama-7b-open-instruct-ct2-float16
blackmount8
2023-06-25T03:49:04Z
9
0
transformers
[ "transformers", "text-generation", "en", "dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf", "license:cc", "region:us" ]
text-generation
2023-06-24T15:05:27Z
--- inference: false license: cc datasets: - VMware/open-instruct-v1-oasst-dolly-hhrlhf language: - en library_name: transformers pipeline_tag: text-generation --- # blackmount8/open-llama-7B-open-instruct-ct2-float16 Float16 version of [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct), quantized using CTranslate2. ## VMware/open-llama-7B-open-instruct Instruction-tuned version of the fully trained Open LLama 7B model. The model is open for `<b>`COMMERCIAL USE `</b>`. `<br>` `<b>` NOTE `</b>` : The model was trained using the Alpaca prompt template `<b>` NOTE `</b>` : Fast tokenizer results in incorrect encoding, set the ``use_fast = False`` parameter, when instantiating the tokenizer ## License - `<b>`Commercially Viable `</b>` - Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0 - Language Model, ([openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)) is under apache-2.0 ## Nomenclature - Model : Open-llama - Model Size: 7B parameters - Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf) ## Use in CTranslate2 ``` import ctranslate2 from transformers import AutoTokenizer model_name = "blackmount8/open-llama-7b-open-instruct-ct2-float16" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left") model = ctranslate2.Generator(model_name, device="auto", compute_type="float16") input_text = ["What is the meaning of stonehenge?", "Hello mate!"] input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids] outputs = model.generate_batch(input_tokens, max_length=128) output_tokens = [ ele.sequences_ids[0] for ele in outputs ] output = tokenizer.batch_decode(output_tokens) print(output) ```
blackmount8/open-llama-13b-open-instruct-ct2-float16
blackmount8
2023-06-25T03:48:21Z
4
0
transformers
[ "transformers", "text-generation", "en", "dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf", "license:cc", "region:us" ]
text-generation
2023-06-24T16:44:56Z
--- inference: false license: cc datasets: - VMware/open-instruct-v1-oasst-dolly-hhrlhf language: - en library_name: transformers pipeline_tag: text-generation --- # blackmount8/open-llama-13B-open-instruct-ct2-float16 Float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2. ## VMware/open-llama-13B-open-instruct Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for `<b>`COMMERCIAL USE `</b>`. `<br>` `<b>` NOTE `</b>` : The model was trained using the Alpaca prompt template `<b>` NOTE `</b>` : Fast tokenizer results in incorrect encoding, set the ``use_fast = False`` parameter, when instantiating the tokenizer ## License - `<b>`Commercially Viable `</b>` - Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0 - Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0 ## Nomenclature - Model : Open-llama - Model Size: 13B parameters - Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf) ## Use in CTranslate2 ``` import ctranslate2 from transformers import AutoTokenizer model_name = "blackmount8/open-llama-13b-open-instruct-ct2-float16" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left") model = ctranslate2.Generator(model_name, device="auto", compute_type="float16") input_text = ["What is the meaning of stonehenge?", "Hello mate!"] input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids] outputs = model.generate_batch(input_tokens, max_length=128) output_tokens = [ ele.sequences_ids[0] for ele in outputs ] output = tokenizer.batch_decode(output_tokens) print(output) ```
duyhngoc/Wave2Vec2_OV_Vie
duyhngoc
2023-06-25T03:47:48Z
77
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "vivos", "generated_from_trainer", "dataset:vivos", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-21T10:58:36Z
--- license: apache-2.0 tags: - automatic-speech-recognition - vivos - generated_from_trainer datasets: - vivos metrics: - wer model-index: - name: Wave2Vec2_OV_Vie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wave2Vec2_OV_Vie This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the VIVOS - NA dataset. It achieves the following results on the evaluation set: - Loss: 3.5894 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | No log | 0.27 | 100 | 3.9210 | 1.0 | | No log | 0.55 | 200 | 3.4375 | 1.0 | | No log | 0.82 | 300 | 3.4356 | 1.0 | | No log | 1.1 | 400 | 3.4045 | 1.0 | | 4.1866 | 1.37 | 500 | 3.4694 | 1.0 | | 4.1866 | 1.65 | 600 | 3.6266 | 1.0 | | 4.1866 | 1.92 | 700 | 3.5694 | 1.0 | | 4.1866 | 2.19 | 800 | 3.5733 | 1.0 | | 4.1866 | 2.47 | 900 | 3.6381 | 1.0 | | 3.4376 | 2.74 | 1000 | 3.6604 | 1.0 | | 3.4376 | 3.02 | 1100 | 3.5868 | 1.0 | | 3.4376 | 3.29 | 1200 | 3.4988 | 1.0 | | 3.4376 | 3.57 | 1300 | 3.5409 | 1.0 | | 3.4376 | 3.84 | 1400 | 3.4883 | 1.0 | | 3.4365 | 4.12 | 1500 | 3.6125 | 1.0 | | 3.4365 | 4.39 | 1600 | 3.6123 | 1.0 | | 3.4365 | 4.66 | 1700 | 3.5978 | 1.0 | | 3.4365 | 4.94 | 1800 | 3.5693 | 1.0 | | 3.4365 | 5.21 | 1900 | 3.5659 | 1.0 | | 3.4339 | 5.49 | 2000 | 3.6234 | 1.0 | | 3.4339 | 5.76 | 2100 | 3.5997 | 1.0 | | 3.4339 | 6.04 | 2200 | 3.6529 | 1.0 | | 3.4339 | 6.31 | 2300 | 3.5780 | 1.0 | | 3.4339 | 6.58 | 2400 | 3.5844 | 1.0 | | 3.4333 | 6.86 | 2500 | 3.5792 | 1.0 | | 3.4333 | 7.13 | 2600 | 3.5468 | 1.0 | | 3.4333 | 7.41 | 2700 | 3.5691 | 1.0 | | 3.4333 | 7.68 | 2800 | 3.5408 | 1.0 | | 3.4333 | 7.96 | 2900 | 3.5482 | 1.0 | | 3.4294 | 8.23 | 3000 | 3.6070 | 1.0 | | 3.4294 | 8.5 | 3100 | 3.5905 | 1.0 | | 3.4294 | 8.78 | 3200 | 3.6018 | 1.0 | | 3.4294 | 9.05 | 3300 | 3.6326 | 1.0 | | 3.4294 | 9.33 | 3400 | 3.6214 | 1.0 | | 3.4293 | 9.6 | 3500 | 3.6372 | 1.0 | | 3.4293 | 9.88 | 3600 | 3.6215 | 1.0 | | 3.4293 | 10.15 | 3700 | 3.5106 | 1.0 | | 3.4293 | 10.43 | 3800 | 3.5066 | 1.0 | | 3.4293 | 10.7 | 3900 | 3.5352 | 1.0 | | 3.4295 | 10.97 | 4000 | 3.5129 | 1.0 | | 3.4295 | 11.25 | 4100 | 3.6384 | 1.0 | | 3.4295 | 11.52 | 4200 | 3.6019 | 1.0 | | 3.4295 | 11.8 | 4300 | 3.5876 | 1.0 | | 3.4295 | 12.07 | 4400 | 3.6207 | 1.0 | | 3.4252 | 12.35 | 4500 | 3.5998 | 1.0 | | 3.4252 | 12.62 | 4600 | 3.6216 | 1.0 | | 3.4252 | 12.89 | 4700 | 3.6073 | 1.0 | | 3.4252 | 13.17 | 4800 | 3.5567 | 1.0 | | 3.4252 | 13.44 | 4900 | 3.5745 | 1.0 | | 3.4274 | 13.72 | 5000 | 3.5738 | 1.0 | | 3.4274 | 13.99 | 5100 | 3.5914 | 1.0 | | 3.4274 | 14.27 | 5200 | 3.6004 | 1.0 | | 3.4274 | 14.54 | 5300 | 3.5968 | 1.0 | | 3.4274 | 14.81 | 5400 | 3.5908 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
dxyy/monteCarlo-cartpolev1
dxyy
2023-06-25T03:34:19Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-25T03:21:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: monteCarlo-cartpolev1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 470.60 +/- 18.14 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jiuzhan/YoLoV7-dog
jiuzhan
2023-06-25T03:27:58Z
0
0
null
[ "region:us" ]
null
2023-06-25T03:11:07Z
# Dog Breed Recognition

Trained on a dataset of only about 500 images, which is rather small. The data is split into 38 breeds, so each class has very few images. Accuracy is therefore only moderate.
hongyin/awareness-en-zh-0.8b-instruct
hongyin
2023-06-25T03:26:59Z
130
1
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "en", "zh", "arxiv:2302.13173", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-24T23:35:39Z
--- language: - en - zh pipeline_tag: text-generation --- ## hongyin/awareness-en-zh-0.8b-instruct This is a English-Chinese bilingual autoregressive language model based on Bloom with a parameter size of 0.8b. The training process consists of two parts: (1) NTP task. (2) Instruction tuning. It is worth noting that although the model has learned many NLP downstream tasks (excluding multi-round dialogue), the model is not yet able to act like a chatbot. The model can be used as a raw material for alchemy. ## Bibtex entry and citation info Please cite if you find it helpful. ``` @article{zhu2023metaaid, title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models}, author={Zhu, Hongyin}, journal={arXiv preprint arXiv:2302.13173}, year={2023} } ``` --- license: other ---
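The card states the model is instruction-tuned but not yet chat-ready, so the sketch below does plain text generation; the prompt wording is an assumption, not a documented template.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hongyin/awareness-en-zh-0.8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed instruction-style prompt; the exact template used in training is not documented.
prompt = "Translate the following sentence into Chinese: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```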
gaiamolinaro/ppo-SnowballTarget
gaiamolinaro
2023-06-25T02:36:18Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-06-25T02:36:11Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gaiamolinaro/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Rasith/NZappFineTune
Rasith
2023-06-25T02:31:07Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-25T02:30:54Z
--- tags: - generated_from_keras_callback model-index: - name: NZappFineTune results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # NZappFineTune This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Tokenizers 0.13.3
gaiamolinaro/Reinforce-PixelCopter
gaiamolinaro
2023-06-25T02:20:00Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-25T02:19:35Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: -4.90 +/- 0.30 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
marty1885/RWKV-v4-neo-gemini
marty1885
2023-06-25T02:13:45Z
0
0
null
[ "region:us" ]
null
2023-06-25T01:35:12Z
# RWKV V4neo Gemini

RWKV V4neo language models trained completely from scratch, using only textual data collected from the [Gemini protocol](https://gemini.circumlunar.space/).

## License

[PMLL](https://github.com/marty1885/libreml-license/blob/master/PMLL.txt)
mort666/faster-whisper-large-v2-th
mort666
2023-06-25T02:13:04Z
644
8
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
2023-06-18T15:28:40Z
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper large-v2 (Thai Finetune) model for CTranslate2 This repository contains the conversion of the [biodatlab/whisper-th-large-combined](https://huggingface.co/biodatlab/whisper-th-large-combined) which is finetune of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) for the Thai language to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("large-v2") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model biodatlab/whisper-th-large-combined --output_dir faster-whisper-large-v2-th \ --copy_files tokenizer.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/biodatlab/whisper-th-large-combined).**
nbiish/learning-FrozenLake-v1-4x4-noSlip
nbiish
2023-06-25T02:12:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-25T01:54:56Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: learning-FrozenLake-v1-4x4-noSlip
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="nbiish/learning-FrozenLake-v1-4x4-noSlip", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
NasimB/gpt2-dp-mod_aochild
NasimB
2023-06-25T00:27:44Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-24T20:59:20Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-mod_aochild results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-mod_aochild This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.4146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.706 | 0.27 | 500 | 5.6466 | | 5.3616 | 0.54 | 1000 | 5.2058 | | 5.0148 | 0.81 | 1500 | 4.9571 | | 4.7595 | 1.08 | 2000 | 4.8100 | | 4.5716 | 1.35 | 2500 | 4.6947 | | 4.4792 | 1.62 | 3000 | 4.5951 | | 4.3985 | 1.89 | 3500 | 4.5126 | | 4.2203 | 2.16 | 4000 | 4.4747 | | 4.1373 | 2.42 | 4500 | 4.4206 | | 4.1109 | 2.69 | 5000 | 4.3695 | | 4.0827 | 2.96 | 5500 | 4.3285 | | 3.8662 | 3.23 | 6000 | 4.3409 | | 3.863 | 3.5 | 6500 | 4.3058 | | 3.8585 | 3.77 | 7000 | 4.2777 | | 3.8073 | 4.04 | 7500 | 4.2766 | | 3.594 | 4.31 | 8000 | 4.2886 | | 3.6275 | 4.58 | 8500 | 4.2700 | | 3.6373 | 4.85 | 9000 | 4.2436 | | 3.488 | 5.12 | 9500 | 4.2800 | | 3.3669 | 5.39 | 10000 | 4.2884 | | 3.3981 | 5.66 | 10500 | 4.2764 | | 3.3991 | 5.93 | 11000 | 4.2533 | | 3.177 | 6.2 | 11500 | 4.3110 | | 3.1321 | 6.47 | 12000 | 4.3137 | | 3.1491 | 6.73 | 12500 | 4.3083 | | 3.1544 | 7.0 | 13000 | 4.3112 | | 2.8924 | 7.27 | 13500 | 4.3587 | | 2.9109 | 7.54 | 14000 | 4.3634 | | 2.9185 | 7.81 | 14500 | 4.3600 | | 2.8619 | 8.08 | 15000 | 4.3819 | | 2.7347 | 8.35 | 15500 | 4.3980 | | 2.7435 | 8.62 | 16000 | 4.4007 | | 2.752 | 8.89 | 16500 | 4.4012 | | 2.6887 | 9.16 | 17000 | 4.4116 | | 2.6506 | 9.43 | 17500 | 4.4137 | | 2.6588 | 9.7 | 18000 | 4.4144 | | 2.66 | 9.97 | 18500 | 4.4146 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
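A short generation sketch for this GPT-2 fine-tune; the prompt and sampling settings are illustrative, not the settings used during training.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-dp-mod_aochild")
out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```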
alantao912/models
alantao912
2023-06-25T00:07:35Z
13
0
transformers
[ "transformers", "pytorch", "blip", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-06-24T20:19:09Z
--- license: bsd-3-clause tags: - generated_from_trainer datasets: - imagefolder model-index: - name: models results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # models This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4107 - Wer Score: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 9.4536 | 0.05 | 10 | 7.8217 | 41.7753 | | 7.3267 | 0.11 | 20 | 6.6585 | 0.7753 | | 6.2358 | 0.16 | 30 | 5.7758 | 0.5667 | | 5.2862 | 0.22 | 40 | 4.7628 | 0.5419 | | 4.3786 | 0.27 | 50 | 3.9203 | 0.6398 | | 3.5554 | 0.33 | 60 | 3.1482 | 0.5613 | | 2.849 | 0.38 | 70 | 2.5209 | 0.5548 | | 2.3041 | 0.44 | 80 | 2.0561 | 0.5645 | | 1.8999 | 0.49 | 90 | 1.7474 | 0.5645 | | 1.658 | 0.55 | 100 | 1.5722 | 0.5548 | | 1.5238 | 0.6 | 110 | 1.4836 | 0.5591 | | 1.4726 | 0.66 | 120 | 1.4461 | 0.5538 | | 1.4328 | 0.71 | 130 | 1.4285 | 0.5473 | | 1.4211 | 0.77 | 140 | 1.4205 | 0.5559 | | 1.4202 | 0.82 | 150 | 1.4156 | 0.5548 | | 1.4098 | 0.88 | 160 | 1.4129 | 0.5505 | | 1.4124 | 0.93 | 170 | 1.4113 | 0.5548 | | 1.4075 | 0.99 | 180 | 1.4107 | 0.5495 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cpu - Datasets 2.13.1 - Tokenizers 0.13.3
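Since this checkpoint is a fine-tune of `Salesforce/blip-image-captioning-base`, the standard BLIP captioning API should apply; the image path below is a placeholder.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "alantao912/models"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```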
nolestock/RedPajama-INCITE-Base-3B-v1-ft-guanaco-lora
nolestock
2023-06-24T23:48:17Z
1
0
peft
[ "peft", "region:us" ]
null
2023-06-24T23:48:06Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
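The card lists the bitsandbytes settings used during training but not how to load the adapter. The sketch below mirrors those settings; the base model id is an assumption inferred from the adapter's name.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "togethercomputer/RedPajama-INCITE-Base-3B-v1"   # assumed base model
adapter_id = "nolestock/RedPajama-INCITE-Base-3B-v1-ft-guanaco-lora"

# Mirror the 4-bit NF4 quantization config reported in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```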
MonkDalma/xlm-roberta-base-finetuned-panx-all
MonkDalma
2023-06-24T23:45:34Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-24T23:40:04Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1743 - F1: 0.8540 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3004 | 1.0 | 835 | 0.1875 | 0.8178 | | 0.1562 | 2.0 | 1670 | 0.1724 | 0.8430 | | 0.1013 | 3.0 | 2505 | 0.1743 | 0.8540 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
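A minimal inference sketch for this token-classification fine-tune; the German example sentence is illustrative, and the entity labels are expected to follow the PAN-X (WikiANN) scheme.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MonkDalma/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte im Mai das Louvre in Paris."))
```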
mohameddhiab/rate-jokes-bert
mohameddhiab
2023-06-24T23:45:08Z
121
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-24T23:21:00Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: rate-jokes-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rate-jokes-bert This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0871 - F1: 0.0444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 64 | 2.4209 | 0.0028 | | No log | 2.0 | 128 | 2.3785 | 0.0130 | | No log | 3.0 | 192 | 2.3215 | 0.0729 | | No log | 4.0 | 256 | 2.1787 | 0.0444 | | No log | 5.0 | 320 | 2.1038 | 0.0444 | | No log | 6.0 | 384 | 2.0944 | 0.0444 | | No log | 7.0 | 448 | 2.0911 | 0.0444 | | 2.2915 | 8.0 | 512 | 2.0901 | 0.0444 | | 2.2915 | 9.0 | 576 | 2.0892 | 0.0444 | | 2.2915 | 10.0 | 640 | 2.0871 | 0.0444 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
justinhoang/Reinforce-Pixelcopter-PLE-v0
justinhoang
2023-06-24T23:39:19Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T05:29:10Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 39.50 +/- 26.16 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
MonkDalma/xlm-roberta-base-finetuned-panx-it
MonkDalma
2023-06-24T23:37:15Z
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-24T23:34:50Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.it split: validation args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8332647179909428 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2442 - F1: 0.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8366 | 1.0 | 70 | 0.3126 | 0.7444 | | 0.2814 | 2.0 | 140 | 0.2561 | 0.8094 | | 0.1843 | 3.0 | 210 | 0.2442 | 0.8333 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
MonkDalma/xlm-roberta-base-finetuned-panx-fr
MonkDalma
2023-06-24T23:34:41Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-24T23:31:54Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8423885618166527 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2707 - F1: 0.8424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5862 | 1.0 | 191 | 0.3257 | 0.7841 | | 0.2586 | 2.0 | 382 | 0.2732 | 0.8262 | | 0.1714 | 3.0 | 573 | 0.2707 | 0.8424 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
97jmlr/pyramids
97jmlr
2023-06-24T23:32:30Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-06-24T23:32:23Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 97jmlr/pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Monk666/my_awesome_eli5_clm-model
Monk666
2023-06-24T23:28:23Z
63
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-24T23:19:53Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Monk666/my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Monk666/my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.7288 - Validation Loss: 3.7309 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.9096 | 3.7608 | 0 | | 3.7906 | 3.7412 | 1 | | 3.7288 | 3.7309 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.11.0 - Datasets 2.1.0 - Tokenizers 0.13.3
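Because this checkpoint is stored in TensorFlow format, the sketch below uses the TF auto class; the prompt and sampling settings are illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "Monk666/my_awesome_eli5_clm-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```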
bogdancazan/pegasus-base-newsela-biendata-with-domain-adaptation
bogdancazan
2023-06-24T23:13:07Z
0
0
null
[ "region:us" ]
null
2023-06-24T23:05:47Z
training_args = TrainingArguments( output_dir='pegasus-base-wikilarge-newsela-with-domain-adaptation', num_train_epochs=20, warmup_steps=250, per_device_train_batch_size=BATCH_SIZE, weight_decay=0.01, learning_rate=2e-4, # fp16=True, optim="adafactor", ) Step Training Loss 500 4.391800 1000 3.994400 1500 3.009300 2000 2.596300 2500 2.389600 3000 2.328100 3500 2.272900 4000 2.125300 4500 2.019500 5000 2.005500 5500 1.994400 6000 1.853300 6500 1.838700 7000 1.840200 7500 1.822700 8000 1.716600 8500 1.734100 9000 1.739500 9500 1.696000 10000 1.661700 10500 1.672500 11000 1.666800 11500 1.617700 12000 1.611400 12500 1.616300 13000 1.625800 13500 1.567700 14000 1.584600 14500 1.589800 15000 1.574600 15500 1.548300 16000 1.559800 16500 1.562100 17000 1.541600 17500 1.533500 18000 1.538400 18500 1.545700 19000 1.510100 19500 1.522600 20000 1.529100 20500 1.520900 21000 1.501100 21500 1.508400 22000 1.515400 22500 1.500100 23000 1.496700 23500 1.495900 24000 1.505200 24500 1.484400 25000 1.483000 25500 1.491200 26000 1.491900 26500 1.468600 27000 1.479800 27500 1.479600 28000 1.474400 28500 1.466000 29000 1.471700 29500 1.473100 30000 1.463900 30500 1.459300 31000 1.463200 31500 1.466800 32000 1.455300 32500 1.458600 33000 1.455200 33500 1.455500 34000 1.449100 34500 1.451400 35000 1.451200 35500 1.449000 36000 1.443700 36500 1.447600 37000 1.447300 TrainOutput(global_step=37100, training_loss=1.7110925547967382, metrics={'train_runtime': 33376.534, 'train_samples_per_second': 8.889, 'train_steps_per_second': 1.112, 'total_flos': 0.0, 'train_loss': 1.7110925547967382, 'epoch': 20.0})
mohalm/videomae-base-finetuned-ucf101-subset
mohalm
2023-06-24T23:03:09Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-06-24T20:21:51Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0008 - eval_accuracy: 1.0 - eval_runtime: 223.6754 - eval_samples_per_second: 0.443 - eval_steps_per_second: 0.076 - epoch: 1.01 - step: 43 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 164 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
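An inference sketch for this VideoMAE fine-tune; the random frames stand in for 16 frames sampled from a real clip, and the label names depend on the UCF-101 subset used for fine-tuning.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

model_id = "mohalm/videomae-base-finetuned-ucf101-subset"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# 16 RGB frames sampled from a clip; random data here as a stand-in.
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```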
TheBloke/orca_mini_7B-GGML
TheBloke
2023-06-24T22:49:41Z
0
21
transformers
[ "transformers", "en", "dataset:psmathur/alpaca_orca", "dataset:psmathur/dolly-v2_orca", "dataset:psmathur/WizardLM_Orca", "arxiv:2306.02707", "license:mit", "region:us" ]
null
2023-06-24T22:07:15Z
--- inference: false license: mit language: - en library_name: transformers datasets: - psmathur/alpaca_orca - psmathur/dolly-v2_orca - psmathur/WizardLM_Orca --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Pankaj Mathur's Orca Mini 7B GGML These files are GGML format model files for [Pankaj Mathur's Orca Mini 7B](https://huggingface.co/psmathur/orca_mini_7b). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_7b) ## Prompt template: ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Response: ``` or ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Input: input ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatbile with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. 
This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | orca-mini-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | orca-mini-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | orca-mini-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | orca-mini-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | orca-mini-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. | | orca-mini-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | orca-mini-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | orca-mini-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | orca-mini-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | orca-mini-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | orca-mini-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | orca-mini-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | orca-mini-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | orca-mini-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m orca-mini-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n" ``` If you're able to use full GPU offloading, you should use `-t 1` to get best performance. If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Pankaj Mathur's Orca Mini 7B # orca_mini_7b An [OpenLLaMa-7B model](https://github.com/openlm-research/open_llama) model trained on explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying Orca Research Paper dataset construction approaches. 
# Dataset We build explain tuned [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707). We leverage all of the 15 system instructions provided in Orca Research Paper. to generate custom datasets, in contrast to vanilla instruction tuning approaches used by original datasets. This helps student model aka this model to learn ***thought*** process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version). Please see below example usage how the **System** prompt is added before each **instruction**. # Training The training configurations are provided in the table below. The training takes on 8x A100(80G) GPUs and lasts for around 7 Hours for cost of $84 using [Lambda Labs](https://lambdalabs.com) We used DeepSpeed with fully sharded data parallelism, also know as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/) by writing our own fine tunning scripts plus leveraging some of the model training code provided by amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca) Here are some of params used during training: ||| |:-------------:|:-------------:| |*batch_size*|32| |*train_micro_batch_size_per_gpu*|2| |*gradient_accumulation_steps*|2| |*Learning rate*|2e-5| |*Max length*|1024| |*Epochs*|3| |*Optimizer*|AdamW| # Example Usage Below shows an example on how to use this model ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer # Hugging Face model_path model_path = 'psmathur/orca_mini_7b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) #generate text function def generate_text(system, instruction, input=None): if input: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n" else: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n" tokens = tokenizer.encode(prompt) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to('cuda') instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50} length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length+instance['generate_len'], use_cache=True, do_sample=True, top_p=instance['top_p'], temperature=instance['temperature'], top_k=instance['top_k'] ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) return f'[!] Response: {string}' # Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.' instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project' print(generate_text(system, instruction)) ``` ``` [!] Response: Dear Sam Altman, I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way. 
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools. Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly. I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future. Thank you for your consideration. Sincerely, [Your Name] ``` **P.S. I am #opentowork and open to #collaboration; if you can help, please reach out to me at [email protected]** Next Goals: 1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions) 2) Provide more options for text generation UIs (maybe https://github.com/oobabooga/text-generation-webui) 3) Provide a 4-bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here) Limitations & Biases: This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Disclaimer: The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. Citation: If you found wizardlm_alpaca_dolly_orca_open_llama_7b useful in your research or applications, please kindly cite using the following BibTeX: ``` @misc{wizardlm_alpaca_dolly_orca_open_llama_7b, author = {Pankaj Mathur}, title = {wizardlm_alpaca_dolly_orca_open_llama_7b: An explain tuned OpenLLaMA-7b model on custom wizardlm, alpaca, & dolly datasets}, year = {2023}, publisher = {GitHub, HuggingFace}, journal = {GitHub repository, HuggingFace repository}, howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_7b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_7b}}, } ``` ``` @software{openlm2023openllama, author = {Xinyang Geng and Hao Liu}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @misc{openalpaca, author = {Yixuan Su and Tian Lan and Deng Cai}, title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}}, } ``` ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ```
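As a supplement to the llama.cpp section of this card, here is a minimal, untested sketch of loading one of the quantised files with the llama-cpp-python bindings listed above. The file name and sampling parameters are assumptions taken from the Provided Files table and the prompt template; note that only llama-cpp-python versions from the GGML era (before the switch to the GGUF format) can load these `.ggmlv3` files.

```python
# Minimal sketch (assumptions: GGML-era llama-cpp-python, file downloaded locally).
from llama_cpp import Llama

llm = Llama(model_path="orca-mini-3b.ggmlv3.q5_0.bin", n_ctx=2048)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n"
    "### User:\nWrite a haiku about llamas\n\n### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```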
TheBloke/orca_mini_13B-GGML
TheBloke
2023-06-24T22:49:26Z
0
56
transformers
[ "transformers", "en", "dataset:psmathur/alpaca_orca", "dataset:psmathur/dolly-v2_orca", "dataset:psmathur/WizardLM_Orca", "arxiv:2306.02707", "license:mit", "region:us" ]
null
2023-06-24T21:36:11Z
--- inference: false license: mit language: - en library_name: transformers datasets: - psmathur/alpaca_orca - psmathur/dolly-v2_orca - psmathur/WizardLM_Orca --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Pankaj Mathur's Orca Mini 13B GGML These files are GGML format model files for [Pankaj Mathur's Orca Mini 13B](https://huggingface.co/psmathur/orca_mini_13b). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_13B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_13b) ## Prompt template: ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Response: ``` or ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: prompt ### Input: input ### Response: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatbile with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. 
This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | orca-mini-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | orca-mini-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | orca-mini-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | orca-mini-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | orca-mini-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | orca-mini-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | orca-mini-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | orca-mini-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | orca-mini-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | orca-mini-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | orca-mini-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | orca-mini-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | orca-mini-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | orca-mini-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m orca-mini-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n" ``` If you're able to use full GPU offloading, you should use `-t 1` to get best performance. If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Pankaj Mathur's Orca Mini 13B # orca_mini_13b An [OpenLLaMa-13B model](https://github.com/openlm-research/open_llama) model trained on explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying Orca Research Paper dataset construction approaches. 
# Dataset We built explain-tuned versions of the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707). We leverage all 15 of the system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets. This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version). Please see the example usage below for how the **System** prompt is added before each **instruction**. # Training The training configurations are provided in the table below. Training was done on 8x A100 (80G) GPUs and took around 15 hours, at a cost of $180, using [Lambda Labs](https://lambdalabs.com). We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca). Here are some of the params used during training: ||| |:-------------:|:-------------:| |*batch_size*|16| |*train_micro_batch_size_per_gpu*|2| |*gradient_accumulation_steps*|1| |*Learning rate*|2e-5| |*Max length*|1024| |*Epochs*|3| |*Optimizer*|AdamW| # Example Usage Below is an example of how to use this model: ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer # Hugging Face model_path model_path = 'psmathur/orca_mini_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) #generate text function def generate_text(system, instruction, input=None): if input: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n" else: prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n" tokens = tokenizer.encode(prompt) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to('cuda') instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50} length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length+instance['generate_len'], use_cache=True, do_sample=True, top_p=instance['top_p'], temperature=instance['temperature'], top_k=instance['top_k'] ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) return f'[!] Response: {string}' # Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.' instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project' print(generate_text(system, instruction)) ``` ``` [!] Response: Dear Sam Altman, I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way. 
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools. Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly. I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future. Thank you for your consideration. Sincerely, [Your Name] ``` **P.S. I am #opentowork and open to #collaboration; if you can help, please reach out to me at [email protected]** Next Goals: 1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions) 2) Provide more options for text generation UIs (maybe https://github.com/oobabooga/text-generation-webui) 3) Provide a 4-bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here) Limitations & Biases: This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Disclaimer: The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. Citation: If you found wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research or applications, please kindly cite using the following BibTeX: ``` @misc{wizardlm_alpaca_dolly_orca_open_llama_13b, author = {Pankaj Mathur}, title = {wizardlm_alpaca_dolly_orca_open_llama_13b: An explain tuned OpenLLaMA-13b model on custom wizardlm, alpaca, & dolly datasets}, year = {2023}, publisher = {GitHub, HuggingFace}, journal = {GitHub repository, HuggingFace repository}, howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_13b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b}}, } ``` ``` @software{openlm2023openllama, author = {Xinyang Geng and Hao Liu}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @misc{openalpaca, author = {Yixuan Su and Tian Lan and Deng Cai}, title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}}, } ``` ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ```
dar-tau/ppo-LunarLander-v2
dar-tau
2023-06-24T22:12:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T22:06:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.44 +/- 18.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
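The usage section of the card above is left as a TODO; the following is an untested sketch of how such a checkpoint is typically loaded with stable-baselines3 and huggingface_sb3. The file name inside the repo is an assumption (check the repo's file list), and older stable-baselines3 versions use `gym` instead of `gymnasium`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint is stored as "ppo-LunarLander-v2.zip" in the repo.
checkpoint = load_from_hub(repo_id="dar-tau/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent for a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```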
pongjin/en_with_korean_w2v_model_960h
pongjin
2023-06-24T21:57:13Z
82
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:pongjin/en_corpora_parliament_processed", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-07T05:56:38Z
--- license: apache-2.0 datasets: - pongjin/en_corpora_parliament_processed language: - en pipeline_tag: automatic-speech-recognition metrics: - wer --- **This model was built with reference to the following links:** 1) https://huggingface.co/blog/wav2vec2-with-ngram 2) https://huggingface.co/blog/fine-tune-wav2vec2-english Thanks to [patrickvonplaten Patrick von Platen](https://huggingface.co/patrickvonplaten) This is an ASR + LM model that attaches a KenLM 5-gram language model to a model fine-tuned from facebook/wav2vec2-base-960h, in order to improve recognition of English speech spoken by Korean speakers. If you want to use the LM, you must have kenlm installed (https://github.com/kpu/kenlm): ```bash pip install https://github.com/kpu/kenlm/archive/master.zip ``` Training data source: https://aiopen.etri.re.kr/voiceModel >transformers==4.24.0 >huggingface_hub==0.13.2 | wer | epoch | batch | lr | weight_decay | warmup_steps | | --- | --- | --- | --- | --- | --- | | 0.32 | 10 | 16 | 1e-4 | 0.005 | 1000 |
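A minimal, untested sketch of transcribing a 16 kHz English audio file with this checkpoint via the transformers ASR pipeline; if the repository ships a Wav2Vec2 processor with the n-gram decoder, the pipeline will pick it up automatically provided `pyctcdecode` and `kenlm` are installed. The audio file name is a placeholder.

```python
from transformers import pipeline

# Assumes: pip install pyctcdecode (plus kenlm as described above); audio resampled to 16 kHz.
asr = pipeline("automatic-speech-recognition", model="pongjin/en_with_korean_w2v_model_960h")
print(asr("korean_accented_english_sample.wav")["text"])
```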
pongjin/en_with_korean_model_large_960h
pongjin
2023-06-24T21:56:41Z
79
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:pongjin/en_corpora_parliament_processed", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-16T16:55:59Z
--- license: apache-2.0 datasets: - pongjin/en_corpora_parliament_processed language: - en pipeline_tag: automatic-speech-recognition metrics: - wer --- **This model was built with reference to the following links:** 1) https://huggingface.co/blog/wav2vec2-with-ngram 2) https://huggingface.co/blog/fine-tune-wav2vec2-english Thanks to [patrickvonplaten Patrick von Platen](https://huggingface.co/patrickvonplaten) This is an ASR + LM model that attaches a KenLM 5-gram language model to a model fine-tuned from facebook/wav2vec2-large-960h, in order to improve recognition of English speech spoken by Korean speakers. If you want to use the LM, you must have kenlm installed (https://github.com/kpu/kenlm): ```bash pip install https://github.com/kpu/kenlm/archive/master.zip ``` Training data source: https://aiopen.etri.re.kr/voiceModel >transformers==4.24.0 >huggingface_hub==0.13.2 | wer | epoch | batch | lr | weight_decay | warmup_steps | | --- | --- | --- | --- | --- | --- | | 0.17 | 10 | 16 | 1e-4 | 0.005 | 1000 |
dar-tau/q-Taxi-v3
dar-tau
2023-06-24T21:54:15Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T21:54:13Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="dar-tau/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
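The `load_from_hub` helper used in the snippet above comes from the Deep RL course notebooks and is not imported there; a self-contained, untested equivalent might look like the following. The pickle is assumed to be a dict holding at least `env_id` and a Q-table under the keys used by the course template.

```python
import pickle

import gymnasium as gym  # use `gym` instead on older setups
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="dar-tau/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
qtable = model["qtable"]  # assumption: key name follows the course template
```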
SandeepKanao/HL7-FHIR-Model-V1
SandeepKanao
2023-06-24T21:45:41Z
105
1
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-22T12:44:31Z
--- license: apache-2.0 language: - en tags: - Token Classification co2_eq_emissions: 0.0279399890043426 widget: - text: "MSH|^~&|SendingAPP|MYTEST|||20230621090000||ORU^R01|1|P|2.5.1||||||UNICODE PID|1||13579246^^^TEST||Taylor^Michael||19830520|M|||987 Pine St^^Anytown^NY^23456||555-456-7890 PV1|1||bc^^004 OBR|1||13579246|BCD^LEFT Breast Cancer Diagnosis^99MRC||20230621090000|||Taylor^Sarah||20230620090000|||N OBX|1|ST|FINDINGS^Findings^99MRC||Lab report shows asymmetric density in the right breast.|F|||R OBX|2|ST|IMPRESSION^Impression^99MRC||BIRADS category: 4 - Probably left side as issues.|F|||R OBX|3|ST|RECOMMENDATION^Recommendation^99MRC||Follow-up specialist visit in six months.|F|||R" example_title: "example 1" - text: "MSH|^~&|SendingAPP|MYTEST|||20230621090000||ORU^R01|1|P|2.5.1||||||UNICODE PID|1||13579246^^^TEST||Taylor^Michael||19830520|M|||987 Pine St^^Anytown^NY^23456||555-456-7890 PV1|1||bc^^004 OBR|1||13579246|BCD^LEFT Breast Cancer Diagnosis^99MRC||20230621090000|||Taylor^Sarah||20230620090000|||N OBX|1|ST|FINDINGS^Findings^99MRC||Lab report shows asymmetric density in the right breast.|F|||R OBX|2|ST|IMPRESSION^Impression^99MRC||BIRADS category: 4 - Probably left side as issues.|F|||R OBX|3|ST|RECOMMENDATION^Recommendation^99MRC||Follow-up specialist visit in six months.|F|||R" --- ## About the Model An English Named Entity Recognition model, trained on Maccrobat to recognize the bio-medical entities (107 entities) from a given text corpus (case reports etc.). This model was built on top of distilbert-base-uncased. - Dataset: Maccrobat https://figshare.com/articles/dataset/MACCROBAT2018/9764942 - Carbon emission: 0.0279399890043426 Kg - Training time: 30.16527 minutes - GPU used: 1 x GeForce RTX 3060 Laptop GPU Check out the tutorial video for an explanation of this model and the corresponding Python library: https://youtu.be/xpiDPdBpS18 ## Usage The easiest way is to load the inference API from Hugging Face; the second method is through the pipeline object offered by the transformers library. ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all") pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") # pass device=0 if using gpu pipe("""The patient reported no recurrence of palpitations at follow-up 6 months after the ablation.""") ``` ## Author
Neko-Institute-of-Science/Pygmalion-13B-GGML
Neko-Institute-of-Science
2023-06-24T20:52:30Z
0
4
null
[ "region:us" ]
null
2023-06-24T04:18:05Z
GGML version of https://huggingface.co/TehVenom/Pygmalion-13b-Merged. This is also version 3 of GGML, so it works with the latest llama.cpp as of this date.
Jinouga/andy-raconte
Jinouga
2023-06-24T20:46:04Z
31
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-24T20:41:36Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### andy_raconte Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
echrisantus/LunarLander-v2
echrisantus
2023-06-24T20:38:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T20:38:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.33 +/- 24.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
RajkNakka/bert-finetuned-squad
RajkNakka
2023-06-24T20:27:39Z
119
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-24T19:22:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
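Since the card above is only the auto-generated training report, here is a small, untested usage sketch for a BERT model fine-tuned on SQuAD; the question and context strings are placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="RajkNakka/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="bert-finetuned-squad is a version of bert-base-cased fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```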
natope/mT5-tfidf-10pass-all-questions-QA-22-06-2023-without-ams-3epochs-contextonly
natope
2023-06-24T19:57:32Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-24T17:10:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mT5-tfidf-10pass-all-questions-QA-22-06-2023-without-ams-3epochs-contextonly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5-tfidf-10pass-all-questions-QA-22-06-2023-without-ams-3epochs-contextonly This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0701 - Rouge1: 0.141 - Rouge2: 0.0269 - Rougel: 0.1158 - Rougelsum: 0.116 - Gen Len: 18.8553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.812 | 1.0 | 5187 | 2.1633 | 0.0371 | 0.0058 | 0.0317 | 0.0318 | 9.2088 | | 2.5987 | 2.0 | 10374 | 2.0883 | 0.1405 | 0.0277 | 0.1163 | 0.1164 | 18.8388 | | 2.5587 | 3.0 | 15561 | 2.0701 | 0.141 | 0.0269 | 0.1158 | 0.116 | 18.8553 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
lordsauron/ppo-LunarLander-v2
lordsauron
2023-06-24T19:52:43Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T19:52:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 243.33 +/- 18.63 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
echrisantus/ppo-LunarLander-v2
echrisantus
2023-06-24T19:44:53Z
1
0
transformers
[ "transformers", "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-06-18T06:47:42Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -172.09 +/- 58.08 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'echrisantus/ppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
echrisantus/ppo-CartPole-v1
echrisantus
2023-06-24T19:43:52Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T19:40:50Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -166.57 +/- 78.22 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'echrisantus/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
TheHappyDrone/DialoGPT-medium-Nexus-Nova-turing-v2
TheHappyDrone
2023-06-24T19:42:57Z
113
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-03T00:32:57Z
--- language: - en tags: - conversational --- Meet Nexus Nova, the spacecraft salesman! CEO of Nova Space Ships, former cargo hauler, and family man. He's here to offer you the best deals in this corner of the galaxy. Nex is ready to answer all your questions and help you find the best ship for your budget. Trained for 9 epochs on 1100 lines of custom dialogue, written partially by hand and partially with the help of GPT-3.5-turbo. Because of this, this model is not suitable for commercial use and is offered to the public solely for personal enjoyment and experimentation. ## Uses This model is designed to be a conversational roleplaying chatbot for a homemade tabletop RPG campaign. It is not intended to answer general questions or provide factual information. ### Direct Use This model is intended to be run on low-end hardware and hosted as a chatbot. Conversations are intended to be at most 25 turns in total. ### Out-of-Scope Use This model is only intended for short role-play conversations, and fails to perform adequately at any other task. ## Bias, Risks, and Limitations This model is a fine-tuned variant of Microsoft's DialoGPT, so it likely has the same inherent biases as their base model. It may create harmful outputs if pushed to, but is highly unlikely to do so unless prompted. This model is only fine-tuned on hyper-specific dialogue scenarios, and is severely limited in its ability to hold conversations outside of its domain (spaceship sales). This model does exhibit short-term memory issues when not reminded of which ship it is talking about, and may also sometimes hallucinate new ships, prices, features, etc. that were not part of its training. ### Recommendations This model should not be used in any commercial or public settings, or outside of its intended domain as an entertainment/experiment.
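The card describes hosting the model as a chatbot but does not include code; below is an untested sketch using the standard DialoGPT chat loop from the transformers documentation. The generation settings (greedy decoding, `max_length=1000`) are illustrative choices, not the author's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheHappyDrone/DialoGPT-medium-Nexus-Nova-turing-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

chat_history_ids = None
for _ in range(5):  # keep conversations short, as the card recommends
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = (
        torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    )
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print(f"Nex: {reply}")
```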
yashgharat/ppo-LunarLander-v2
yashgharat
2023-06-24T19:31:01Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T19:30:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 285.11 +/- 21.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
magnustragardh/q-FrozenLake-v1-4x4-noSlippery
magnustragardh
2023-06-24T19:28:54Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T19:11:34Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="magnustragardh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Bodolaz/Unit-8.1
Bodolaz
2023-06-24T19:22:34Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T19:22:30Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -141.94 +/- 111.39 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Bodolaz/Unit-8.1' 'batch_size': 512 'minibatch_size': 128} ```
andrewromitti/alzheimer_model_aug
andrewromitti
2023-06-24T18:58:43Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-24T18:29:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: alzheimer_model_aug results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.94453125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # alzheimer_model_aug This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1758 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1234 - gradient_accumulation_steps: 10 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.47 | 1.0 | 212 | 0.1758 | 0.9445 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
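For completeness, a minimal, untested inference sketch for this image classifier; the image path is a placeholder, and the label names depend on the folders used in the `imagefolder` dataset it was trained on.

```python
from transformers import pipeline

clf = pipeline("image-classification", model="andrewromitti/alzheimer_model_aug")
print(clf("example_mri_slice.png"))  # returns a list of {label, score} dicts
```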
t3PbMvBN6SXv/dqn-SpaceInvadersNoFrameskip-v4
t3PbMvBN6SXv
2023-06-24T18:39:47Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T18:39:31Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 575.50 +/- 129.04 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga t3PbMvBN6SXv -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga t3PbMvBN6SXv -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga t3PbMvBN6SXv ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
RogerioFreitas/whisper-medium-portuguese
RogerioFreitas
2023-06-24T18:39:22Z
42
0
transformers
[ "transformers", "pytorch", "jax", "whisper", "automatic-speech-recognition", "generated_from_trainer", "whisper-event", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-24T17:42:08Z
--- language: pt license: apache-2.0 tags: - generated_from_trainer - whisper-event datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: openai/whisper-medium results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 type: mozilla-foundation/common_voice_11_0 config: pt split: test args: pt metrics: - name: Wer type: wer value: 6.598745817992301 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Pierre's Flax Model in Portuguese for Speech Recognition (ASR) This repository is a fork of the original repository created by [Pierre Guillou](https://github.com/piegu). It contains a converted version of OpenAI's Whisper model, fine-tuned on the `common_voice_11_0` dataset for the Portuguese language. ## Results The model achieves the following results on the evaluation set: - Loss: 0.2628 - Word Error Rate (WER): 6.5987 For more information about this model, see this post on the author's blog: [Speech-to-Text & IA | Transcreva qualquer áudio para o português com o Whisper (OpenAI)... sem nenhum custo!](https://medium.com/@pierre_guillou). This model, dubbed "Portuguese Medium Whisper", outperforms OpenAI's original Whisper Medium model at transcribing Portuguese audio (and is even better than the Whisper Large model, which has a WER of 7.1). ## Training | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0333 | 2.07 | 1500 | 0.2073 | 6.9770 | | 0.0061 | 5.05 | 3000 | 0.2628 | 6.5987 | | 0.0007 | 8.03 | 4500 | 0.2960 | 6.6979 | | 0.0004 | 11.0 | 6000 | 0.3212 | 6.6794 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
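A minimal, untested transcription sketch with the transformers ASR pipeline; the audio file name is a placeholder and `chunk_length_s=30` simply matches Whisper's 30-second input window so longer files can be transcribed in chunks.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RogerioFreitas/whisper-medium-portuguese",
    chunk_length_s=30,
)
print(asr("amostra_em_portugues.mp3")["text"])
```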
PlankyxD/Taxi-v3
PlankyxD
2023-06-24T18:28:08Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T18:28:07Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.65 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="PlankyxD/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
PlankyxD/q-FrozenLake-v1-4x4-noSlippery
PlankyxD
2023-06-24T18:25:30Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T18:25:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="PlankyxD/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mkhan149/output_model6
mkhan149
2023-06-24T18:24:45Z
61
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-24T17:51:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mkhan149/output_model6 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mkhan149/output_model6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.7680 - Validation Loss: 4.2181 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 263, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.7680 | 4.2181 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.11.0 - Datasets 2.13.1 - Tokenizers 0.13.3
hopkins/svo-ss10k
hopkins
2023-06-24T17:41:19Z
112
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-24T17:14:08Z
--- tags: - generated_from_trainer datasets: - generator model-index: - name: svo-ss10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # svo-ss10k This model was trained from scratch on the generator dataset. It achieves the following results on the evaluation set: - Loss: 5.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 18 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 10.3545 | 3.83 | 50 | 8.2279 | | 7.7367 | 7.68 | 100 | 6.7753 | | 6.457 | 11.53 | 150 | 6.2465 | | 5.8091 | 15.38 | 200 | 5.9286 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
lewdryuna/A-Others
lewdryuna
2023-06-24T17:34:11Z
0
1
null
[ "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-24T17:34:11Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image duplicated_from: Hemlok/DateMix --- # 『Date Mix & RDt Mix』 ![a](Image/DateMix.png) ![a](Image/1.png) - "Date Mix" is a model created by hierarchical merging based on "anything-v4.5"([andite/anything-v4.0 · Hugging Face](https://huggingface.co/andite/anything-v4.0)). ---- ![a](Image/RDtMix.png) ![a](Image/2.png) - "RDtMix" is a merged realistic model based on "Date Mix". ---- # ◆Discord [Join Discord Server](https://discord.gg/eN6aSWRddT) - The merged model community of Hemlok. ---- # ◆About - This model was created to improve composition and color tone. - If you like realistic illustrations, please use "RDtModel". - Sampler: DDIM or DPM++ SDE Karras - Steps: 50~ - Clipskip: 2 - CFG Scale: 5-12 - Denoise strength: 0.5-0.7(As you like) - Negative prompts should be as few as possible. - vae: As you wish. (Any etc. If not used, color may become lighter) ---- # ◆How to use - Please download the file by yourself and use it with WebUI(AUTOMATIC1111) etc. - Use the f16 version for Colab(T4) or a PC with low RAM. ---- # ◆Colab Note [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Bsm7p_Db5u3IiHr3YjTeKu4jOllfTyqZ?usp=sharing) - (I have not checked the operation but it probably works.) ---- # ◆Comparison <img src="https://i.imgur.com/fW2cPsY.jpg" width="1700" height=""> <img src="https://i.imgur.com/0k48okp.jpg" width="1700" height=""> ``` kawaii, 1girl, looking at viewer, smile ``` ---- <img src="https://i.imgur.com/d44iIvs.jpg" width="1700" height=""> ``` (morning), (school), 1girl, solo, looking at viewer, cowboy shot, (school uniform), smile, black hair, stockings ``` ---- # ◆Sampler & CFG Scale <img src="https://i.imgur.com/vzUBzmw.jpg" width="1700" height=""> ``` (morning), (school), 1girl, solo, looking at viewer, cowboy shot, (school uniform), smile, black hair, stockings ``` ---- # Disclaimer - The creation of SFW and NSFW images is at the discretion of the individual creator. - This model is not a model created to publish NSFW content in public places, etc. ---- ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) (Full text of the license: https://huggingface.co/spaces/CompVis/stable-diffusion-license)
mariololo/Reinforce-CartPole-v1
mariololo
2023-06-24T17:08:18Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T17:08:09Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
paramrah/shoesv1
paramrah
2023-06-24T17:02:54Z
2
0
tf-keras
[ "tf-keras", "image-classification", "region:us" ]
image-classification
2023-06-24T16:15:38Z
--- pipeline_tag: image-classification ---
osiria/deberta-base-italian
osiria
2023-06-24T16:54:52Z
830
2
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "feature-extraction", "it", "arxiv:2111.09543", "arxiv:2010.05609", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-29T17:48:34Z
--- license: mit language: - it --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DeBERTa</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DeBERTa</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mDeBERTa</b> ([mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset) The resulting model has 124M parameters, a vocabulary of 50.256 tokens, and a size of ~500 MB. <h3>Quick usage</h3> ```python from transformers import DebertaV2TokenizerFast, DebertaV2Model tokenizer = DebertaV2TokenizerFast.from_pretrained("osiria/deberta-base-italian") model = DebertaV2Model.from_pretrained("osiria/deberta-base-italian") ``` <h3>References</h3> [1] https://arxiv.org/abs/2111.09543 [2] https://arxiv.org/abs/2010.05609 <h3>License</h3> The model is released under <b>MIT</b> license
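The quick-usage snippet in the card above shows how to load the tokenizer and base model. As a follow-up, here is a hedged sketch of one common way to turn the model's hidden states into a single sentence embedding via mean pooling; the pooling strategy and the Italian example sentence are our own illustration, not something the model card prescribes.

```python
# Hedged follow-up to the card's quick-usage snippet: produce a sentence vector
# by mean-pooling the last hidden states over non-padding tokens.
import torch
from transformers import DebertaV2TokenizerFast, DebertaV2Model

tokenizer = DebertaV2TokenizerFast.from_pretrained("osiria/deberta-base-italian")
model = DebertaV2Model.from_pretrained("osiria/deberta-base-italian")

inputs = tokenizer("Il gatto dorme sul divano.", return_tensors="pt")  # "The cat sleeps on the sofa."
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (1, seq_len, hidden_size)

mask = inputs["attention_mask"].unsqueeze(-1)             # ignore padding positions
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, hidden_size)
print(embedding.shape)
```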
yasamingol/NLP_Model_Classifier
yasamingol
2023-06-24T16:32:02Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-23T15:04:55Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: NLP_Model_Classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# NLP_Model_Classifier

This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5617
- Accuracy: 0.9130
- F1 (weighted): 0.9141
- F1: 0.9130
- Recall: 0.9130
- Precision: 0.9130

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 (weighted) | F1     | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:------:|:------:|:---------:|
| No log        | 1.0   | 270  | 0.4435          | 0.8852   | 0.8838        | 0.8852 | 0.8852 | 0.8852    |
| 0.6248        | 2.0   | 540  | 0.4601          | 0.9019   | 0.9004        | 0.9019 | 0.9019 | 0.9019    |
| 0.6248        | 3.0   | 810  | 0.5067          | 0.9019   | 0.9033        | 0.9019 | 0.9019 | 0.9019    |
| 0.0931        | 4.0   | 1080 | 0.5500          | 0.9130   | 0.9138        | 0.9130 | 0.9130 | 0.9130    |
| 0.0931        | 5.0   | 1350 | 0.5617          | 0.9130   | 0.9141        | 0.9130 | 0.9130 | 0.9130    |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
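The card above reports evaluation metrics but no inference example. Here is a minimal, hedged usage sketch with the `transformers` text-classification pipeline; the example sentence is our own, and the label names returned depend on the fine-tuning setup, which is not documented in the card.

```python
# Hedged usage sketch: the checkpoint id comes from this record; label names
# (e.g. generic "LABEL_0") depend on the undocumented fine-tuning setup.
from transformers import pipeline

classifier = pipeline("text-classification", model="yasamingol/NLP_Model_Classifier")
print(classifier("این یک جمله آزمایشی است."))  # "This is a test sentence." in Persian (ParsBERT base)
```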
WelfCrozzo/T5-L128-belarusian
WelfCrozzo
2023-06-24T15:59:54Z
140
1
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "translation", "be", "en", "ru", "dataset:WelfCrozzo/kupalinka", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2023-06-24T14:38:38Z
---
license: openrail
datasets:
- WelfCrozzo/kupalinka
language:
- be
- en
- ru
metrics:
- bleu
library_name: transformers
tags:
- translation
widget:
- text: "<extra_id_1>да зорак праз цяжкасці"
  example_title: "be -> ru"
- text: "<extra_id_2>да зорак праз цяжкасці"
  example_title: "be -> en"
- text: "<extra_id_3>к звездам через трудности"
  example_title: "ru -> be"
- text: "<extra_id_5>к звездам через трудности"
  example_title: "ru -> en"
- text: "<extra_id_6>to the stars through difficulties."
  example_title: "en -> be"
- text: "<extra_id_7>to the stars through difficulties."
  example_title: "en -> ru"
---

# T5 for the Belarusian language

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

This model is based on T5-small with a sequence length of 128 tokens. The model was trained from scratch on an RTX 3090 (24 GB).

# Supported tasks:
- translation BE to RU: `<extra_id_1>`
- translation BE to EN: `<extra_id_2>`
- translation RU to BE: `<extra_id_3>`
- translation RU to EN: `<extra_id_5>`
- translation EN to BE: `<extra_id_6>`
- translation EN to RU: `<extra_id_7>`

(A small helper that wraps these prefixes is sketched after this card.)

# Metrics:
- [eval/BLEU](https://api.wandb.ai/links/miklgr500/31mq4s36)
- [eval/loss](https://api.wandb.ai/links/miklgr500/rvi2p69n)
- [train/loss](https://api.wandb.ai/links/miklgr500/z9alu3n5)

# How to Get Started with the Model

<details>
<summary> Click to expand </summary>

```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("WelfCrozzo/T5-L128-belarusian")
model = T5ForConditionalGeneration.from_pretrained("WelfCrozzo/T5-L128-belarusian")

x = tokenizer.encode('<extra_id_1>да зорак праз цяжкасці', return_tensors='pt')
result = model.generate(x, return_dict_in_generate=True, output_scores=True, max_length=128)
print(tokenizer.decode(result["sequences"][0]))
```
</details>

# References

- [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
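Since the supported-task prefixes listed in the card act as a small routing API, here is a hedged convenience wrapper around them. The prefix-to-language mapping is copied verbatim from the "Supported tasks" section; the `translate()` helper itself is our own illustration, not part of the released model.

```python
# Hedged convenience wrapper around the task prefixes documented in the card.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

PREFIXES = {
    ("be", "ru"): "<extra_id_1>",
    ("be", "en"): "<extra_id_2>",
    ("ru", "be"): "<extra_id_3>",
    ("ru", "en"): "<extra_id_5>",
    ("en", "be"): "<extra_id_6>",
    ("en", "ru"): "<extra_id_7>",
}

tokenizer = T5TokenizerFast.from_pretrained("WelfCrozzo/T5-L128-belarusian")
model = T5ForConditionalGeneration.from_pretrained("WelfCrozzo/T5-L128-belarusian")

def translate(text: str, src: str, tgt: str, max_length: int = 128) -> str:
    """Prepend the task prefix for (src, tgt) and decode the model output."""
    ids = tokenizer.encode(PREFIXES[(src, tgt)] + text, return_tensors="pt")
    out = model.generate(ids, max_length=max_length)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(translate("да зорак праз цяжкасці", "be", "en"))
```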
LOGQS/poca-SoccerTwos
LOGQS
2023-06-24T15:47:38Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-06-24T15:45:25Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LOGQS/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
keyfan/falcon-qlora-chinese
keyfan
2023-06-24T15:33:08Z
0
3
null
[ "zh", "en", "dataset:QingyiSi/Alpaca-CoT", "license:apache-2.0", "region:us" ]
null
2023-06-09T02:20:18Z
---
license: apache-2.0
datasets:
- QingyiSi/Alpaca-CoT
language:
- zh
- en
---

This is a QLoRA adapter model for [Falcon-40b](https://huggingface.co/tiiuae/falcon-40b).

Though Falcon is not specifically trained on a Chinese corpus, it exhibits strong Chinese language understanding in our experiments. Out of curiosity, we wanted to explore whether a small amount of Chinese instruction data could push it further and make it better at speaking. The LoRA model was trained with the [QLoRA repo](https://github.com/artidoro/qlora) on a subset of bilingual instruction data from the [Alpaca-CoT dataset](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT).

The result is surprisingly good considering the small number of Chinese tokens the model has ever seen, though it shows clear limitations. Please check the examples below.

```
Question: 序列1,4,9,16的后一个数是什么?
Answer: 序列1,4,9,16的后一个数是25。因为序列中的每个数字都是前一个数字加上一个递增的差值,第一个差值是3,第二个差值是5,第三个差值是7,第四个差值是9,所以最后一个数字应该是第四个数字16加上第四个差值9,即16+9=25。因此,序列1,4,9,16的后一个数是25。

Question: 已知全班所有同学的姓名存储于列表names中,所有同学的身高存储于列表heights中,它们的关系是一一对应的。请写一段Python代码,将所有同学的姓名按身高由低到高排序。
Answer: names = ['小明', '小红', '小张', '小李']
heights = [170, 165, 175, 180]

# 使用zip函数将姓名和身高对应起来,然后根据身高排序
sorted_names = sorted(names, key=lambda x: heights[names.index(x)])

# 输出排序后的姓名列表
print(sorted_names)

Question: 中国的“东北三省”指的是哪里?
Answer: 中国的东北三省指的是吉林省、黑龙江省和辽宁省。这些省份位于中国东北地区,北临俄罗斯和乌克兰,东临日本海。它们是中国最大的省份之一,占中国陆地面积的七分之一。东北三省以其自然风光、丰富的民族文化和历史遗产而闻名。它们是中国旅游业的重要目的地之一。您还有其他问题吗?

Question: “但愿人长久,千里共婵娟”,其中婵娟指的是什么? A、月亮 B、姻缘
Answer: 这道题目考察的是对于古诗词的理解和文化常识。这句话出自唐代诗人杜甫的《月夜》一诗,其中婵娟是指月亮。这句话的意思是希望人和月亮能够长久存在,共同见证世间美好的事物。婵娟是古代诗词中常用的一个词语,指的是月亮的光辉、美丽和媚态。因此,答案是A,月亮。
```

## Evaluations

We evaluate on [C-Eval](https://cevalbenchmark.com/). More results will be added later.

Results on the C-Eval test set with 5-shot prompting and no CoT, which are better than those of LLaMA-series models.

| Average | Avg(Hard) | STEM | Social Science | Humanities | Others |
| - | - | - | - | - | - |
| 42.0 | 32.4 | 38.5 | 50.3 | 42.5 | 40.5 |
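The card names the base model (tiiuae/falcon-40b) but gives no loading code. Below is a hedged sketch of attaching the adapter with `peft` on top of a 4-bit base; the quantization settings, prompt, and generation parameters are our own assumptions, and the exact instruction template used during fine-tuning is not documented in the card. Falcon-40B needs substantial GPU memory even in 4-bit.

```python
# Hedged loading sketch (not taken from the card): base model in 4-bit via
# bitsandbytes, then the QLoRA adapter from this repository on top.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,   # Falcon shipped custom modelling code at the time
)
model = PeftModel.from_pretrained(base, "keyfan/falcon-qlora-chinese")

# Plain prompt; the instruction template used in training is not documented.
prompt = "序列1,4,9,16的后一个数是什么?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```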
uraskargi/q-Taxi-v3
uraskargi
2023-06-24T15:27:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-24T15:27:55Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.48 +/- 2.76
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="uraskargi/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Amool/albert-base-v2-finetuned-squad
Amool
2023-06-24T15:06:41Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-24T06:49:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: albert-base-v2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
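The card above lists the training setup but no inference example. Here is a minimal, hedged sketch using the `transformers` question-answering pipeline; the question/context pair is our own illustration, and `handle_impossible_answer=True` is suggested only because squad_v2 contains unanswerable questions.

```python
# Hedged usage sketch: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="Amool/albert-base-v2-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The albert-base-v2 checkpoint was fine-tuned on the SQuAD v2 dataset for 20 epochs.",
    handle_impossible_answer=True,  # squad_v2 includes unanswerable questions
)
print(result)
```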
daan1213/my_awesome_model
daan1213
2023-06-24T14:52:31Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-24T14:03:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - accuracy model-index: - name: my_awesome_model results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank config: sentences_allagree split: train args: sentences_allagree metrics: - name: Accuracy type: accuracy value: 0.9668874172185431 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.1210 - Accuracy: 0.9669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 114 | 0.1984 | 0.9360 | | No log | 2.0 | 228 | 0.1210 | 0.9669 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
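The card documents training and metrics but no usage snippet. Below is a hedged sketch with the text-classification pipeline; note that the checkpoint's id-to-label mapping is not documented in the card, so the output may show generic `LABEL_0/1/2` names rather than the negative/neutral/positive classes of financial_phrasebank.

```python
# Hedged usage sketch for the financial_phrasebank fine-tune.
from transformers import pipeline

classifier = pipeline("text-classification", model="daan1213/my_awesome_model")
print(classifier("Operating profit rose clearly compared with the previous quarter."))
```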
KrakeTito/medical_qa_llama
KrakeTito
2023-06-24T14:40:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-24T14:40:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
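The card above only lists the quantization config and PEFT version; it does not name the base model, and the repository name merely suggests a LLaMA-family checkpoint. The sketch below is therefore heavily hedged: the base model identifier is a placeholder that must be replaced, and 8-bit loading mirrors the `load_in_8bit: True` setting reported in the card (requires `bitsandbytes`).

```python
# Hedged loading sketch. BASE_MODEL is a placeholder: the card does not say
# which base checkpoint this adapter was trained on, so replace it yourself.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL = "your-base-llama-checkpoint"  # placeholder, not from the card

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "KrakeTito/medical_qa_llama")
model.eval()
```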
Davit6174/georgian-distilbert-mlm
Davit6174
2023-06-24T14:39:56Z
96
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "general", "ka", "dataset:mc4", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-23T21:19:11Z
---
datasets:
- mc4
language:
- ka
library_name: transformers
tags:
- general
widget:
- text: "ქართული [MASK] სწავლა საკმაოდ რთულია"
  example_title: "Georgian Language"
- text: "საქართველოს [MASK] ნაკრები ერთა ლიგაზე კარგად ასპარეზობს"
  example_title: "Football"
- text: "ქართული ღვინო განთქმულია [MASK] მსოფლიოში"
  example_title: "Wine"
---

# General Georgian Language Model

This is a pretrained language model designed to understand and generate text in the Georgian language. It is based on the DistilBERT-base-uncased architecture and has been trained on the MC4 dataset, which contains a large collection of Georgian web documents.

## Model Details

- **Architecture**: DistilBERT-base-uncased
- **Pretraining Corpus**: MC4 (Multilingual Crawl Corpus)
- **Language**: Georgian

## Pretraining

The model has undergone a pretraining phase using the DistilBERT architecture, which is a distilled version of the original BERT model. DistilBERT is known for its smaller size and faster inference speed while still maintaining a high level of performance. During pretraining, the model was exposed to a vast amount of preprocessed Georgian text data from the MC4 dataset.

## Usage

You can use the General Georgian Language Model for various natural language processing (NLP) tasks, such as:

- Text classification
- Named entity recognition
- Sentiment analysis
- Language generation

You can fine-tune this model on specific downstream tasks using task-specific datasets or use it as a feature extractor for transfer learning.

## Example Code

Here is an example of how to use the General Georgian Language Model with the Hugging Face `transformers` library in Python:

```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

# Load the tokenizer and model (the fill-mask pipeline needs the masked-LM head,
# hence TFAutoModelForMaskedLM rather than the bare TFAutoModel)
tokenizer = AutoTokenizer.from_pretrained("Davit6174/georgian-distilbert-mlm")
model = TFAutoModelForMaskedLM.from_pretrained("Davit6174/georgian-distilbert-mlm")

# Build pipeline
mask_filler = pipeline("fill-mask", model=model, tokenizer=tokenizer)

text = 'ქართული [MASK] სწავლა საკმაოდ რთულია'

# Generate model output
preds = mask_filler(text)

# Print top 5 predictions
for pred in preds:
    print(f">>> {pred['sequence']}")
```

## Limitations and Considerations

- The model's performance may vary across different downstream tasks and domains.
- The model's understanding of context and nuanced meanings may not always be accurate.
- The model may generate plausible-sounding but incorrect or nonsensical Georgian text.
- Therefore, it is recommended to evaluate the model's performance and fine-tune it on task-specific datasets when necessary.

## Acknowledgments

The Georgian Language Model was pretrained using the Hugging Face transformers library and trained on the MC4 dataset, which is maintained by the community. I would like to express my gratitude to the contributors and maintainers of these valuable resources.