| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-23 18:27:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 492 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-23 18:25:26 |
| card | string | lengths 11 to 1.01M |
yifanxie/literate-toucanet1-1-1
yifanxie
2024-05-20T06:06:49Z
133
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-20T06:05:00Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [google/gemma-2b](https://huggingface.co/google/gemma-2b) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed. ```bash pip install transformers==4.40.1 ``` Also make sure to provide your Hugging Face token to the pipeline if the model is in a private repo. - Either leave `token=True` in the `pipeline` and log in to huggingface_hub by running ```python import huggingface_hub huggingface_hub.login(<ACCESS_TOKEN>) ``` - Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline` ```python from transformers import pipeline generate_text = pipeline( model="yifanxie/literate-toucanet1-1-1", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, token=True, ) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 2 # generate_text.model.generation_config.max_new_tokens = 256 # generate_text.model.generation_config.do_sample = False # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.0) # generate_text.model.generation_config.repetition_penalty = float(1.0) res = generate_text( "Why is drinking water so healthy?", renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<eos><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "yifanxie/literate-toucanet1-1-1", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "yifanxie/literate-toucanet1-1-1", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 2 # generate_text.model.generation_config.max_new_tokens = 256 # generate_text.model.generation_config.do_sample = False # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.0) # generate_text.model.generation_config.repetition_penalty = float(1.0) res = generate_text( "Why is drinking water so healthy?", renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "yifanxie/literate-toucanet1-1-1" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<eos><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs # model.generation_config.min_new_tokens = 2 # model.generation_config.max_new_tokens = 256 # model.generation_config.do_sample = False # model.generation_config.num_beams = 1 # model.generation_config.temperature = float(0.0) # model.generation_config.repetition_penalty = float(1.0) tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. 
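As a rough sketch of the quantization and sharding options mentioned above, the snippet below loads the model in 4-bit with automatic multi-GPU placement. It assumes `bitsandbytes` and `accelerate` are installed; the quantization settings shown are illustrative, not taken from this repository's configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "yifanxie/literate-toucanet1-1-1"

# Illustrative 4-bit quantization config (requires the bitsandbytes package)
quant_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # shard layers across all visible GPUs
    trust_remote_code=True,
)
```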
## Model Architecture ``` GemmaForCausalLM( (model): GemmaModel( (embed_tokens): Embedding(256000, 2048, padding_idx=0) (layers): ModuleList( (0-17): 18 x GemmaDecoderLayer( (self_attn): GemmaSdpaAttention( (q_proj): Linear(in_features=2048, out_features=2048, bias=False) (k_proj): Linear(in_features=2048, out_features=256, bias=False) (v_proj): Linear(in_features=2048, out_features=256, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): GemmaRotaryEmbedding() ) (mlp): GemmaMLP( (gate_proj): Linear(in_features=2048, out_features=16384, bias=False) (up_proj): Linear(in_features=2048, out_features=16384, bias=False) (down_proj): Linear(in_features=16384, out_features=2048, bias=False) (act_fn): PytorchGELUTanh() ) (input_layernorm): GemmaRMSNorm() (post_attention_layernorm): GemmaRMSNorm() ) ) (norm): GemmaRMSNorm() ) (lm_head): Linear(in_features=2048, out_features=256000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
miraiminds/arithmathBio-function
miraiminds
2024-05-20T06:05:09Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "generated_from_trainer", "base_model:miraiminds/mergeMathBio-7B", "base_model:adapter:miraiminds/mergeMathBio-7B", "region:us" ]
null
2024-05-20T06:04:46Z
--- library_name: peft tags: - generated_from_trainer base_model: miraiminds/mergeMathBio-7B model-index: - name: outputs/lora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: miraiminds/mergeMathBio-7B model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: tanyakansal/functioncaliing type: completion field: chat shards: 10 output_dir: ./outputs/lora-out adapter: lora lora_model_dir: sequence_len: 8192 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 1 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> # outputs/lora-out This model is a fine-tuned version of [miraiminds/mergeMathBio-7B](https://huggingface.co/miraiminds/mergeMathBio-7B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
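Since this repository holds a PEFT LoRA adapter rather than full model weights, a minimal loading sketch might look like the following; the assumption that the adapter can be attached directly to the base model named in the card is ours, not documented usage.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "miraiminds/mergeMathBio-7B"            # base model named in the card
adapter_id = "miraiminds/arithmathBio-function"   # this repository (assumed to be the adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```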
RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf
RichardErkhov
2024-05-20T06:03:07Z
56
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-05-20T01:28:25Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) CodeLlama-13B-Instruct-fp16 - GGUF - Model creator: https://huggingface.co/TheBloke/ - Original model: https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-fp16/ | Name | Quant method | Size | | ---- | ---- | ---- | | [CodeLlama-13B-Instruct-fp16.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q2_K.gguf) | Q2_K | 4.52GB | | [CodeLlama-13B-Instruct-fp16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_XS.gguf) | IQ3_XS | 4.99GB | | [CodeLlama-13B-Instruct-fp16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_S.gguf) | IQ3_S | 5.27GB | | [CodeLlama-13B-Instruct-fp16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [CodeLlama-13B-Instruct-fp16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_M.gguf) | IQ3_M | 5.57GB | | [CodeLlama-13B-Instruct-fp16.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K.gguf) | Q3_K | 5.9GB | | [CodeLlama-13B-Instruct-fp16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [CodeLlama-13B-Instruct-fp16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [CodeLlama-13B-Instruct-fp16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [CodeLlama-13B-Instruct-fp16.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_0.gguf) | Q4_0 | 6.86GB | | [CodeLlama-13B-Instruct-fp16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [CodeLlama-13B-Instruct-fp16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [CodeLlama-13B-Instruct-fp16.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K.gguf) | Q4_K | 7.33GB | | [CodeLlama-13B-Instruct-fp16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [CodeLlama-13B-Instruct-fp16.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_1.gguf) | Q4_1 | 7.61GB | | [CodeLlama-13B-Instruct-fp16.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_0.gguf) | Q5_0 | 8.36GB | | 
[CodeLlama-13B-Instruct-fp16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | [CodeLlama-13B-Instruct-fp16.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K.gguf) | Q5_K | 8.6GB | | [CodeLlama-13B-Instruct-fp16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | [CodeLlama-13B-Instruct-fp16.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_1.gguf) | Q5_1 | 9.1GB | | [CodeLlama-13B-Instruct-fp16.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q6_K.gguf) | Q6_K | 9.95GB | | [CodeLlama-13B-Instruct-fp16.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q8_0.gguf) | Q8_0 | 12.88GB | Original model description: --- license: llama2 tags: - llama-2 - codellama --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 13B-Instruct fp16 - Model creator: [Meta](https://ai.meta.com/llama/) ## Description This is Transformers/HF format fp16 weights for CodeLlama 13B-Instruct. It is the result of downloading CodeLlama 13B-Instruct from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting to HF using `convert_llama_weights_to_hf.py`. Quantisations will be coming shortly. Please note that due to a change in the RoPE Theta value, for correct results you must load these FP16 models with `trust_remote_code=True` Credit to @emozilla for creating the necessary modelling code to achieve this! ## Prompt template: TBC <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card # Code Llama ## **Model Details** **Model Developers** Meta AI **Variations** Code Llama comes in three model sizes, and three variants: 1) Code Llama: our base models designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **Input** Models input text only. **Output** Models output text only. **Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)". 
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)). ## **Intended Use** **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## **Hardware and Software** **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. **Training data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data. **Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## **Ethical Considerations and Limitations** Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
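The GGUF files listed in the quantization table at the top of this card can be fetched individually from the Hub; a minimal sketch, assuming the `huggingface_hub` package, with the Q4_K_M file chosen purely as an example:

```python
from huggingface_hub import hf_hub_download

# Download a single quant from the table above (Q4_K_M, ~7.33GB)
local_path = hf_hub_download(
    repo_id="RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf",
    filename="CodeLlama-13B-Instruct-fp16.Q4_K_M.gguf",
)
print(local_path)
```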
comidan/merlinite-7b-instructlab-bts
comidan
2024-05-20T06:00:08Z
80
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-20T05:49:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID This model, based on merlinite-7b, is able to answer interesting company-focused questions with up-to-date information thanks to taxonomy-tree-based InstructLab fine-tuning. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Daniele Comi - **Model type:** merlinite-7b - **Language(s) (NLP):** English - **License:** Apache-2.0 - **Finetuned from model:** merlinite-7b using InstructLab
ukung/TinyLlama-1.1B-indo-v1-GGUF
ukung
2024-05-20T05:52:40Z
36
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T05:04:52Z
--- license: apache-2.0 ---
Ashmal/MBZUAI-oryx
Ashmal
2024-05-20T05:52:18Z
2,902
1
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T13:22:54Z
--- license: apache-2.0 --- This is the Arabic test model built at MBZUAI. More details of the project will be announced later along with the release. This model card is just to test the capabilities of this model on Arabic benchmarks.
automated-finetunning/phi2_fata_20p_5e
automated-finetunning
2024-05-20T05:51:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-17T13:02:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sneha-Mahata/Blood-Cell-Detection-DETR
Sneha-Mahata
2024-05-20T05:50:50Z
190
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-05-20T05:50:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ruiz3/phi-2-kingshipAI-product-tag
Ruiz3
2024-05-20T05:48:39Z
132
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:26:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
salahyahya/grammer_checker_model_1
salahyahya
2024-05-20T05:46:18Z
129
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-20T05:44:30Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - bleu model-index: - name: grammer_checker_model_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # grammer_checker_model_1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 - Bleu: 0.006 - Gen Len: 13.3816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:| | No log | 0.31 | 250 | 0.0004 | 0.006 | 13.3815 | | 0.0009 | 0.63 | 500 | 0.0003 | 0.006 | 13.3809 | | 0.0009 | 0.94 | 750 | 0.0002 | 0.006 | 13.3816 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
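The card above does not include inference code; a minimal sketch, assuming the checkpoint behaves like a standard text2text-generation model and needs no special task prefix (the example sentence is ours):

```python
from transformers import pipeline

# Assumed usage: treat the checkpoint as a plain text2text-generation model
corrector = pipeline(
    "text2text-generation",
    model="salahyahya/grammer_checker_model_1",
)

result = corrector("she go to school every days", max_new_tokens=32)
print(result[0]["generated_text"])
```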
doggywastaken/bmri-prep_cnn_seg
doggywastaken
2024-05-20T05:39:36Z
0
0
null
[ "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "mask-generation", "dataset:doggywastaken/manual_breast_segs", "license:afl-3.0", "region:us" ]
mask-generation
2024-05-19T19:53:14Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin license: afl-3.0 datasets: - doggywastaken/manual_breast_segs pipeline_tag: mask-generation --- This model has been pushed to the Hub using ****: - Repo: [More Information Needed] - Docs: [More Information Needed]
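The card lists only the `pytorch_model_hub_mixin` tags, so the actual model class is unknown; the sketch below just illustrates the PyTorchModelHubMixin save/load round trip with a placeholder architecture, not the architecture used by this repository.

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Placeholder architecture: the real model class behind this repository is not documented
class TinySegNet(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, channels: int = 8):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))

model = TinySegNet(channels=8)
model.save_pretrained("tiny-segnet-demo")                   # writes config + weights locally
reloaded = TinySegNet.from_pretrained("tiny-segnet-demo")   # the same call accepts a Hub repo id
```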
luchenyu/llama3-8b
luchenyu
2024-05-20T05:39:30Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-17T02:48:40Z
--- license: apache-2.0 ---
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2
Zoyd
2024-05-20T05:36:50Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-19T14:11:38Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
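As a hedged illustration of how the multi-turn conversations scored above are typically assembled for a Llama-3-family model, the sketch below renders a two-turn exchange with the `transformers` chat template of the upstream instruct model named in the card (this EXL2 repository itself needs an ExLlamaV2-based loader, and access to the gated Meta repository is assumed).

```python
from transformers import AutoTokenizer

# Chat template of the upstream instruct model referenced in the card (gated; access assumed)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

messages = [
    {"role": "user", "content": "Summarize the rules of chess in two sentences."},
    {"role": "assistant", "content": "Two players alternate moves, each trying to checkmate the other's king."},
    {"role": "user", "content": "Now explain castling."},  # second turn of the conversation
]

# Render the conversation into the Llama-3 chat format, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```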
sidddddddddddd/lora_model_10_examples51
sidddddddddddd
2024-05-20T05:36:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-20T05:36:00Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** sidddddddddddd - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2
Zoyd
2024-05-20T05:34:12Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-19T12:04:22Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
Cran-May/firefly-qwen1.5-en-14b-alpha-Q4_K_M-GGUF
Cran-May
2024-05-20T05:33:13Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T05:32:47Z
--- license: apache-2.0 library_name: transformers tags: - llama-cpp - gguf-my-repo basemodel: Qwen/Qwen1.5-14B --- # Cran-May/firefly-qwen1.5-en-14b-alpha-Q4_K_M-GGUF This model was converted to GGUF format from [`YeungNLP/firefly-qwen1.5-en-14b-alpha`](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-14b-alpha) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-14b-alpha) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Cran-May/firefly-qwen1.5-en-14b-alpha-Q4_K_M-GGUF --model firefly-qwen1.5-en-14b-alpha.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Cran-May/firefly-qwen1.5-en-14b-alpha-Q4_K_M-GGUF --model firefly-qwen1.5-en-14b-alpha.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m firefly-qwen1.5-en-14b-alpha.Q4_K_M.gguf -n 128 ```
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2
Zoyd
2024-05-20T05:32:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-19T11:26:18Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
DownwardSpiral33/gpt2-imdb-pos-roberta16-256_0_2-2024.05.20.04.41
DownwardSpiral33
2024-05-20T05:31:37Z
134
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:31:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DownwardSpiral33/gpt2-imdb-pos-roberta16-256_0_5-2024.05.20.04.21
DownwardSpiral33
2024-05-20T05:29:30Z
134
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:29:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2
Zoyd
2024-05-20T05:29:17Z
7
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-19T11:14:38Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2
Zoyd
2024-05-20T05:29:01Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-20T05:05:52Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
Johnie2Turbo/llama-13b_adv_text
Johnie2Turbo
2024-05-20T05:27:35Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:20:06Z
--- language: - ru --- Can write advertising copy and advertisements for cars and laptops. Correct instruction format: [INST] {prompt} [/INST] Trained on instructions of the form "Напиши рекламный текст для ..." ("Write an advertising text for ...") and "Напиши рекламное объявление для ..." ("Write an advertisement for ...").
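A minimal generation sketch with 🤗 Transformers, following the `[INST] ... [/INST]` wrapper described above; the generation settings below are illustrative assumptions, not values from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Johnie2Turbo/llama-13b_adv_text"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Instruction wrapper from the card: [INST] {prompt} [/INST]
prompt = "[INST] Напиши рекламный текст для ноутбука [/INST]"  # "Write an advertising text for a laptop"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```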
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2
Zoyd
2024-05-20T05:27:09Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-19T11:12:13Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
baldwin6/Bolaco
baldwin6
2024-05-20T05:24:52Z
0
2
null
[ "region:us" ]
null
2024-05-16T08:30:48Z
Checkpoints for Bolaco. The code is available at https://github.com/Dereck0602/Bolaco.
danieljhand/distilbert-base-uncased-finetuned-wine
danieljhand
2024-05-20T05:24:43Z
123
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T02:07:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-wine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-wine This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9082 - Accuracy: 0.7314 - F1: 0.7222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.6559 | 1.0 | 1101 | 1.0917 | 0.6792 | 0.6623 | | 1.0185 | 2.0 | 2202 | 0.9466 | 0.7214 | 0.7103 | | 0.8851 | 3.0 | 3303 | 0.9082 | 0.7314 | 0.7222 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
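The card above documents the training run but not inference. A minimal usage sketch with the text-classification pipeline follows; the example review text is made up, and since the card does not list the label names, the returned labels are whatever ids the fine-tune used.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="danieljhand/distilbert-base-uncased-finetuned-wine",
)

# Example wine review (illustrative text, not from the training data)
print(classifier("Aromas of ripe cherry and vanilla lead into a smooth, oak-driven finish."))
```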
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2
Zoyd
2024-05-20T05:20:36Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:aqua_rat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "arxiv:2402.13228", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-05-20T05:02:53Z
--- library_name: transformers license: llama2 datasets: - aqua_rat - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction --- **Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> | # Smaug-Llama-3-70B-Instruct ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/ZxYuHKmU_AtuEJbGtuEBC.png) This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below). EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below. We are conducting additional benchmark evaluations and will add those when available. ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). 
## Evaluation ### Arena-Hard Score vs selected others (sourced from: (https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)) | Model | Score | 95% Confidence Interval | Average Tokens | | :---- | ---------: | ----------: | ------: | | GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 | | Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 | | **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 | | GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 | | Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 | | Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 | | GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 | | Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 | | Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 | | Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 | | Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 | | Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 | | GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 | ### MT-Bench ``` ########## First turn ########## score model turn Smaug-Llama-3-70B-Instruct 1 9.40000 GPT-4-Turbo 1 9.37500 Meta-Llama-3-70B-Instruct 1 9.21250 ########## Second turn ########## score model turn Smaug-Llama-3-70B-Instruct 2 9.0125 GPT-4-Turbo 2 9.0000 Meta-Llama-3-70B-Instruct 2 8.8000 ########## Average ########## score model Smaug-Llama-3-70B-Instruct 9.206250 GPT-4-Turbo 9.187500 Meta-Llama-3-70B-Instruct 9.006250 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 | | GPT-4-Turbo | 9.38 | 9.00 | 9.19 | | Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 | This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.
leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF
leafspark
2024-05-20T05:19:40Z
6
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T05:18:51Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF This model was converted to GGUF format from [`01-ai/Yi-1.5-34B-Chat-16K`](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF --model yi-1.5-34b-chat-16k.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF --model yi-1.5-34b-chat-16k.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-34b-chat-16k.Q4_K_M.gguf -n 128 ```
vvduc03/lora-llava-3b
vvduc03
2024-05-20T05:04:26Z
8
0
transformers
[ "transformers", "safetensors", "llava_mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T02:39:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ridham1317/whisper-small-ft-common-voice
ridham1317
2024-05-20T05:03:37Z
148
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-20T05:02:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
omezzinemariem/mistral-text-to-RULE2
omezzinemariem
2024-05-20T05:01:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-05-20T05:01:01Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
YYYYYYibo/nash_simple_online_iter_1
YYYYYYibo
2024-05-20T05:00:26Z
0
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:adapter:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-05-20T03:13:38Z
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: alignment-handbook/zephyr-7b-sft-full
datasets:
- updated
- original
model-index:
- name: nash_simple_online_iter_1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nash_simple_online_iter_1

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6779
- Rewards/chosen: 0.0306
- Rewards/rejected: -0.0021
- Rewards/accuracies: 0.6480
- Rewards/margins: 0.0327
- Logps/rejected: -257.7254
- Logps/chosen: -280.9805
- Logits/rejected: -2.6488
- Logits/chosen: -2.7230

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6844 | 0.64 | 100 | 0.6779 | 0.0306 | -0.0021 | 0.6480 | 0.0327 | -257.7254 | -280.9805 | -2.6488 | -2.7230 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
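This repository contains a LoRA adapter rather than full model weights, so it has to be attached to the listed base model at load time. A minimal loading sketch with `transformers` and `peft`, assuming the base tokenizer ships a chat template; the prompt and generation settings are illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"   # base model listed in the card
adapter_id = "YYYYYYibo/nash_simple_online_iter_1"  # this repository (LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the DPO-trained adapter on top of the SFT base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

messages = [{"role": "user", "content": "Explain direct preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```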
mp1704/gpt-neo-sft-v2.1
mp1704
2024-05-20T04:59:21Z
107
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T04:58:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SyntaxTheRed/roberta-base-bne-finetuned-multi-sentiment
SyntaxTheRed
2024-05-20T04:55:22Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:BSC-LT/roberta-base-bne", "base_model:finetune:BSC-LT/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T04:25:49Z
---
license: apache-2.0
base_model: BSC-TeMU/roberta-base-bne
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-multi-sentiment
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-bne-finetuned-multi-sentiment

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7463
- Accuracy: 0.6543

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8239 | 1.0 | 115 | 0.7485 | 0.6667 |
| 0.6041 | 2.0 | 230 | 0.7463 | 0.6543 |

### Framework versions

- Transformers 4.40.2
- Pytorch 1.13.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
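The card does not document the fine-tuning dataset or its label names, so the sketch below only shows the generic `text-classification` flow; the Spanish example sentence reflects the Spanish `roberta-base-bne` base model, and the returned labels depend on the unknown training data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SyntaxTheRed/roberta-base-bne-finetuned-multi-sentiment",
)

# Label names come from the checkpoint's config and are not described in the card.
print(classifier("La película fue mucho mejor de lo que esperaba."))
```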
ukung/Nusantara-2.7b-Indo-Chat-GGUF
ukung
2024-05-20T04:51:02Z
11
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T04:04:01Z
--- license: apache-2.0 ---
Ruiz3/phi-2-kingshipAI-product-explainer
Ruiz3
2024-05-20T04:47:09Z
132
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T04:26:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OwOpeepeepoopoo/NoSoup4U2
OwOpeepeepoopoo
2024-05-20T04:36:33Z
14
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T00:25:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ukung/Nusantara-4b-Indo-Chat-GGUF
ukung
2024-05-20T04:26:42Z
97
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T02:43:48Z
--- license: apache-2.0 ---
HyunCello/EEVE-Korean-Instruct-10.8B-v1.0-test-0.1
HyunCello
2024-05-20T04:23:36Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T02:51:33Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DownwardSpiral33/gpt2-imdb-pos-roberta16-256_0_5-2024.05.20.03.08
DownwardSpiral33
2024-05-20T04:16:45Z
134
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T04:16:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akiseid/AmharicNewsNonCleanedNonWeighted
akiseid
2024-05-20T04:13:43Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T03:12:02Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: AmharicNewsNonCleanedNonWeighted
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# AmharicNewsNonCleanedNonWeighted

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1726
- Accuracy: 0.9564
- Precision: 0.9563
- Recall: 0.9564
- F1: 0.9564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2237 | 1.0 | 945 | 0.2308 | 0.9054 | 0.9145 | 0.9054 | 0.9031 |
| 0.3067 | 2.0 | 1890 | 0.1760 | 0.9384 | 0.9388 | 0.9384 | 0.9379 |
| 0.143 | 3.0 | 2835 | 0.1510 | 0.9480 | 0.9486 | 0.9480 | 0.9482 |
| 0.1306 | 4.0 | 3780 | 0.1550 | 0.9544 | 0.9547 | 0.9544 | 0.9544 |
| 0.0825 | 5.0 | 4725 | 0.1726 | 0.9564 | 0.9563 | 0.9564 | 0.9564 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
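A minimal classification sketch with `transformers`, assuming the checkpoint exposes its class names through `config.id2label`; the placeholder string stands in for an Amharic news snippet:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "akiseid/AmharicNewsNonCleanedNonWeighted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "<Amharic news headline or article text>"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label.get(pred, pred), float(probs[pred]))
```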
Raneechu/mininglarge
Raneechu
2024-05-20T04:11:28Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-20T04:11:24Z
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: mininglarge
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mininglarge

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1

### Training results

### Framework versions

- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1

## Training procedure

### Framework versions

- PEFT 0.6.2
pksvi/logo_LORA
pksvi
2024-05-20T04:11:19Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:Lykon/dreamshaper-xl-lightning", "base_model:adapter:Lykon/dreamshaper-xl-lightning", "license:openrail++", "region:us" ]
text-to-image
2024-05-20T04:11:15Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: Lykon/dreamshaper-xl-lightning
instance_prompt: TOK 'LA' logo on garment
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - pksvi/logo_LORA

<Gallery />

## Model description

These are pksvi/logo_LORA LoRA adaption weights for Lykon/dreamshaper-xl-lightning.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use TOK 'LA' logo on garment to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](pksvi/logo_LORA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
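The "How to use" section above is left as a TODO. A minimal `diffusers` sketch, assuming the LoRA weights load onto the listed lightning base together with the fp16-fix VAE used for training; the prompt wording beyond the trigger phrase and the lightning-style step/guidance settings are illustrative and may need tuning:

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

# VAE listed in the card as the one used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-xl-lightning",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("pksvi/logo_LORA")

# The trigger phrase from the card must appear in the prompt; the rest is illustrative.
prompt = "TOK 'LA' logo on garment, plain white t-shirt, studio lighting"
image = pipe(prompt, num_inference_steps=6, guidance_scale=2.0).images[0]
image.save("logo_sample.png")
```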
dendimaki/mistral-lora-token-classification
dendimaki
2024-05-20T04:09:44Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-20T04:09:42Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-lora-token-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral-lora-token-classification

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| No log | 1.0 | 431 | nan | 0.0020 | 0.0444 | 0.0038 | 0.0444 |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
abc88767/22c102
abc88767
2024-05-20T04:09:35Z
98
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T04:08:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
duyntnet/WizardLM-13B-Uncensored-imatrix-GGUF
duyntnet
2024-05-20T04:07:06Z
326
5
transformers
[ "transformers", "gguf", "imatrix", "WizardLM-13B-Uncensored", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-20T00:08:43Z
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- WizardLM-13B-Uncensored
---

Quantizations of https://huggingface.co/cognitivecomputations/WizardLM-13B-Uncensored

# From original readme

This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
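A minimal sketch for running one of the quantized files locally with `llama-cpp-python`; the filename below is hypothetical (use whichever `.gguf` quantization you download from this repository), and since the card does not state a prompt template the call treats the model as a plain completer:

```python
from llama_cpp import Llama

# Hypothetical filename; substitute the .gguf file you actually downloaded.
llm = Llama(model_path="WizardLM-13B-Uncensored.Q4_K_M.gguf", n_ctx=2048)

out = llm("Explain in two sentences what GGUF quantization does.", max_tokens=128)
print(out["choices"][0]["text"])
```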
abc88767/9sc102
abc88767
2024-05-20T04:04:52Z
97
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T04:03:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abc88767/8sc102
abc88767
2024-05-20T04:01:24Z
93
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T03:59:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tsavage68/MedQA_L3_600steps_1e7rate_01beta_CSFTDPO
tsavage68
2024-05-20T03:58:43Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T02:46:28Z
--- license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_600steps_1e7rate_01beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_600steps_1e7rate_01beta_CSFTDPO This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6692 - Rewards/chosen: 0.0482 - Rewards/rejected: -0.0053 - Rewards/accuracies: 0.6681 - Rewards/margins: 0.0535 - Logps/rejected: -21.3695 - Logps/chosen: -17.7404 - Logits/rejected: -0.9398 - Logits/chosen: -0.9393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6951 | 0.0489 | 50 | 0.6935 | 0.0003 | 0.0009 | 0.4901 | -0.0006 | -21.3079 | -18.2196 | -0.9258 | -0.9253 | | 0.6892 | 0.0977 | 100 | 0.6881 | 0.0374 | 0.0268 | 0.6044 | 0.0106 | -21.0482 | -17.8488 | -0.9281 | -0.9276 | | 0.6801 | 0.1466 | 150 | 0.6794 | 0.0588 | 0.0292 | 0.6418 | 0.0296 | -21.0241 | -17.6343 | -0.9314 | -0.9309 | | 0.6807 | 0.1954 | 200 | 0.6767 | 0.0584 | 0.0227 | 0.6549 | 0.0358 | -21.0897 | -17.6383 | -0.9345 | -0.9339 | | 0.6829 | 0.2443 | 250 | 0.6726 | 0.0560 | 0.0106 | 0.6571 | 0.0454 | -21.2109 | -17.6631 | -0.9367 | -0.9362 | | 0.6656 | 0.2931 | 300 | 0.6715 | 0.0540 | 0.0059 | 0.6505 | 0.0481 | -21.2575 | -17.6830 | -0.9382 | -0.9376 | | 0.6955 | 0.3420 | 350 | 0.6697 | 0.0524 | 0.0002 | 0.6571 | 0.0522 | -21.3145 | -17.6986 | -0.9384 | -0.9378 | | 0.6605 | 0.3908 | 400 | 0.6697 | 0.0493 | -0.0031 | 0.6505 | 0.0524 | -21.3476 | -17.7294 | -0.9393 | -0.9388 | | 0.6718 | 0.4397 | 450 | 0.6689 | 0.0495 | -0.0047 | 0.6527 | 0.0541 | -21.3631 | -17.7279 | -0.9396 | -0.9390 | | 0.6734 | 0.4885 | 500 | 0.6687 | 0.0486 | -0.0059 | 0.6505 | 0.0545 | -21.3751 | -17.7362 | -0.9397 | -0.9392 | | 0.6525 | 0.5374 | 550 | 0.6691 | 0.0482 | -0.0056 | 0.6615 | 0.0537 | -21.3720 | -17.7410 | -0.9398 | -0.9393 | | 0.6637 | 0.5862 | 600 | 0.6692 | 0.0482 | -0.0053 | 0.6681 | 0.0535 | -21.3695 | -17.7404 | -0.9398 | -0.9393 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.0.0+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
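The intended-uses section above is still marked "More information needed"; as a stopgap, a minimal inference sketch with 🤗 `transformers` might look like the following. It assumes the checkpoint keeps the Llama-3-Instruct chat template of its base model, and the example question and generation settings are illustrative only, not values taken from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/MedQA_L3_600steps_1e7rate_01beta_CSFTDPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Llama-3-Instruct derivatives expect the chat template rather than a raw prompt.
messages = [{"role": "user", "content": "What is the first-line treatment for community-acquired pneumonia?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```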
afiqlol/Malay-Sentiment
afiqlol
2024-05-20T03:52:40Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned", "base_model:finetune:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T02:11:43Z
--- base_model: citizenlab/twitter-xlm-roberta-base-sentiment-finetunned tags: - generated_from_trainer metrics: - accuracy model-index: - name: Malay-Sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Malay-Sentiment This model is a fine-tuned version of [citizenlab/twitter-xlm-roberta-base-sentiment-finetunned](https://huggingface.co/citizenlab/twitter-xlm-roberta-base-sentiment-finetunned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5757 - Accuracy: 0.7578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7066 | 1.0 | 723 | 0.5757 | 0.7578 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cpu - Datasets 2.14.5 - Tokenizers 0.15.0
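No usage snippet is provided above; a minimal sketch with the 🤗 `transformers` pipeline might look like this. The Malay example sentence is illustrative, and the label set is inherited from the base sentiment model rather than documented in this card.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="afiqlol/Malay-Sentiment")

# Illustrative Malay sentence; replace with your own text.
print(classifier("Filem ini sangat bagus dan menghiburkan."))
```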
PQlet/textual-inversion-v2-ablation-vec3-img1
PQlet
2024-05-20T03:48:22Z
3
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-20T01:50:18Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual Inversion training - PQlet/textual-inversion-v2-ablation-vec3-img1 The generated images are below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
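The "How to use" snippet above is still a TODO. A minimal sketch, assuming the repository contains learned textual-inversion embeddings saved by the standard `diffusers` training script, might look like the following; the `<concept>` placeholder token is an assumption and should be replaced with the token actually used during training.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned concept embedding(s) from this repository.
pipe.load_textual_inversion("PQlet/textual-inversion-v2-ablation-vec3-img1")

# "<concept>" is a placeholder; use the placeholder token chosen at training time.
image = pipe("A photo of <concept> on a beach", num_inference_steps=30).images[0]
image.save("example.png")
```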
kazuhasasd/eye_disease
kazuhasasd
2024-05-20T03:45:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-20T03:45:25Z
--- license: apache-2.0 ---
Andyhahaha/Junlong-Huanzong-twitter-financial-news-sentiment-analysis
Andyhahaha
2024-05-20T03:44:46Z
110
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T03:44:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GrigoriiA/parler-tts-from-mini-Libretta-v0.2
GrigoriiA
2024-05-20T03:42:43Z
66
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-20T03:41:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/lodrick-the-lafted_-_Grafted-Hermetic-Platypus-C-2x7B-4bits
RichardErkhov
2024-05-20T03:39:17Z
78
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-20T03:34:05Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Grafted-Hermetic-Platypus-C-2x7B - bnb 4bits - Model creator: https://huggingface.co/lodrick-the-lafted/ - Original model: https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/ Original model description: --- license: apache-2.0 datasets: - lodrick-the-lafted/Hermes-217K - garage-bAInd/Open-Platypus - jondurbin/airoboros-3.2 model-index: - name: Grafted-Hermetic-Platypus-C-2x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.96 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.87 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/resolve/main/ghp.png> # Grafted-Hermetic-Platypus-C-2x7B MoE merge of - [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B) - [Hermes-Instruct-7B-217K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K) <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` or ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer default is Alpaca this time around. 
<br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for a peach pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-C-2x7B) | Metric |Value| |---------------------------------|----:| |Avg. |64.39| |AI2 Reasoning Challenge (25-Shot)|58.96| |HellaSwag (10-Shot) |82.77| |MMLU (5-Shot) |62.08| |TruthfulQA (0-shot) |60.87| |Winogrande (5-shot) |77.74| |GSM8k (5-shot) |43.90|
AleRothermel/mi-1.2-model
AleRothermel
2024-05-20T03:24:20Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T02:16:21Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: bert-base-cased metrics: - accuracy model-index: - name: mi-1.2-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-1.2-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7264 - Accuracy: 0.58 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6501 | 0.04 | 10 | 1.6095 | 0.235 | | 1.655 | 0.08 | 20 | 1.5876 | 0.23 | | 1.6465 | 0.12 | 30 | 1.5874 | 0.305 | | 1.6577 | 0.16 | 40 | 1.6006 | 0.2325 | | 1.5666 | 0.2 | 50 | 1.5611 | 0.245 | | 1.5667 | 0.24 | 60 | 1.4245 | 0.44 | | 1.4837 | 0.28 | 70 | 1.2916 | 0.4175 | | 1.2603 | 0.32 | 80 | 1.3869 | 0.3925 | | 1.2865 | 0.36 | 90 | 1.4055 | 0.3475 | | 1.4037 | 0.4 | 100 | 1.3934 | 0.32 | | 1.3201 | 0.44 | 110 | 1.4511 | 0.4125 | | 1.3977 | 0.48 | 120 | 1.2251 | 0.44 | | 1.1444 | 0.52 | 130 | 1.1517 | 0.5175 | | 1.1627 | 0.56 | 140 | 1.1211 | 0.5225 | | 1.21 | 0.6 | 150 | 1.1336 | 0.53 | | 1.2211 | 0.64 | 160 | 1.4186 | 0.4 | | 1.2985 | 0.68 | 170 | 1.1251 | 0.4725 | | 1.1856 | 0.72 | 180 | 1.1138 | 0.5075 | | 1.1027 | 0.76 | 190 | 1.0810 | 0.5075 | | 1.0998 | 0.8 | 200 | 1.1034 | 0.5225 | | 1.2546 | 0.84 | 210 | 1.1205 | 0.4925 | | 1.0265 | 0.88 | 220 | 1.1996 | 0.4925 | | 1.0898 | 0.92 | 230 | 1.1002 | 0.515 | | 1.19 | 0.96 | 240 | 1.0805 | 0.4925 | | 1.1456 | 1.0 | 250 | 1.0509 | 0.525 | | 0.9265 | 1.04 | 260 | 1.1092 | 0.51 | | 0.8554 | 1.08 | 270 | 1.0098 | 0.5325 | | 0.8695 | 1.12 | 280 | 1.0991 | 0.4975 | | 0.8505 | 1.16 | 290 | 1.0827 | 0.5075 | | 0.8892 | 1.2 | 300 | 1.1195 | 0.52 | | 0.8982 | 1.24 | 310 | 1.0691 | 0.51 | | 0.9301 | 1.28 | 320 | 1.0236 | 0.545 | | 1.052 | 1.32 | 330 | 1.0296 | 0.535 | | 0.8072 | 1.3600 | 340 | 1.0227 | 0.55 | | 0.8822 | 1.4 | 350 | 1.0494 | 0.53 | | 1.1561 | 1.44 | 360 | 1.2036 | 0.4925 | | 0.9526 | 1.48 | 370 | 1.0443 | 0.56 | | 0.9916 | 1.52 | 380 | 1.0378 | 0.555 | | 1.0388 | 1.56 | 390 | 1.0920 | 0.5375 | | 0.9326 | 1.6 | 400 | 1.0510 | 0.5375 | | 0.8453 | 1.6400 | 410 | 1.1247 | 0.5025 | | 1.03 | 1.6800 | 420 | 1.0281 | 0.565 | | 0.971 | 1.72 | 430 | 1.0322 | 0.54 | | 0.941 | 1.76 | 440 | 0.9858 | 0.565 | | 0.8615 | 1.8 | 450 | 0.9793 | 0.555 | | 0.8815 | 1.8400 | 460 | 0.9778 | 0.56 | | 0.7658 | 1.88 | 470 | 0.9760 | 0.56 | | 1.0073 | 1.92 | 480 | 1.0747 | 0.5175 | | 0.8929 | 1.96 | 490 | 0.9910 | 0.565 | | 0.9089 | 2.0 | 500 | 1.0512 | 0.535 | | 0.5102 | 2.04 | 510 | 1.0545 | 0.555 | | 0.6748 | 2.08 | 520 | 1.1621 | 0.5175 | | 0.5222 | 2.12 | 530 | 1.1038 | 0.5575 | | 0.7978 | 2.16 | 540 | 1.1728 | 0.53 | | 0.6749 | 2.2 | 550 | 1.1029 | 0.5475 | | 0.6621 | 2.24 | 560 | 1.0977 | 0.5425 | | 0.6808 | 2.2800 | 570 | 1.1776 | 0.545 | | 0.5728 | 2.32 | 580 | 1.1747 | 0.5325 | | 0.75 | 2.36 | 590 | 1.1707 | 
0.5275 | | 0.6622 | 2.4 | 600 | 1.1082 | 0.555 | | 0.6008 | 2.44 | 610 | 1.0922 | 0.57 | | 0.6491 | 2.48 | 620 | 1.1375 | 0.545 | | 0.5876 | 2.52 | 630 | 1.0614 | 0.5675 | | 0.5326 | 2.56 | 640 | 1.0460 | 0.58 | | 0.4901 | 2.6 | 650 | 1.0864 | 0.58 | | 0.6151 | 2.64 | 660 | 1.1919 | 0.58 | | 0.6478 | 2.68 | 670 | 1.1301 | 0.5575 | | 0.4841 | 2.7200 | 680 | 1.1451 | 0.58 | | 0.6365 | 2.76 | 690 | 1.0701 | 0.575 | | 0.5284 | 2.8 | 700 | 1.1674 | 0.5325 | | 0.6506 | 2.84 | 710 | 1.1016 | 0.55 | | 0.6446 | 2.88 | 720 | 1.1340 | 0.57 | | 0.5193 | 2.92 | 730 | 1.1692 | 0.525 | | 0.6129 | 2.96 | 740 | 1.1717 | 0.5325 | | 0.6013 | 3.0 | 750 | 1.1374 | 0.55 | | 0.3392 | 3.04 | 760 | 1.2702 | 0.515 | | 0.3188 | 3.08 | 770 | 1.2584 | 0.515 | | 0.3272 | 3.12 | 780 | 1.3520 | 0.5225 | | 0.341 | 3.16 | 790 | 1.2752 | 0.5575 | | 0.3826 | 3.2 | 800 | 1.3126 | 0.55 | | 0.3062 | 3.24 | 810 | 1.4909 | 0.52 | | 0.2657 | 3.2800 | 820 | 1.3804 | 0.5575 | | 0.4609 | 3.32 | 830 | 1.3712 | 0.5625 | | 0.3388 | 3.36 | 840 | 1.4701 | 0.5275 | | 0.3007 | 3.4 | 850 | 1.3373 | 0.57 | | 0.2732 | 3.44 | 860 | 1.3699 | 0.575 | | 0.4551 | 3.48 | 870 | 1.3874 | 0.555 | | 0.3048 | 3.52 | 880 | 1.4913 | 0.5625 | | 0.4104 | 3.56 | 890 | 1.4586 | 0.565 | | 0.2633 | 3.6 | 900 | 1.4353 | 0.565 | | 0.4435 | 3.64 | 910 | 1.5246 | 0.555 | | 0.282 | 3.68 | 920 | 1.6866 | 0.5275 | | 0.5918 | 3.7200 | 930 | 1.5193 | 0.5525 | | 0.315 | 3.76 | 940 | 1.4276 | 0.565 | | 0.1276 | 3.8 | 950 | 1.4411 | 0.5625 | | 0.3389 | 3.84 | 960 | 1.5420 | 0.5625 | | 0.3248 | 3.88 | 970 | 1.4492 | 0.575 | | 0.3051 | 3.92 | 980 | 1.4321 | 0.5925 | | 0.3363 | 3.96 | 990 | 1.4374 | 0.5825 | | 0.4602 | 4.0 | 1000 | 1.4581 | 0.57 | | 0.1582 | 4.04 | 1010 | 1.4434 | 0.5675 | | 0.2344 | 4.08 | 1020 | 1.4551 | 0.5975 | | 0.2646 | 4.12 | 1030 | 1.4999 | 0.59 | | 0.1948 | 4.16 | 1040 | 1.5550 | 0.5625 | | 0.3058 | 4.2 | 1050 | 1.5955 | 0.5775 | | 0.1569 | 4.24 | 1060 | 1.5721 | 0.575 | | 0.1777 | 4.28 | 1070 | 1.6241 | 0.56 | | 0.1256 | 4.32 | 1080 | 1.5711 | 0.575 | | 0.2467 | 4.36 | 1090 | 1.5735 | 0.59 | | 0.1964 | 4.4 | 1100 | 1.5924 | 0.585 | | 0.0578 | 4.44 | 1110 | 1.6353 | 0.585 | | 0.1358 | 4.48 | 1120 | 1.6710 | 0.5775 | | 0.174 | 4.52 | 1130 | 1.6733 | 0.5725 | | 0.2022 | 4.5600 | 1140 | 1.6658 | 0.585 | | 0.028 | 4.6 | 1150 | 1.6708 | 0.585 | | 0.1222 | 4.64 | 1160 | 1.6989 | 0.5875 | | 0.2295 | 4.68 | 1170 | 1.7131 | 0.5825 | | 0.374 | 4.72 | 1180 | 1.7197 | 0.5725 | | 0.1342 | 4.76 | 1190 | 1.7237 | 0.575 | | 0.079 | 4.8 | 1200 | 1.7267 | 0.58 | | 0.154 | 4.84 | 1210 | 1.7204 | 0.585 | | 0.0403 | 4.88 | 1220 | 1.7183 | 0.58 | | 0.1964 | 4.92 | 1230 | 1.7253 | 0.5775 | | 0.1297 | 4.96 | 1240 | 1.7252 | 0.5775 | | 0.0834 | 5.0 | 1250 | 1.7264 | 0.58 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
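Since the card above does not document the training data or the label set, the sketch below only shows how to query the classifier; the input sentence is illustrative and the returned labels (e.g. `LABEL_0`) come straight from the model config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AleRothermel/mi-1.2-model")

# Illustrative input; the card does not say which domain the model was trained on.
print(classifier("Replace this with a sentence from the target task."))
```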
Noursene/whisper-small-2000
Noursene
2024-05-20T03:18:45Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-20T02:40:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danhergir/platzi
danhergir
2024-05-20T03:03:13Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:AI-Lab-Makerere/beans", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-20T04:26:50Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - AI-Lab-Makerere/beans metrics: - accuracy base_model: google/vit-base-patch16-224-in21k model-index: - name: platzi results: - task: type: image-classification name: Image Classification dataset: name: beans type: beans config: default split: validation args: default metrics: - type: accuracy value: 0.9924812030075187 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0317 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.136 | 3.85 | 500 | 0.0317 | 0.9925 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
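A short inference sketch for this beans classifier follows; the file name is a placeholder, and the predicted labels are the three classes of the beans dataset (healthy, angular leaf spot, bean rust).

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="danhergir/platzi")

# "bean_leaf.jpg" is a placeholder path; point it at any bean-leaf photo.
image = Image.open("bean_leaf.jpg")
print(classifier(image))
```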
DucPhanBa/Vietnamese_Llama2
DucPhanBa
2024-05-20T03:01:21Z
0
0
peft
[ "peft", "region:us" ]
null
2024-05-20T02:57:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
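The adapter can be loaded on top of a base model quantized with the exact `bitsandbytes` settings listed above; the sketch below reconstructs that config. The base checkpoint is not named in this card, so `meta-llama/Llama-2-7b-hf` is only an assumption suggested by the repository name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization with float16 compute and no double quantization, matching the config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base checkpoint; replace with the model the adapter was actually trained from.
base_id = "meta-llama/Llama-2-7b-hf"
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base, "DucPhanBa/Vietnamese_Llama2")
```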
tanganke/gpt2_sst2
tanganke
2024-05-20T02:47:39Z
215
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "dataset:nyu-mll/glue", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T02:17:28Z
--- datasets: - nyu-mll/glue metrics: - accuracy base_model: - openai-community/gpt2 ---
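A minimal usage sketch for this GPT-2 classifier fine-tuned on GLUE SST-2; the example sentence is illustrative and the label names are whatever the model config defines.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tanganke/gpt2_sst2")

# SST-2 is binary sentiment classification; this input is illustrative.
print(classifier("A thoroughly enjoyable and well-acted film."))
```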
razasju/aissmeditext
razasju
2024-05-20T02:44:50Z
77
0
transformers
[ "transformers", "safetensors", "blip-2", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-11T07:24:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Noursene/whisper-small-5000
Noursene
2024-05-20T02:40:08Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-04-18T08:16:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
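The "How to Get Started with the Model" section above is still a placeholder. Judging only from the repository tags (`whisper`, `automatic-speech-recognition`), a minimal sketch might look like this; the audio file name is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Noursene/whisper-small-5000")

# "sample.wav" is a placeholder; pass any audio file (or a NumPy array of samples).
print(asr("sample.wav")["text"])
```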
foxxxx2/adnlp-bertqa-model
foxxxx2
2024-05-20T02:38:33Z
139
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2024-05-20T02:38:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
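The "How to Get Started with the Model" section above is still a placeholder. Judging only from the repository tags (`bert`, `question-answering`), a minimal extractive-QA sketch might look like this; the question and context are illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="foxxxx2/adnlp-bertqa-model")

result = qa(
    question="Who wrote the report?",
    context="The report was written by the audit team in 2023.",
)
print(result["answer"], result["score"])
```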
OwOpeepeepoopoo/LittleJerry2
OwOpeepeepoopoo
2024-05-20T02:34:28Z
7
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T11:12:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PengceWang/LLAMA2-Chinese-huma_emotion
PengceWang
2024-05-20T02:25:10Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:FlagAlpha/Llama2-Chinese-7b-Chat", "base_model:adapter:FlagAlpha/Llama2-Chinese-7b-Chat", "region:us" ]
null
2024-05-20T02:21:34Z
--- library_name: peft base_model: FlagAlpha/Llama2-Chinese-7b-Chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
animaRegem/gemma-2b-malayalam-model-vllm-4bit
animaRegem
2024-05-20T02:24:53Z
77
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:quantized:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-07T18:25:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sorour/cls_sentiment_phi3_v1
Sorour
2024-05-20T02:22:20Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-20T01:45:59Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/Phi-3-mini-4k-instruct datasets: - generator model-index: - name: cls_sentiment_phi3_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cls_sentiment_phi3_v1 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.7122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9066 | 0.2083 | 50 | 0.9011 | | 0.854 | 0.4167 | 100 | 0.8419 | | 0.787 | 0.625 | 150 | 0.8062 | | 0.7476 | 0.8333 | 200 | 0.7764 | | 0.7141 | 1.0417 | 250 | 0.7636 | | 0.6989 | 1.25 | 300 | 0.7528 | | 0.6482 | 1.4583 | 350 | 0.7397 | | 0.6537 | 1.6667 | 400 | 0.7207 | | 0.6526 | 1.875 | 450 | 0.7122 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
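As a quick-start sketch (not part of the original card): this repository is a PEFT adapter on top of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), so it can presumably be loaded by attaching the adapter to the base model. The instruction format used during fine-tuning is not documented here, so the example prompt below is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "Sorour/cls_sentiment_phi3_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)

# Attach the LoRA adapter weights from this repository to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Placeholder prompt: the exact classification prompt used in training is not documented
prompt = "Classify the sentiment of the following sentence as positive, negative or neutral: I love this!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```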
tanganke/gpt2_mrpc
tanganke
2024-05-20T02:15:46Z
228
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "dataset:nyu-mll/glue", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T02:01:40Z
--- datasets: - nyu-mll/glue metrics: - accuracy base_model: - openai-community/gpt2 ---
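The card itself carries no usage instructions, so here is a hedged sketch of how a GPT-2 classifier fine-tuned on GLUE MRPC (a sentence-pair paraphrase task) could be called through the text-classification pipeline. The label names it returns (e.g., `LABEL_0`/`LABEL_1` versus `not_equivalent`/`equivalent`) are not documented in the card.

```python
from transformers import pipeline

# MRPC is a sentence-pair task, so the two sentences are passed as a pair
classifier = pipeline("text-classification", model="tanganke/gpt2_mrpc")

result = classifier({
    "text": "The company reported higher quarterly revenue.",
    "text_pair": "Quarterly revenue rose, the company reported.",
})
print(result)
```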
Moon-Ahn/mistral_edit
Moon-Ahn
2024-05-20T02:12:39Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "finetuned", "conversational", "en", "ko", "arxiv:2308.06502", "arxiv:2308.06259", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T01:48:34Z
--- language: - en - ko pipeline_tag: text-generation tags: - finetuned --- # komt : korean multi task instruction tuning model ![multi task instruction tuning.jpg](https://github.com/davidkim205/komt/assets/16680469/c7f6ade7-247e-4b62-a94f-47e19abea68e) Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities. However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively. This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs). ## Model Details * **Model Developers** : davidkim(changyeon kim) * **Repository** : https://github.com/davidkim205/komt * **Model Architecture** : The komt-mistral-7b-v1 is is a fine-tuned version of the Mistral-7B-Instruct-v0.1. ## Dataset korean multi-task instruction dataset ## Hardware and Software - nvidia driver : 535.54.03 - CUDA Version: 12.2 ## Training Refer https://github.com/davidkim205/komt ## Prompt template: Mistral ``` <s>[INST] {prompt} [/INST]</s> ``` ## Usage ``` from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import TextStreamer, GenerationConfig model_name='davidkim205/komt-mistral-7b-v1' model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) streamer = TextStreamer(tokenizer) def gen(x): generation_config = GenerationConfig( temperature=0.8, top_p=0.8, top_k=100, max_new_tokens=1024, early_stopping=True, do_sample=True, ) q = f"[INST]{x} [/INST]" gened = model.generate( **tokenizer( q, return_tensors='pt', return_token_type_ids=False ).to('cuda'), generation_config=generation_config, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, streamer=streamer, ) result_str = tokenizer.decode(gened[0]) start_tag = f"\n\n### Response: " start_index = result_str.find(start_tag) if start_index != -1: result_str = result_str[start_index + len(start_tag):].strip() return result_str print(gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘')) ``` output ``` <s> [INST]제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘 [/INST]1박 2일 여행 코스 제주 여행 코스는 다음과 같습니다: 첫째 날: * 아침: 제주시 신라로 이동하여 한라산 국립공원에 도착합니다. 오른쪽 발자국 광장에서 눈에 띄는 맥주숍을 찾아 한 병의 맥주를 마시고 계단으로 올라가 한라산 정상에 도착합니다. * 점심: 정상에서 계단을 내려와 중도에 위치한 골목에서 음식을 즐깁니다. 젓갈, 곰탕, 전복 등 맛있는 제주 특산물을 맛보며 힐링을 즐깁니다. * 오후: 중도에서 남하하여 남서쪽 제주로 이동합니다. 남서쪽 제주는 섬의 자연 경관과 백사장, 아름다운 바다 등 아름다운 풍광이 있는 곳입니다. 숙소에 숙박합니다. 둘째 날: * 아침: 남서쪽 제주에서 북서쪽 제주로 이동합니다. 이 지역은 푸른 잔디 해변과 투명한 바다가 펼쳐져 있는 아름다운 풍광입니다. 숙소에서 아침을 먹고 바다로 향하여 해변에서 휴식을 취합니다. * 점심: 북서쪽 제주의 앞바다에서 시원한 바다를 보며 한 끼의 해산물을 맛보고 계절에 따라 해산물 요리를 즐깁니다. * 오후: 남서쪽 제주에서 이동하여 서쪽 제주로 이동합니다. 서쪽 제주는 절벽과 암석 해변, 수수께끼의 돌 정원 등 독특한 경관이 있는 곳입니다. 이곳에서는 서쪽 제주의 대표적인 명소인 청량사를 방문하고 서쪽 제주의 아름다운 풍광을 감상합니다. * 저녁: 서쪽 제주에서 제주 시로 이동하여 한 끼의 제주 특산물을 맛보고 도착한 제주 도심에서 저녁을 즐깁니다. * 야간: 제주 시의 도심에서 야간 활동을 즐기며 1박 2일의 여행을 마무리합니다. 이렇게 제주를 1박 2일로 혼자 여행하면 제주의 아름다운 풍광, 푸른 잔디 해변, 투명한 바다 등을 경험할 수 있습니다. ``` ## Evaluation For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. 
Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf).

| model | score | average(0~5) | percentage |
| --------------------------------------- |---------| ------------ | ---------- |
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
jrc/phi3-mini-math
jrc
2024-05-20T02:11:01Z
10
1
transformers
[ "transformers", "phi3", "text-generation", "torchtune", "minerva-math", "conversational", "custom_code", "en", "dataset:TIGER-Lab/MATH-plus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T23:57:38Z
--- license: apache-2.0 datasets: - TIGER-Lab/MATH-plus language: - en tags: - torchtune - minerva-math library_name: transformers pipeline_tag: text-generation --- # jrc/phi3-mini-math <!-- Provide a quick summary of what the model is/does. --> Math majors - who needs em? This model can answer any math questions you have. ## How to Get Started with the Model Use the code below to get started with the model. ```python # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True) ``` ## Training Details Phi3 was trained using [torchtune](https://github.com/pytorch/torchtune) and the training script + config file are located in this repository. ```bash tune run lora_finetune_distributed.py --config mini_lora.yaml ``` You can see a full Weights & Biases run [here](https://api.wandb.ai/links/jcummings/hkey76vj). ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> This model was finetuned on the following datasets: * [TIGER-Lab/MATH-plus](https://huggingface.co/datasets/TIGER-Lab/MATH-plus): An advanced math-specific dataset with 894k samples. #### Hardware * Machines: 4 x NVIDIA A100 GPUs * Max VRAM used per GPU: 29 GB * Real time: 10 hours ## Evaluation The finetuned model is evaluated on [minerva-math](https://research.google/blog/minerva-solving-quantitative-reasoning-problems-with-language-models/) using [EleutherAI Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) through torchtune. ```bash tune run eleuther_eval --config eleuther_evaluation \ checkpoint.checkpoint_dir=./lora-phi3-math \ tasks=["minerva_math"] \ batch_size=32 ``` | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |------------------------------------|-------|------|-----:|-----------|-----:|---|-----:| |minerva_math |N/A |none | 4|exact_match|0.1670|± |0.0051| | - minerva_math_algebra | 1|none | 4|exact_match|0.2502|± |0.0126| | - minerva_math_counting_and_prob | 1|none | 4|exact_match|0.1329|± |0.0156| | - minerva_math_geometry | 1|none | 4|exact_match|0.1232|± |0.0150| | - minerva_math_intermediate_algebra| 1|none | 4|exact_match|0.0576|± |0.0078| | - minerva_math_num_theory | 1|none | 4|exact_match|0.1148|± |0.0137| | - minerva_math_prealgebra | 1|none | 4|exact_match|0.3077|± |0.0156| | - minerva_math_precalc | 1|none | 4|exact_match|0.0623|± |0.0104| This shows a large improvement over the base Phi3 Mini model. ## Model Card Contact Drop me a line at @official_j3rck
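As a small addition to the loading snippet above (not from the original card), here is a hedged generation example. The exact prompt or chat format the fine-tuned model expects is not documented, so the plain-text question below is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "jrc/phi3-mini-math", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Plain-text math question; the training prompt template is not documented here
question = "What is the derivative of x^3 + 2x with respect to x?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```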
Mitsua/elan-mt-tiny-ja-en
Mitsua
2024-05-20T02:10:39Z
115
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "ja", "en", "dataset:Mitsua/wikidata-parallel-descriptions-en-ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-05-20T02:09:55Z
--- license: cc-by-sa-4.0 datasets: - Mitsua/wikidata-parallel-descriptions-en-ja language: - ja - en metrics: - bleu - chrf library_name: transformers pipeline_tag: translation --- # ElanMT This model is a tiny variant of [**ElanMT-BT-ja-en**](https://huggingface.co/Mitsua/elan-mt-bt-ja-en) and is trained from scratch exclusively on openly licensed data and Wikipedia back translated data using [**ElanMT-base-en-ja**](https://huggingface.co/Mitsua/elan-mt-base-en-ja). ## Model Details This is a translation model based on [Marian MT](https://marian-nmt.github.io/) 4-layer encoder-decoder transformer architecture with sentencepiece tokenizer. - **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine - **Model type**: Translation - **Source Language**: Japanese - **Target Language**: English - **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) ## Usage [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#usage) ## Training Data [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#training-data) ## Training Procedure [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#training-procedure) ## Evaluation [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#evaluation) ## Disclaimer The translated result may be very incorrect, harmful or biased. The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model.
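For convenience, a minimal usage sketch for this tiny variant, mirroring the pipeline example documented in the ElanMT-BT-ja-en card linked above:

```python
from transformers import pipeline

# Japanese -> English translation with the tiny variant
translator = pipeline("translation", model="Mitsua/elan-mt-tiny-ja-en")
print(translator("こんにちは。私はAIです。"))
```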
Mitsua/elan-mt-base-en-ja
Mitsua
2024-05-20T02:05:40Z
120
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "ja", "en", "dataset:Mitsua/wikidata-parallel-descriptions-en-ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-05-20T02:04:52Z
--- license: cc-by-sa-4.0 datasets: - Mitsua/wikidata-parallel-descriptions-en-ja language: - ja - en metrics: - bleu - chrf library_name: transformers pipeline_tag: translation --- # ElanMT This model is a pretrained checkpoint and is suitable for fine-tuning on a large dataset. For general use cases, using [**ElanMT-BT-en-ja**](https://huggingface.co/Mitsua/elan-mt-bt-en-ja) is strongly recommended. ## Model Details This is a translation model based on [Marian MT](https://marian-nmt.github.io/) 6-layer encoder-decoder transformer architecture with sentencepiece tokenizer. - **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine - **Model type**: Translation - **Source Language**: English - **Target Language**: Japanese - **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) ## Usage [See here.](https://huggingface.co/Mitsua/elan-mt-bt-en-ja#usage) ## Training Data [See here.](https://huggingface.co/Mitsua/elan-mt-bt-en-ja#training-data) ## Training Procedure [See here.](https://huggingface.co/Mitsua/elan-mt-bt-en-ja#training-procedure) ## Evaluation [See here.](https://huggingface.co/Mitsua/elan-mt-bt-en-ja#evaluation) ## Disclaimer The translated result may be very incorrect, harmful or biased. The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model.
Mitsua/elan-mt-base-ja-en
Mitsua
2024-05-20T02:03:41Z
113
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "ja", "en", "dataset:Mitsua/wikidata-parallel-descriptions-en-ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-05-20T02:00:59Z
--- license: cc-by-sa-4.0 datasets: - Mitsua/wikidata-parallel-descriptions-en-ja language: - ja - en metrics: - bleu - chrf library_name: transformers pipeline_tag: translation --- # ElanMT This model is a pretrained checkpoint and is suitable for fine-tuning on a large dataset. For general use cases, using [**ElanMT-BT-ja-en**](https://huggingface.co/Mitsua/elan-mt-bt-ja-en) is strongly recommended. ## Model Details This is a translation model based on [Marian MT](https://marian-nmt.github.io/) 6-layer encoder-decoder transformer architecture with sentencepiece tokenizer. - **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine - **Model type**: Translation - **Source Language**: Japanese - **Target Language**: English - **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) ## Usage [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#usage) ## Training Data [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#training-data) ## Training Procedure [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#training-procedure) ## Evaluation [See here.](https://huggingface.co/Mitsua/elan-mt-bt-ja-en#evaluation) ## Disclaimer The translated result may be very incorrect, harmful or biased. The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model.
animaRegem/gemma-2b-malayalam-gguf
animaRegem
2024-05-20T01:56:59Z
8
1
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:quantized:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-07T18:21:13Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/dreamgen_-_WizardLM-2-7B-4bits
RichardErkhov
2024-05-20T01:56:23Z
78
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-20T01:53:09Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) WizardLM-2-7B - bnb 4bits - Model creator: https://huggingface.co/dreamgen/ - Original model: https://huggingface.co/dreamgen/WizardLM-2-7B/ Original model description: --- license: apache-2.0 --- <p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News 🔥🔥🔥 [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 7B * **Developed by**: WizardLM@Microsoft AI * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * **Parameters**: 7B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
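A minimal transformers sketch (not from the original card) using the Vicuna-style prompt shown above; it assumes the 4-bit bitsandbytes quantization config stored in this repository is picked up automatically when `bitsandbytes` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/dreamgen_-_WizardLM-2-7B-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna-style multi-turn prompt, as described in the usage section above
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Who are you? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```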
animaRegem/gemma-2b-malayalam-model-adaptors
animaRegem
2024-05-20T01:54:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-07T18:14:06Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Mitsua/elan-mt-bt-en-ja
Mitsua
2024-05-20T01:53:38Z
626
8
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "ja", "en", "dataset:Mitsua/wikidata-parallel-descriptions-en-ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-05-20T01:51:18Z
--- license: cc-by-sa-4.0 datasets: - Mitsua/wikidata-parallel-descriptions-en-ja language: - ja - en metrics: - bleu - chrf library_name: transformers pipeline_tag: translation --- # ElanMT [**ElanMT-BT-en-ja**](https://huggingface.co/Mitsua/elan-mt-bt-en-ja) is a English to Japanese translation model developed by [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine. - [**ElanMT-base-en-ja**](https://huggingface.co/Mitsua/elan-mt-base-en-ja) and [**ElanMT-base-ja-en**](https://huggingface.co/Mitsua/elan-mt-base-ja-en) are trained from scratch, exclusively on openly licensed corpora such as CC0, CC BY and CC BY-SA. - This model is a fine-tuned checkpoint of **ElanMT-base-en-ja** and is trained exclusively on openly licensed data and Wikipedia back translated data using **ElanMT-base-ja-en**. - Web crawled or other machine translated corpora are **not** used during the entire training procedure for the **ElanMT** models. Despite the relatively low resource training, thanks to back-translation and [a newly built CC0 corpus](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja), the model achieved comparable performance to the currently available open translation models. ## Model Details This is a translation model based on [Marian MT](https://marian-nmt.github.io/) 6-layer encoder-decoder transformer architecture with sentencepiece tokenizer. - **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine - **Model type**: Translation - **Source Language**: English - **Target Language**: Japanese - **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) ## Usage 1. Install the python packages `pip install transformers accelerate sentencepiece` * This model is verified on `transformers==4.40.2` 2. Run ```python from transformers import pipeline translator = pipeline('translation', model='Mitsua/elan-mt-bt-en-ja') translator('Hello. I am an AI.') ``` 3. For longer multiple sentences, using [pySBD](https://github.com/nipunsadvilkar/pySBD) is recommended. `pip install transformers accelerate sentencepiece pysbd` ```python import pysbd seg_en = pysbd.Segmenter(language="en", clean=False) txt = 'Hello. I am an AI. How are you doing?' print(translator(seg_en.segment(txt))) ``` This idea is from [FuguMT](https://huggingface.co/staka/fugumt-en-ja) repo. ## Training Data We heavily referred [FuguMT author's blog post](https://staka.jp/wordpress/?p=413) for dataset collection. - [Mitsua/wikidata-parallel-descriptions-en-ja](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja) (CC0 1.0) - We newly built this 1.5M lines wikidata parallel corpus to augment the training data. This greatly improved the vocabulary on a word basis. - [The Kyoto Free Translation Task (KFTT)](https://www.phontron.com/kftt/) (CC BY-SA 3.0) - Graham Neubig, "The Kyoto Free Translation Task," http://www.phontron.com/kftt, 2011. - [Tatoeba](https://tatoeba.org/en/downloads) (CC BY 2.0 FR / CC0 1.0) - https://tatoeba.org/ - [wikipedia-interlanguage-titles](https://github.com/bhaddow/wikipedia-interlanguage-titles) (The MIT License / CC BY-SA 4.0) - We built parallel titles based on 2024-05-06 wikipedia dump. 
- [WikiMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/WikiMatrix) (CC BY-SA 4.0) - Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Francisco Guzmán, "WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia" - [MDN Web Docs](https://github.com/mdn/translated-content) (The MIT / CC0 1.0 / CC BY-SA 2.5) - https://github.com/mdn/translated-content - [Wikimedia contenttranslation dump](https://dumps.wikimedia.org/other/contenttranslation/) (CC BY-SA 4.0) - 2024-5-10 dump is used. *Even if the dataset itself is CC-licensed, we did not use it if the corpus contained in the dataset is based on web crawling, is based on unauthorized use of copyrighted works, or is based on the machine translation output of other translation models. ## Training Procedure We heavily referred "[Beating Edinburgh's WMT2017 system for en-de with Marian's Transformer model](https://github.com/marian-nmt/marian-examples/tree/master/wmt2017-transformer)" for training process and hyperparameter tuning. 1. Trains a sentencepiece tokenizer 32k vocab on 4M lines openly licensed corpus. 2. Trains `ja-en` back-translation model on 4M lines openly licensed corpus for 6 epochs. = **ElanMT-base-ja-en** 3. Trains `en-ja` base translation model on 4M lines openly licensed corpus for 6 epochs. = **ElanMT-base-en-ja** 4. Translates 20M lines `ja` Wikipedia to `en` using back-translation model. 5. Trains 4 `en-ja` models, which is finetuned from **ElanMT-base-en-ja** checkpoint, on 24M lines training data augmented with back-translated data for 6 epochs. 6. Merges 4 trained models that produces the best validation score on FLORES+ dev split. 7. Finetunes the merged model on 1M lines high quality corpus subset for 5 epochs. ## Evaluation ### Dataset - [FLORES+](https://github.com/openlanguagedata/flores) (CC BY-SA 4.0) devtest split is used for evaluation. - [NTREX](https://github.com/MicrosoftTranslator/NTREX) (CC BY-SA 4.0) ### Result | **Model** | **Params** | **FLORES+ BLEU** | **FLORES+ chrf** | **NTREX BLEU** | **NTREX chrf** | |:---|---:|---:|---:|---:|---:| | [**ElanMT-BT**](https://huggingface.co/Mitsua/elan-mt-bt-en-ja) | 61M | 29.96 | **38.43** | **25.63** | **35.41**| | [**ElanMT-base**](https://huggingface.co/Mitsua/elan-mt-base-en-ja) **w/o back-translation** | 61M | 26.55 | 35.28 | 23.04 | 32.94| | [**ElanMT-tiny**](https://huggingface.co/Mitsua/elan-mt-tiny-en-ja) | 15M | 25.93 | 34.69 | 22.78 | 33.00| | [staka/fugumt-en-ja](https://huggingface.co/staka/fugumt-en-ja) (*1) | 61M | **30.89** | 38.38 | 24.74 | 34.23| | [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) | 610M | 26.31 | 34.37 | 23.35 | 32.66| | [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) | 615M | 17.09 | 27.32 | 14.92 | 26.26| | [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) | 3B | 20.04 | 30.33 | 17.07 | 28.46| | [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) | 3B | 24.62 | 33.89 | 23.64 | 33.48| | [google/madlad400-7b-mt](https://huggingface.co/google/madlad400-7b-mt) | 7B | 25.57 | 34.59 | 24.60 | 34.43| - *1 tested on `transformers==4.29.2` and `num_beams=4` - *2 BLEU score is calculated by `sacreBLEU` with `tokenize=ja-mecab` ## Disclaimer - The translated result may be very incorrect, harmful or biased. 
The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model. - 免責事項:翻訳結果は不正確で、有害であったりバイアスがかかっている可能性があります。本モデルは比較的小規模でライセンスされたコーパスのみで達成可能な性能を調査するために開発されたモデルであり、翻訳の正確性が必要なユースケースでの使用には適していません。絵藍ミツアプロジェクト及び株式会社アブストラクトエンジンはCC BY-SA 4.0ライセンス第5条に基づき、本モデルの使用によって生じた直接的または間接的な損失に対して、一切の責任を負いません。
ruidanwang/minima
ruidanwang
2024-05-20T01:50:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-20T01:50:46Z
--- license: apache-2.0 ---
ukung/Nusantara-1.8b-Indo-Chat-GGUF
ukung
2024-05-20T01:49:49Z
3
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-19T17:52:03Z
--- license: apache-2.0 ---
vr4sigma/gptchatbot
vr4sigma
2024-05-20T01:43:13Z
0
0
adapter-transformers
[ "adapter-transformers", "en", "dataset:HuggingFaceFW/fineweb", "dataset:PleIAs/YouTube-Commons", "arxiv:1910.09700", "license:wtfpl", "region:us" ]
null
2024-05-20T01:39:27Z
--- license: wtfpl datasets: - HuggingFaceFW/fineweb - PleIAs/YouTube-Commons language: - en metrics: - accuracy library_name: adapter-transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PQlet/textual-inversion-v2-ablation-vec5-img9
PQlet
2024-05-20T01:31:44Z
17
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-20T01:31:42Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual Inversion training - PQlet/textual-inversion-v2-ablation-vec5-img9 The generated images are below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
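A hedged sketch of how the learned embedding might be loaded with diffusers; the placeholder token and output file name are assumptions, since the card does not document them.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual-inversion embedding from this repository
pipe.load_textual_inversion("PQlet/textual-inversion-v2-ablation-vec5-img9")

# "<concept>" is a placeholder; the actual learned token is not documented in this card
image = pipe("A photo of <concept> in a forest").images[0]
image.save("example.png")
```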
JosineyJr/generate-conventional-commit-messages
JosineyJr
2024-05-20T01:31:35Z
0
0
unsloth
[ "unsloth", "safetensors", "code", "text2text-generation", "en", "base_model:meta-llama/Meta-Llama-Guard-2-8B", "base_model:finetune:meta-llama/Meta-Llama-Guard-2-8B", "license:apache-2.0", "region:us" ]
text2text-generation
2024-05-20T01:15:15Z
--- license: apache-2.0 language: - en library_name: unsloth tags: - code pipeline_tag: text2text-generation base_model: meta-llama/Meta-Llama-Guard-2-8B --- # About the project CommitWizard is a project that uses pre-trained language models to help automate the generation of commit messages based on code changes. It employs 4-bit quantization to optimize memory usage while maintaining model efficiency and accuracy.
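To illustrate the 4-bit quantization mentioned above, a hedged loading sketch with bitsandbytes; the input format the model expects (a diff, a change summary, etc.) is not documented, so the example input is hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "JosineyJr/generate-conventional-commit-messages"

# 4-bit NF4 quantization to keep memory usage low
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# Hypothetical input: a short description of the code change
diff_summary = "Add retry logic to the HTTP client and update tests"
inputs = tokenizer(diff_summary, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```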
TinyPixel/openelm-ct
TinyPixel
2024-05-20T01:29:49Z
135
0
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-05-20T01:29:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ho97/llama3-8b-apiq-w2a16g64
Ho97
2024-05-20T01:25:57Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T00:13:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hiba2/results_t5_wiki
hiba2
2024-05-20T01:21:52Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar", "base_model:finetune:ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-20T01:21:20Z
---
license: apache-2.0
base_model: ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results_t5_wiki
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# results_t5_wiki

This model is a fine-tuned version of [ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar](https://huggingface.co/ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Rouge1: 0.1188
- Rouge2: 0.0194
- Rougel: 0.1188
- Rougelsum: 0.1186
- Gen Len: 19.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8768 | 0.2143 | 500 | 0.0228 | 0.1148 | 0.0128 | 0.1148 | 0.1147 | 19.0 |
| 0.0437 | 0.4286 | 1000 | 0.0111 | 0.1164 | 0.0154 | 0.1168 | 0.1165 | 19.0 |
| 0.0436 | 0.6429 | 1500 | 0.0060 | 0.1168 | 0.0163 | 0.1171 | 0.1169 | 19.0 |
| 0.0212 | 0.8573 | 2000 | 0.0052 | 0.117 | 0.0165 | 0.1173 | 0.117 | 19.0 |
| 0.0161 | 1.0716 | 2500 | 0.0018 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.011 | 1.2859 | 3000 | 0.0018 | 0.1188 | 0.0193 | 0.1188 | 0.1186 | 19.0 |
| 0.0094 | 1.5002 | 3500 | 0.0014 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0107 | 1.7145 | 4000 | 0.0007 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0069 | 1.9288 | 4500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.007 | 2.1432 | 5000 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0064 | 2.3575 | 5500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0062 | 2.5718 | 6000 | 0.0015 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0042 | 2.7861 | 6500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0043 | 3.0004 | 7000 | 0.0004 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0042 | 3.2147 | 7500 | 0.0012 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0047 | 3.4291 | 8000 | 0.0010 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0043 | 3.6434 | 8500 | 0.0008 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0024 | 3.8577 | 9000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0026 | 4.0720 | 9500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0029 | 4.2863 | 10000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0045 | 4.5006 | 10500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0024 | 4.7150 | 11000 | 0.0001 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0018 | 4.9293 | 11500 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.002 | 5.1436 | 12000 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0022 | 5.3579 | 12500 | 0.0001 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0017 | 5.5722 | 13000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0014 | 5.7865 | 13500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0055 | 6.0009 | 14000 | 0.0012 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 16.3147 |
| 0.0127 | 6.2152 | 14500 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0012 | 6.4295 | 15000 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
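As a usage illustration (not part of the original card), the sketch below loads this checkpoint for Arabic summarization with the `transformers` pipeline; the input text and generation settings are assumptions, not values from the training run.

```python
from transformers import pipeline

# Minimal usage sketch: a summarization pipeline around the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="hiba2/results_t5_wiki")

arabic_article = "..."  # placeholder: an Arabic article to summarize
# max_length=20 is an assumption loosely based on the reported Gen Len of 19.
result = summarizer(arabic_article, max_length=20, truncation=True)
print(result[0]["summary_text"])
```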
crisistransformers/CT-M3-OneLook
crisistransformers
2024-05-20T01:17:07Z
161
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2403.16614", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-10T03:42:07Z
# CrisisTransformers CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details. CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder. ## Uses CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling. ## Models and naming conventions *CT-M1* models were trained from scratch up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa's weights and *CT-M3* models were initialized with pre-trained BERTweet's weights and both trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder. 
| pre-trained model | source | |--|--| |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)| |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)| |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)| |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)| |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)| |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)| |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)| |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)| | sentence encoder | source | |--|--| |CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)| |CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)| |CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)| Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese. ## Citation If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper: ``` @article{lamsal2023crisistransformers, title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, journal={Knowledge-Based Systems}, pages={111916}, year={2024}, publisher={Elsevier} } ``` If you use the multi-lingual sentence encoders, please cite the following paper: ``` @article{lamsal2024semantically, title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, year={2024}, eprint={2403.16614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
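The card above notes that the pre-trained checkpoints are meant to be fine-tuned like BERT or RoBERTa. As a hedged illustration (not from the original card), the sketch below attaches a fresh classification head to this checkpoint with `transformers`; the label count and example tweet are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "crisistransformers/CT-M3-OneLook"

# Load the pre-trained encoder with a randomly initialised classification head.
# num_labels=2 is an illustrative assumption (e.g. informative vs. not informative);
# the head still needs to be fine-tuned on a labelled crisis dataset.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer(
    "Power lines are down across the east side of the city",
    return_tensors="pt",
    truncation=True,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) -- untrained head, shown for shape only
```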
crisistransformers/CT-M2-BestLoss
crisistransformers
2024-05-20T01:16:54Z
161
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2403.16614", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-10T03:36:11Z
# CrisisTransformers CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details. CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder. ## Uses CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling. ## Models and naming conventions *CT-M1* models were trained from scratch up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa's weights and *CT-M3* models were initialized with pre-trained BERTweet's weights and both trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder. 
| pre-trained model | source | |--|--| |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)| |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)| |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)| |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)| |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)| |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)| |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)| |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)| | sentence encoder | source | |--|--| |CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)| |CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)| |CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)| Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese. ## Citation If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper: ``` @article{lamsal2023crisistransformers, title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, journal={Knowledge-Based Systems}, pages={111916}, year={2024}, publisher={Elsevier} } ``` If you use the multi-lingual sentence encoders, please cite the following paper: ``` @article{lamsal2024semantically, title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, year={2024}, eprint={2403.16614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
crisistransformers/CT-M1-Complete
crisistransformers
2024-05-20T01:16:41Z
165
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2403.16614", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-08T22:32:23Z
# CrisisTransformers CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details. CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder. ## Uses CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling. ## Models and naming conventions *CT-M1* models were trained from scratch up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa's weights and *CT-M3* models were initialized with pre-trained BERTweet's weights and both trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder. 
| pre-trained model | source | |--|--| |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)| |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)| |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)| |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)| |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)| |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)| |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)| |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)| | sentence encoder | source | |--|--| |CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)| |CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)| |CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)| Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese. ## Citation If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper: ``` @article{lamsal2023crisistransformers, title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, journal={Knowledge-Based Systems}, pages={111916}, year={2024}, publisher={Elsevier} } ``` If you use the multi-lingual sentence encoders, please cite the following paper: ``` @article{lamsal2024semantically, title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, year={2024}, eprint={2403.16614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
crisistransformers/CT-M1-Complete-SE
crisistransformers
2024-05-20T01:16:21Z
1
1
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2403.16614", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-11T05:01:11Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # CrisisTransformers CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details. CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder. ## Uses CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling. ## Models and naming conventions *CT-M1* models were trained from scratch up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa's weights and *CT-M3* models were initialized with pre-trained BERTweet's weights and both trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder. 
| pre-trained model | source | |--|--| |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)| |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)| |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)| |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)| |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)| |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)| |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)| |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)| | sentence encoder | source | |--|--| |CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)| |CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)| |CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)| Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese. ## Citation If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper: ``` @article{lamsal2023crisistransformers, title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, journal={Knowledge-Based Systems}, pages={111916}, year={2024}, publisher={Elsevier} } ``` If you use the multi-lingual sentence encoders, please cite the following paper: ``` @article{lamsal2024semantically, title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, year={2024}, eprint={2403.16614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
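Since the card states the sentence encoders can be used out of the box like Sentence-Transformers, here is a minimal encoding sketch (not from the original card); the example tweets are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("crisistransformers/CT-M1-Complete-SE")

tweets = [
    "Flood waters are rising fast near the river bank, evacuations underway.",
    "Emergency shelters are open downtown for displaced families.",
    "Just had the best coffee of my life this morning.",
]

embeddings = encoder.encode(tweets, convert_to_tensor=True)
# Cosine similarity of the first tweet against the other two.
print(util.cos_sim(embeddings[0], embeddings[1:]))
```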
crisistransformers/CT-mBERT-SE
crisistransformers
2024-05-20T01:15:53Z
1,306
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2403.16614", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-13T23:07:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # CrisisTransformers CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details. CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder. ## Uses CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling. ## Models and naming conventions *CT-M1* models were trained from scratch up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa's weights and *CT-M3* models were initialized with pre-trained BERTweet's weights and both trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder. 
| pre-trained model | source | |--|--| |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)| |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)| |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)| |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)| |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)| |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)| |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)| |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)| | sentence encoder | source | |--|--| |CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)| |CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)| |CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)| Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese. ## Citation If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper: ``` @article{lamsal2023crisistransformers, title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, journal={Knowledge-Based Systems}, pages={111916}, year={2024}, publisher={Elsevier} } ``` If you use the multi-lingual sentence encoders, please cite the following paper: ``` @article{lamsal2024semantically, title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts}, author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera}, year={2024}, eprint={2403.16614}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
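For the multi-lingual encoders, a similar sketch (not from the original card) compares the same crisis-related statement across languages; the translations are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("crisistransformers/CT-mBERT-SE")

# The same statement in English, Spanish and French (illustrative translations).
sentences = [
    "The earthquake destroyed several buildings in the city centre.",
    "El terremoto destruyó varios edificios en el centro de la ciudad.",
    "Le tremblement de terre a détruit plusieurs bâtiments du centre-ville.",
]

embeddings = encoder.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings, embeddings))  # pairwise similarity matrix
```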
TinyPixel/openelm-adapter2
TinyPixel
2024-05-20T01:15:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-20T01:15:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sorour/cls_sentiment_mistral_v1
Sorour
2024-05-20T01:12:58Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-20T00:24:07Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - generator model-index: - name: cls_sentiment_mistral_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cls_sentiment_mistral_v1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.5972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7365 | 0.1986 | 50 | 0.7344 | | 0.6778 | 0.3972 | 100 | 0.6852 | | 0.6548 | 0.5958 | 150 | 0.6588 | | 0.6728 | 0.7944 | 200 | 0.6333 | | 0.6148 | 0.9930 | 250 | 0.6106 | | 0.43 | 1.1917 | 300 | 0.6174 | | 0.4575 | 1.3903 | 350 | 0.6081 | | 0.4225 | 1.5889 | 400 | 0.6058 | | 0.4136 | 1.7875 | 450 | 0.5976 | | 0.441 | 1.9861 | 500 | 0.5972 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
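Because this repository contains a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch with `peft` on top of the listed base model is shown below (not from the original card); the dtype, device placement and prompt are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Sorour/cls_sentiment_mistral_v1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",
)

# Attach the fine-tuned LoRA adapter from this repository to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "[INST] Classify the sentiment of: 'The service was slow but the food was great.' [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```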
WhoTookMyAmogusNickname/llama2-7b-megacode2_min100-GGML
WhoTookMyAmogusNickname
2024-05-20T01:11:13Z
0
0
null
[ "region:us" ]
null
2023-08-13T05:34:54Z
[llama2-7b-megacode2_min100](https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100) converted and quantized to GGML.\ An [added_tokens.json](https://huggingface.co/andreaskoepf/llama2-7b-oasst-baseline/blob/main/added_tokens.json) from another of the author's models had to be used, since the vocab size is an unusual 32007.
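A possible way to run the quantized files on CPU is via `ctransformers`, which can load GGML llama checkpoints; this sketch is an assumption about usage, and the quantized file name below is hypothetical and must be replaced with a file that actually exists in the repository.

```python
from ctransformers import AutoModelForCausalLM

# model_file is a hypothetical name -- substitute a .bin file present in the repo.
llm = AutoModelForCausalLM.from_pretrained(
    "WhoTookMyAmogusNickname/llama2-7b-megacode2_min100-GGML",
    model_file="llama2-7b-megacode2_min100.ggmlv3.q4_0.bin",
    model_type="llama",
)

# Prompt format is an assumption; check the base model card for the expected template.
print(llm("User: Write a haiku about autumn.\nAssistant:", max_new_tokens=64))
```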
ebowwa/mario-ascii
ebowwa
2024-05-20T01:01:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-20T01:01:49Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** ebowwa - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jsfamily/korean-small_t33
jsfamily
2024-05-20T00:59:16Z
96
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:korean_samll_dataset3", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-20T00:57:22Z
--- language: - ko license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer base_model: openai/whisper-small datasets: - korean_samll_dataset3 model-index: - name: korean-small_t33 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-small_t33 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the korean_samll_dataset3 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1656 - eval_cer: 6.6580 - eval_runtime: 2081.9667 - eval_samples_per_second: 3.128 - eval_steps_per_second: 0.391 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
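A minimal transcription sketch with the `transformers` speech-recognition pipeline (not from the original card); the audio path is a placeholder and the generation arguments are assumptions.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jsfamily/korean-small_t33")

# Placeholder path to a 16 kHz Korean speech recording.
result = asr(
    "sample_korean_audio.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```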
ytcheng/llama-3-8B-pretrain_v2
ytcheng
2024-05-20T00:56:09Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T00:52:01Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lomov/targetsandgoalsv1
lomov
2024-05-20T00:52:59Z
124
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain", "dataset:targetsandgoalsv1/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-20T00:51:53Z
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- targetsandgoalsv1/autotrain-data
---

# Model Trained Using AutoTrain

- Problem type: Text Classification

## Validation Metrics

loss: 0.22812850773334503

f1_macro: 0.928605054676046

f1_micro: 0.9313725490196079

f1_weighted: 0.9297769573887364

precision_macro: 0.9294524189261031

precision_micro: 0.9313725490196079

precision_weighted: 0.930390072030939

recall_macro: 0.93

recall_micro: 0.9313725490196079

recall_weighted: 0.9313725490196079

accuracy: 0.9313725490196079
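A minimal inference sketch with the `transformers` text-classification pipeline, mirroring the widget example above (the predicted label names depend on the AutoTrain training data).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lomov/targetsandgoalsv1")

# Same example text as the widget above.
print(classifier("I love AutoTrain"))
# e.g. [{'label': '...', 'score': ...}]
```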
EthanRhys/Ashley-LA
EthanRhys
2024-05-20T00:52:06Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-05-20T00:49:27Z
--- license: openrail++ ---
souvik0306/test_quant_merge_facebook_opt
souvik0306
2024-05-20T00:51:06Z
84
1
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-05-20T00:50:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OscarGalavizC/roberta-base-bne-finetuned-multi-sentiment
OscarGalavizC
2024-05-20T00:45:17Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:BSC-LT/roberta-base-bne", "base_model:finetune:BSC-LT/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-16T20:36:26Z
--- license: apache-2.0 base_model: BSC-TeMU/roberta-base-bne tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-multi-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-multi-sentiment This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7116 - Accuracy: 0.6914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.809 | 1.0 | 115 | 0.7168 | 0.6852 | | 0.6101 | 2.0 | 230 | 0.7116 | 0.6914 | ### Framework versions - Transformers 4.40.2 - Pytorch 1.13.1+cu117 - Datasets 2.19.1 - Tokenizers 0.19.1
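A minimal inference sketch (not from the original card) that applies the tokenizer and classification head directly; the Spanish example sentence is illustrative and the label names come from the fine-tuned config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "OscarGalavizC/roberta-base-bne-finetuned-multi-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "La película fue entretenida, aunque el final me decepcionó."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# id2label is whatever the fine-tuning run defined; print every class probability.
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```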
Mahnke/rick
Mahnke
2024-05-20T00:43:22Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2024-05-20T00:43:22Z
--- license: artistic-2.0 ---