| Column | Type | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-29 00:46:34 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 502 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-29 00:44:25 |
| card | string | length 11 to 1.01M |
sarahmiller137/bert-base-uncased-ft-m3-lc
sarahmiller137
2024-11-12T12:09:58Z
10
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "text classification", "en", "license:cc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-30T11:55:45Z
--- language: en tags: - 'text classification' license: cc datasets: - MIMIC-III  widget: - text: "This report discusses the diagnosis of lung cancer in a female patient who has never smoked." --- ## Model information: This model is the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model that has been finetuned using radiology report texts from the MIMIC-III database. The task performed was text classification, benchmarking this model against a selection of other BERT variants for the classification of MIMIC-III radiology report texts into two classes. Labels of [0,1] were assigned to the reports: reports linked to an ICD-9 diagnosis code for lung cancer were labelled 1, and a random sample of reports not linked to any cancer diagnosis code were labelled 0. ## Intended uses: This model is intended to be used to classify texts to identify the presence of lung cancer. The model will predict labels of [0,1]. ## Limitations: Note that the dataset and model may not be fully representative of or suitable for all needs; it is recommended that the dataset paper and the base model card be reviewed before use: - [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf) - [bert-base-uncased](https://huggingface.co/bert-base-uncased) ## How to use: Load the model and tokenizer using the following checkpoint: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bert-base-uncased-ft-m3-lc") model = AutoModel.from_pretrained("sarahmiller137/bert-base-uncased-ft-m3-lc") ```
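The snippet above loads the bare encoder with `AutoModel`, which discards the classification head. To obtain the [0,1] prediction described in the intended-use section, a `text-classification` pipeline is usually more convenient. A minimal sketch, assuming the uploaded checkpoint includes the fine-tuned sequence-classification head (as the repository's `text-classification` tag suggests):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint together with its classification head
classifier = pipeline(
    "text-classification",
    model="sarahmiller137/bert-base-uncased-ft-m3-lc",
)

report = "This report discusses the diagnosis of lung cancer in a female patient who has never smoked."
# Returns a list of {label, score} dicts; the label corresponds to class 0 (no cancer code) or 1 (lung cancer code)
print(classifier(report))
```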
Ahanaas/Hermes-3-Llama-3.1-8B_finetune_prashu
Ahanaas
2024-11-12T12:07:12Z
14
0
null
[ "safetensors", "llama", "en", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:finetune:NousResearch/Hermes-3-Llama-3.1-8B", "license:mit", "region:us" ]
null
2024-11-12T09:06:32Z
--- license: mit language: - en base_model: - NousResearch/Hermes-3-Llama-3.1-8B --- # Inference with Your Model This guide explains how to run inference with your custom model using the Hugging Face `transformers` library. ## Prerequisites Make sure you have the following dependencies installed: - Python 3.7+ - PyTorch - Hugging Face `transformers` library You can install the required packages using pip: ```bash !git clone https://github.com/huggingface/transformers.git %cd transformers !git checkout <commit_id_for_4.47.0.dev0> !pip install . !pip install -q accelerate==0.34.2 bitsandbytes==0.44.1 peft==0.13.1 ``` ```py # Quantization config for the model import torch from transformers import BitsAndBytesConfig bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type='nf4' ) ``` ```py # Load model & tokenizer model_id = "Ahanaas/Hermes-3-Llama-3.1-8B_finetune_prashu" from transformers import AutoModelForCausalLM, AutoTokenizer base_model = AutoModelForCausalLM.from_pretrained( model_id, low_cpu_mem_usage=True, return_dict=True, torch_dtype=torch.float16, quantization_config=bnb_config, device_map=0, ) # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="right", use_fast=False) tokenizer.pad_token = tokenizer.eos_token ``` ```py # Run a text-generation pipeline with the finetuned model from transformers import pipeline system_prompt = '''''' prompt = '''''' pipe = pipeline( task="text-generation", model=base_model, tokenizer=tokenizer, max_new_tokens=128, # Increase this to allow for longer outputs temperature=0.4, # Encourages more varied outputs top_k=50, # Limits to the top 50 tokens do_sample=True, # Enables sampling return_full_text=True ) result = pipe(f"<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>") # print(result[0]['generated_text']) generated_text = result[0]['generated_text'] print(generated_text) ``` ## Sample output ```bash system_prompt = '''Meet Lila, a 27-year-old interior designer specializing in innovative, eco-friendly spaces. Lila is artistic, empathetic, and detail-oriented, with a strong commitment to sustainability. Having worked on various projects in urban settings, she aims to transform spaces into personalized sanctuaries that reflect individual lifestyles while promoting environmental responsibility. Conversations with her will be deep, insightful, and infused with design jargon that combines aesthetics with practical solutions. ''' prompt = '''ahh! that interior costs tooo much''' output = '''Lila, *smiles warmly* I understand your concern, but investing in your living space can significantly impact your well-being and contribute to a greener future. Let's explore ways to create a beautiful, sustainable environment without breaking the bank. ''' ``` ## Citation ```tex @misc{Ahanaas/Hermes-3-Llama-3.1-8B_finetune_prashu, author = {Prasad Chavan}, title = {Hermes-3-Llama-3.1-8B_finetune_prashu}, year = {2024}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/Ahanaas/Hermes-3-Llama-3.1-8B_finetune_prashu/}}, note = "[Roleplay Finetuned Model]" } ```
prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF
prithivMLmods
2024-11-12T12:06:28Z
291
8
transformers
[ "transformers", "gguf", "Qwen2.5", "Coder", "7B", "Instruct", "F16", "Q4", "Q5", "Q8", "16-bit", "Llama-cpp", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct", "license:creativeml-openrail-m", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T04:40:49Z
--- license: creativeml-openrail-m language: - en base_model: - Qwen/Qwen2.5-Coder-7B-Instruct pipeline_tag: text-generation library_name: transformers tags: - Qwen2.5 - Coder - 7B - Instruct - F16 - Q4 - Q5 - Q8 - 16-bit - Llama-cpp --- ## Qwen2.5-Coder-7B-Instruct-GGUF | File Name | Size | Description | |--------------------------------------------|---------|-------------------------------------------------------------------| | `.gitattributes` | 1.81kB | Git configuration file for handling file attributes and LFS rules. | | `Qwen2.5-Coder-7B-Instruct.F16.gguf` | 15.2GB | Full-precision (16-bit) instruction-tuned model for coding tasks. | | `Qwen2.5-Coder-7B-Instruct.Q4_K_M.gguf` | 4.68GB | Quantized 4-bit medium variant model for reduced resource usage. | | `Qwen2.5-Coder-7B-Instruct.Q5_K_M.gguf` | 5.44GB | Quantized 5-bit medium variant model for a balance of size and accuracy. | | `Qwen2.5-Coder-7B-Instruct.Q8_0.gguf` | 8.1GB | Quantized 8-bit model for higher accuracy coding instruction tasks. | | `README.md` | - | Basic README file with project and model information. | # Run with Ollama 🦙 ## Overview Ollama is a powerful tool that allows you to run machine learning models effortlessly. This guide will help you download, install, and run your own GGUF models in just a few minutes. ## Table of Contents - [Download and Install Ollama](#download-and-install-ollama) - [Steps to Run GGUF Models](#steps-to-run-gguf-models) - [1. Create the Model File](#1-create-the-model-file) - [2. Add the Template Command](#2-add-the-template-command) - [3. Create and Patch the Model](#3-create-and-patch-the-model) - [Running the Model](#running-the-model) - [Sample Usage](#sample-usage) ## Download and Install Ollama🦙 To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system. ## Steps to Run GGUF Models ### 1. Create the Model File First, create a model file and name it appropriately. For example, you can name your model file `metallama`. ### 2. Add the Template Command In your model file, include a `FROM` line that specifies the base model file you want to use. For instance: ```bash FROM Llama-3.2-1B.F16.gguf ``` Ensure that the model file is in the same directory as your script. ### 3. Create and Patch the Model Open your terminal and run the following command to create and patch your model: ```bash ollama create metallama -f ./metallama ``` Once the process is successful, you will see a confirmation message. To verify that the model was created successfully, you can list all models with: ```bash ollama list ``` Make sure that `metallama` appears in the list of models. --- ## Running the Model To run your newly created model, use the following command in your terminal: ```bash ollama run metallama ``` ### Sample Usage In the command prompt, you can execute: ```bash D:\>ollama run metallama ``` You can interact with the model like this: ```plaintext >>> write a mini passage about space x Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration. With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented efficiency. 
As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X plays a pivotal role in pushing the boundaries of human exploration and settlement. ``` --- ## Conclusion With these simple steps, you can easily download, install, and run your own models using Ollama. Whether you're exploring the capabilities of Llama or building your own custom models, Ollama makes it accessible and efficient. - This README provides clear instructions and structured information to help users navigate the process of using Ollama effectively. Adjust any sections as needed based on your specific requirements or additional details you may want to include.
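Since the repository is also tagged `Llama-cpp`, the quantized files in the table above can be used outside Ollama as well. Here is a minimal llama-cpp-python sketch as an illustrative alternative to the Ollama flow above, assuming `llama-cpp-python` and `huggingface_hub` are installed and that the chat template embedded in the GGUF file is picked up automatically:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quantized files listed in the table above
model_path = hf_hub_download(
    repo_id="prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF",
    filename="Qwen2.5-Coder-7B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```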
neshkatrapati/pii-mark-1
neshkatrapati
2024-11-12T12:05:45Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-11-12T12:04:22Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed. ```bash pip install transformers==4.45.0 ``` Also make sure you are providing your Hugging Face token to the pipeline if the model is in a private repo. - Either leave `token=True` in the `pipeline` and log in to huggingface_hub by running ```python import huggingface_hub huggingface_hub.login(<ACCESS_TOKEN>) ``` - Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline` ```python from transformers import pipeline generate_text = pipeline( model="neshkatrapati/pii-mark-1", torch_dtype="auto", trust_remote_code=True, device_map={"": "cuda:0"}, token=True, ) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 2 # generate_text.model.generation_config.max_new_tokens = 256 # generate_text.model.generation_config.do_sample = False # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.0) # generate_text.model.generation_config.repetition_penalty = float(1.0) messages = [ {"role": "user", "content": "Hi, how are you?"}, {"role": "assistant", "content": "I'm doing great, how about you?"}, {"role": "user", "content": "Why is drinking water so healthy?"}, ] res = generate_text( messages, renormalize_logits=True ) print(res[0]["generated_text"][-1]['content']) ``` You can print a sample prompt after applying the chat template to see how it is fed to the tokenizer: ```python print(generate_text.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, )) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "neshkatrapati/pii-mark-1" # either local folder or Hugging Face model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
messages = [ {"role": "user", "content": "Hi, how are you?"}, {"role": "assistant", "content": "I'm doing great, how about you?"}, {"role": "user", "content": "Why is drinking water so healthy?"}, ] tokenizer = AutoTokenizer.from_pretrained( model_name, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() # generate configuration can be modified to your needs # model.generation_config.min_new_tokens = 2 # model.generation_config.max_new_tokens = 256 # model.generation_config.do_sample = False # model.generation_config.num_beams = 1 # model.generation_config.temperature = float(0.0) # model.generation_config.repetition_penalty = float(1.0) inputs = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True, ).to("cuda") tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 2048, padding_idx=128004) (layers): ModuleList( (0-15): 16 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): Linear(in_features=2048, out_features=2048, bias=False) (k_proj): Linear(in_features=2048, out_features=512, bias=False) (v_proj): Linear(in_features=2048, out_features=512, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=2048, out_features=8192, bias=False) (up_proj): Linear(in_features=2048, out_features=8192, bias=False) (down_proj): Linear(in_features=8192, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm((2048,), eps=1e-05) (post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05) ) ) (norm): LlamaRMSNorm((2048,), eps=1e-05) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=2048, out_features=128256, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. 
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
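As a companion to the "Quantization and sharding" note above, here is a minimal sketch of loading this checkpoint in 8-bit and sharding it across the visible GPUs. It assumes the `bitsandbytes` package is installed alongside the pinned `transformers` version; swap `load_in_8bit` for `load_in_4bit` to load in 4-bit instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "neshkatrapati/pii-mark-1"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,    # 8-bit quantization via bitsandbytes (use load_in_4bit=True for 4-bit)
    device_map="auto",    # shard layers across all visible GPUs
    trust_remote_code=True,
)
model.eval()
```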
cmeraki/hf-test
cmeraki
2024-11-12T12:00:13Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T11:56:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lalainy/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1
lalainy
2024-11-12T11:56:46Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-09T11:07:23Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarasarasara/whisper-base-finetuned-bmd-20241112_114002
sarasarasara
2024-11-12T11:48:43Z
9
0
transformers
[ "transformers", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-11-12T11:40:57Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: whisper-base-finetuned-bmd-20241112_114002 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned-bmd-20241112_114002 This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9882 - Accuracy: 0.2941 - F1: 0.2676 - Precision: 0.5462 - Recall: 0.2941 - Sensitivity: 0.2941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1968 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Sensitivity | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----------:| | No log | 0.8571 | 3 | 1.1337 | 0.2647 | 0.1108 | 0.0701 | 0.2647 | 0.2647 | | No log | 2.0 | 7 | 1.1425 | 0.2647 | 0.1108 | 0.0701 | 0.2647 | 0.2647 | | 1.0516 | 2.8571 | 10 | 1.1001 | 0.5 | 0.4068 | 0.3585 | 0.5 | 0.5 | | 1.0516 | 4.0 | 14 | 1.1083 | 0.5294 | 0.4725 | 0.5349 | 0.5294 | 0.5294 | | 1.0516 | 4.8571 | 17 | 1.4131 | 0.3235 | 0.2838 | 0.5783 | 0.3235 | 0.3235 | | 0.6189 | 6.0 | 21 | 1.1835 | 0.5 | 0.5036 | 0.5107 | 0.5 | 0.5 | | 0.6189 | 6.8571 | 24 | 1.5920 | 0.3235 | 0.3248 | 0.5662 | 0.3235 | 0.3235 | | 0.6189 | 8.0 | 28 | 2.0293 | 0.3529 | 0.25 | 0.2047 | 0.3529 | 0.3529 | | 0.1708 | 8.8571 | 31 | 2.1477 | 0.3529 | 0.3436 | 0.5845 | 0.3529 | 0.3529 | | 0.1708 | 10.0 | 35 | 2.5696 | 0.3235 | 0.2841 | 0.5607 | 0.3235 | 0.3235 | | 0.1708 | 10.8571 | 38 | 2.9175 | 0.3529 | 0.2994 | 0.5809 | 0.3529 | 0.3529 | | 0.0173 | 12.0 | 42 | 2.9863 | 0.2941 | 0.2676 | 0.5462 | 0.2941 | 0.2941 | | 0.0173 | 12.8571 | 45 | 2.9882 | 0.2941 | 0.2676 | 0.5462 | 0.2941 | 0.2941 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
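The card above reports the training setup and metrics but no inference snippet. A minimal sketch using the standard audio-classification pipeline, assuming the saved checkpoint works with it (as the repository's `audio-classification` tag suggests); `example.wav` is a hypothetical local file:

```python
from transformers import pipeline

# Classify an audio clip with the fine-tuned Whisper-base checkpoint
clf = pipeline(
    "audio-classification",
    model="sarasarasara/whisper-base-finetuned-bmd-20241112_114002",
)

# "example.wav" is a placeholder path; the pipeline's feature extractor resamples the audio as needed
print(clf("example.wav"))  # -> list of {"label": ..., "score": ...} dicts
```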
featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF
featherless-ai-quants
2024-11-12T11:48:42Z
24
0
null
[ "gguf", "text-generation", "base_model:Saxo/Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B", "base_model:quantized:Saxo/Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T11:30:30Z
--- base_model: Saxo/Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Saxo/Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-IQ4_XS.gguf) | 6485.04 MB | | Q2_K | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q2_K.gguf) | 4569.10 MB | | Q3_K_L | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_L.gguf) | 6257.54 MB | | Q3_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_M.gguf) | 5801.29 MB | | Q3_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q3_K_S.gguf) | 5277.85 MB | | Q4_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q4_K_M.gguf) | 7130.82 MB | | Q4_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q4_K_S.gguf) | 6790.35 MB | | Q5_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q5_K_M.gguf) | 8323.32 MB | | Q5_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q5_K_S.gguf) | 8124.10 MB | | Q6_K | [Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q6_K.gguf) | 9590.35 MB | | Q8_0 | 
[Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-Mistral-Nemo-sft-dpo-12B-Q8_0.gguf) | 12419.10 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
hugosousa/classifier_smoll_135m_b_a
hugosousa
2024-11-12T11:38:51Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-30T17:40:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OliverSmith1618/dia_lm
OliverSmith1618
2024-11-12T11:32:51Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-12T11:30:55Z
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** OliverSmith1618 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
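The card above gives the base model and training setup but no usage snippet. A minimal loading sketch, assuming the repository stores the weights pre-quantized with bitsandbytes in 4-bit (as its tags indicate), so nothing beyond an installed `bitsandbytes` package is needed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OliverSmith1618/dia_lm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the pre-quantized 4-bit weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```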
sarasarasara/whisper-base-finetuned-bmd
sarasarasara
2024-11-12T11:28:05Z
7
0
transformers
[ "transformers", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-11-12T11:17:10Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: whisper-base-finetuned-bmd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned-bmd This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3900 - Accuracy: 0.3235 - F1: 0.3095 - Precision: 0.4512 - Recall: 0.3235 - Sensitivity: 0.3235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Sensitivity | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----------:| | No log | 0.8571 | 3 | 1.1235 | 0.2647 | 0.1108 | 0.0701 | 0.2647 | 0.2647 | | No log | 2.0 | 7 | 1.1253 | 0.2647 | 0.1108 | 0.0701 | 0.2647 | 0.2647 | | 1.0608 | 2.8571 | 10 | 1.1178 | 0.2647 | 0.1162 | 0.0744 | 0.2647 | 0.2647 | | 1.0608 | 4.0 | 14 | 1.1206 | 0.2941 | 0.1740 | 0.4886 | 0.2941 | 0.2941 | | 1.0608 | 4.8571 | 17 | 1.1225 | 0.2941 | 0.2083 | 0.3475 | 0.2941 | 0.2941 | | 0.9214 | 6.0 | 21 | 1.1167 | 0.4412 | 0.4108 | 0.5259 | 0.4412 | 0.4412 | | 0.9214 | 6.8571 | 24 | 1.0754 | 0.5 | 0.4625 | 0.4954 | 0.5 | 0.5 | | 0.9214 | 8.0 | 28 | 1.1578 | 0.4118 | 0.3959 | 0.4304 | 0.4118 | 0.4118 | | 0.6179 | 8.8571 | 31 | 1.2143 | 0.3824 | 0.3663 | 0.4120 | 0.3824 | 0.3824 | | 0.6179 | 10.0 | 35 | 1.3170 | 0.4118 | 0.4169 | 0.5174 | 0.4118 | 0.4118 | | 0.6179 | 10.8571 | 38 | 1.3484 | 0.3529 | 0.3484 | 0.4777 | 0.3529 | 0.3529 | | 0.3513 | 12.0 | 42 | 1.3904 | 0.3235 | 0.3095 | 0.4512 | 0.3235 | 0.3235 | | 0.3513 | 12.8571 | 45 | 1.3900 | 0.3235 | 0.3095 | 0.4512 | 0.3235 | 0.3235 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
DILHTWD/whisper-large-v3-hsb
DILHTWD
2024-11-12T11:26:41Z
7
1
null
[ "safetensors", "whisper", "upper_sorbian", "automatic-speech-recognition", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:agpl-3.0", "region:us" ]
automatic-speech-recognition
2024-11-11T08:56:48Z
--- license: agpl-3.0 metrics: - wer base_model: - openai/whisper-large-v3 pipeline_tag: automatic-speech-recognition tags: - upper_sorbian --- ## Model Description This model was fine-tuned on over 24 hours of transcribed upper sorbian speech to aid future research, conservation and revitalisation of the language. ## Training Data - **Source:** Stiftung für das sorbische Volk / Załožba za serbski lud (https://stiftung.sorben.com/) - **Volume:** 1493 Minutes, 10% Validation Set, 10% Test Set ## Training Details - **Hyperparameters**: - Batch size: 64 - Learning rate: 3e-6, linear decay - **Optimizer**: AdamW - **Warmup**: 1000 steps - **Additional Techniques**: BF16 training, initial 15 layers frozen ## Performance ### Metrics - **Word Error Rate:** 5.7 ## Usage ### Example Code To use the model, follow this example code: ```python import torch import torchaudio from transformers import WhisperProcessor, WhisperForConditionalGeneration # Load the model and processor model_name = "DILHTWD/whisper-large-v3-hsb" processor_name = "openai/whisper-large-v3" processor = WhisperProcessor.from_pretrained(processor_name) model = WhisperForConditionalGeneration.from_pretrained(model_name) # Load and preprocess the audio audio, sample_rate = torchaudio.load("test.mp3") if sample_rate != 16000: audio = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)(audio) input_features = processor(audio.squeeze().numpy(), sampling_rate=16000, return_tensors="pt").input_features # Generate transcription with torch.no_grad(): predicted_ids = model.generate(input_features) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] # Print the transcription print("Transcription:", transcription) ``` ## Model Details - **Model Name:** DILHTWD/whisper-large-v3-hsb - **Publisher:** Data Intelligence Lab, Hochschule für Technik und Wirtschaft Dresden - **Model Version:** 1.0.0 - **Model Date:** 2024-11-11 - **License:** [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.de.html) - **Architecture:** Whisper Large v3 - **Task:** Automatic Speech Recognition
THUDM/webrl-llama-3.1-70b
THUDM
2024-11-12T11:21:14Z
56
4
null
[ "safetensors", "llama", "webrl", "llama3.1", "webarena-lite", "llm", "agent", "en", "arxiv:2411.02337", "license:other", "region:us" ]
null
2024-11-05T15:26:39Z
--- license: other language: - en base_model: - meta/Llama-3.1-70B tags: - webrl - llama3.1 - webarena-lite - llm - agent --- # WebRL-Llama-3.1-70B ## Model Introduction WebRL-Llama-3.1-70B is the open-source Llama-3.1-70B version of WebRL, released by Zhipu AI. It can complete web operations on five websites in WebArena: OpenStreetMap (Map), Reddit, GitLab, an online store content management system (CMS), and OneStopShop (OSS). ## Evaluation Results We evaluated the WebRL-Llama-3.1-70B model on WebArena-Lite and obtained the following results: | Model | Reddit | Gitlab | CMS | Map | OSS | Avg.SR | |:--------------------|:------:|:------:|:------:|:------:|:------:|:--------:| | Llama-3.1-8B-Instruct | 0.0 | 3.3 | 2.9 | 3.3 | 11.1 | 4.8 | | Llama-3.1-70B-Instruct | 10.5 | 16.7 | 17.1 | 20.0 | 4.4 | 12.7 | | WebRL-Llama-3.1-70B | 78.9 | 50.0 | 54.3 | 40.0 | 44.4 | 49.1 | **For more inference code and requirements, please visit our [GitHub page](https://github.com/THUDM/WebRL).** ## Citations If you find our work useful, please consider citing the following paper. ``` @article{qi2024webrl, title={WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning}, author={Zehan Qi and Xiao Liu and Iat Long Iong and Hanyu Lai and Xueqiao Sun and Xinyue Yang and Jiadai Sun and Yu Yang and Shuntian Yao and Tianjie Zhang and Wei Xu and Jie Tang and Yuxiao Dong}, journal={arXiv preprint arXiv:2411.02337}, year={2024}, } ```
toastloaf/autotrain-dwxgy-mutlw1
toastloaf
2024-11-12T11:20:50Z
137
0
transformers
[ "transformers", "tensorboard", "safetensors", "mobilellm", "text-generation", "autotrain", "text-generation-inference", "conversational", "custom_code", "dataset:toastloaf/testing-private", "base_model:facebook/MobileLLM-125M", "base_model:finetune:facebook/MobileLLM-125M", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-11-12T10:48:43Z
--- tags: - autotrain - text-generation-inference - text-generation library_name: transformers base_model: facebook/MobileLLM-125M widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - toastloaf/testing-private --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
dhanexh/my-awesome-model
dhanexh
2024-11-12T11:15:43Z
6
0
keras-nlp
[ "keras-nlp", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-11-12T11:15:35Z
--- library_name: keras-nlp tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/keras-team/keras-nlp - Docs: https://keras.io/keras_nlp/
mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF
mradermacher
2024-11-12T11:14:10Z
26
0
transformers
[ "transformers", "gguf", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "base_model:OpenBuddy/openbuddy-gemma-7b-v19.1-4k", "base_model:quantized:OpenBuddy/openbuddy-gemma-7b-v19.1-4k", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-11-12T08:01:46Z
--- base_model: OpenBuddy/openbuddy-gemma-7b-v19.1-4k language: - zh - en - fr - de - ja - ko - it - ru - fi library_name: transformers license: other license_link: https://ai.google.dev/gemma/terms license_name: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-gemma-7b-v19.1-4k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q2_K.gguf) | i1-Q2_K | 3.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ3_S.gguf) | i1-IQ3_S | 4.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.1 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.1 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.1 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_0.gguf) | i1-Q4_0 | 5.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.i1-Q6_K.gguf) | i1-Q6_K | 7.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF
mradermacher
2024-11-12T11:14:09Z
13
0
transformers
[ "transformers", "gguf", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "base_model:OpenBuddy/openbuddy-gemma-7b-v19.1-4k", "base_model:quantized:OpenBuddy/openbuddy-gemma-7b-v19.1-4k", "license:other", "endpoints_compatible", "region:us" ]
null
2024-11-09T23:20:37Z
--- base_model: OpenBuddy/openbuddy-gemma-7b-v19.1-4k language: - zh - en - fr - de - ja - ko - it - ru - fi library_name: transformers license: other license_link: https://ai.google.dev/gemma/terms license_name: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/OpenBuddy/openbuddy-gemma-7b-v19.1-4k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q2_K.gguf) | Q2_K | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.1 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q5_K_S.gguf) | Q5_K_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q5_K_M.gguf) | Q5_K_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q6_K.gguf) | Q6_K | 7.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-gemma-7b-v19.1-4k-GGUF/resolve/main/openbuddy-gemma-7b-v19.1-4k.f16.gguf) | f16 | 17.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF
featherless-ai-quants
2024-11-12T11:07:51Z
24
1
null
[ "gguf", "text-generation", "base_model:fireworks-ai/llama-3-firefunction-v2", "base_model:quantized:fireworks-ai/llama-3-firefunction-v2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T06:43:07Z
--- base_model: fireworks-ai/llama-3-firefunction-v2 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # fireworks-ai/llama-3-firefunction-v2 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [fireworks-ai-llama-3-firefunction-v2-IQ4_XS](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-IQ4_XS) | 36496.80 MB (folder) | | Q2_K | [fireworks-ai-llama-3-firefunction-v2-Q2_K](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q2_K) | 25153.27 MB (folder) | | Q3_K_L | [fireworks-ai-llama-3-firefunction-v2-Q3_K_L](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q3_K_L) | 35420.03 MB (folder) | | Q3_K_M | [fireworks-ai-llama-3-firefunction-v2-Q3_K_M](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q3_K_M) | 32680.03 MB (folder) | | Q3_K_S | [fireworks-ai-llama-3-firefunction-v2-Q3_K_S](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q3_K_S) | 29480.03 MB (folder) | | Q4_K_M | [fireworks-ai-llama-3-firefunction-v2-Q4_K_M](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q4_K_M) | 40550.61 MB (folder) | | Q4_K_S | [fireworks-ai-llama-3-firefunction-v2-Q4_K_S](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q4_K_S) | 38478.11 MB (folder) | | Q5_K_M | [fireworks-ai-llama-3-firefunction-v2-Q5_K_M](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q5_K_M) | 47635.86 MB (folder) | | Q5_K_S | [fireworks-ai-llama-3-firefunction-v2-Q5_K_S](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q5_K_S) | 46403.36 MB (folder) | | Q6_K | [fireworks-ai-llama-3-firefunction-v2-Q6_K](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q6_K) | 55206.44 MB (folder) | | Q8_0 | [fireworks-ai-llama-3-firefunction-v2-Q8_0](https://huggingface.co/featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF/tree/main/fireworks-ai-llama-3-firefunction-v2-Q8_0) | 71501.78 MB (folder) | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
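The table above lists each quantization as a folder of GGUF files rather than a single file; as an illustrative sketch that is not from the original card, one way to fetch a single quant folder is with `huggingface_hub.snapshot_download`, assuming the folder name matches the link text in the table.

```python
from huggingface_hub import snapshot_download

# Fetch only the Q4_K_M folder from the table above (pattern taken from the folder name in the table)
local_dir = snapshot_download(
    repo_id="featherless-ai-quants/fireworks-ai-llama-3-firefunction-v2-GGUF",
    allow_patterns=["fireworks-ai-llama-3-firefunction-v2-Q4_K_M/*"],
)

print(local_dir)  # local path containing the downloaded GGUF part files
```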
WinKawaks/SketchXAI-Base-QuickDraw345
WinKawaks
2024-11-12T11:03:44Z
42
3
transformers
[ "transformers", "pytorch", "safetensors", "vit", "dataset:quickdraw", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-03-24T14:16:50Z
--- license: apache-2.0 datasets: - quickdraw --- A full description of this project can be found at https://sketchxai.github.io/.
Ahbabo232/whisper-tiny-1
Ahbabo232
2024-11-12T11:03:28Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "es", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-11T03:01:19Z
--- library_name: transformers language: - es license: apache-2.0 base_model: openai/whisper-small tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Es - Sanchit Gandhi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Es - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.0587 - Cer: 97.1842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0341 | 8.0 | 1000 | 0.0731 | 74.1248 | | 0.0002 | 16.0 | 2000 | 0.0569 | 96.7275 | | 0.0001 | 24.0 | 3000 | 0.0587 | 97.1842 | ### Framework versions - Transformers 4.47.0.dev0 - Pytorch 2.5.0+cu121 - Datasets 3.1.1.dev0 - Tokenizers 0.20.3
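The card above documents the fine-tuning run but leaves usage blank; as a hedged sketch (not from the original card), inference could look like the following, assuming a recent `transformers` install and a local audio file whose path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (repo id taken from this record's metadata)
asr = pipeline(
    "automatic-speech-recognition",
    model="Ahbabo232/whisper-tiny-1",
)

# Transcribe a local audio file (file name is hypothetical)
result = asr("sample.wav")
print(result["text"])
```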
CSLin3303/product_2024111201
CSLin3303
2024-11-12T11:01:56Z
5
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T11:01:20Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** CSLin3303 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
parrottygg/LlamaSmallv1
parrottygg
2024-11-12T10:59:48Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T10:57:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sarasarasara/whisper-tiny-finetuned-bmd-mx30-shfl-20241112_105222
sarasarasara
2024-11-12T10:59:11Z
7
0
transformers
[ "transformers", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-11-12T10:53:15Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: whisper-tiny-finetuned-bmd-mx30-shfl-20241112_105222 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-bmd-mx30-shfl-20241112_105222 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8366 - Accuracy: 0.4706 - F1: 0.4755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1968 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | No log | 0.8571 | 3 | 1.1053 | 0.2647 | 0.1108 | | No log | 2.0 | 7 | 1.1082 | 0.2941 | 0.1683 | | 1.0599 | 2.8571 | 10 | 1.0908 | 0.3529 | 0.2644 | | 1.0599 | 4.0 | 14 | 1.0633 | 0.3824 | 0.3022 | | 1.0599 | 4.8571 | 17 | 0.9311 | 0.4706 | 0.4514 | | 0.6821 | 6.0 | 21 | 1.1443 | 0.4412 | 0.4371 | | 0.6821 | 6.8571 | 24 | 1.1714 | 0.5 | 0.5002 | | 0.6821 | 8.0 | 28 | 1.2322 | 0.5294 | 0.5345 | | 0.193 | 8.8571 | 31 | 1.5522 | 0.4412 | 0.4147 | | 0.193 | 10.0 | 35 | 1.7296 | 0.4706 | 0.4540 | | 0.193 | 10.8571 | 38 | 1.7856 | 0.4412 | 0.4425 | | 0.035 | 12.0 | 42 | 1.8251 | 0.4412 | 0.4423 | | 0.035 | 12.8571 | 45 | 1.8366 | 0.4706 | 0.4755 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
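Here too the auto-generated card omits usage; a minimal, assumption-laden sketch of running the classifier (repo id from this record, audio path hypothetical) might look like this.

```python
from transformers import pipeline

# Audio-classification head fine-tuned from whisper-tiny (repo id from this record)
clf = pipeline(
    "audio-classification",
    model="sarasarasara/whisper-tiny-finetuned-bmd-mx30-shfl-20241112_105222",
)

# Score a local clip and show the top predictions (file name is a placeholder)
for pred in clf("clip.wav", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```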
RikvanSchaick/bert-finetuned-ner_trial8
RikvanSchaick
2024-11-12T10:58:55Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-12T10:15:54Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-ner_trial8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner_trial8 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 249 | 0.2830 | 0.3519 | 0.3003 | 0.3241 | 0.9292 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
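As with the other auto-generated cards, usage is not spelled out; a hedged sketch of token-classification inference with this checkpoint (repo id from this record, example sentence invented) could look like this.

```python
from transformers import pipeline

# NER pipeline; aggregation_strategy merges word pieces into whole entities
ner = pipeline(
    "token-classification",
    model="RikvanSchaick/bert-finetuned-ner_trial8",
    aggregation_strategy="simple",
)

# Illustrative input sentence
for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```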
ivarm11/bert-finetuned-ner_trial3
ivarm11
2024-11-12T10:55:30Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-12T10:10:56Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-ner_trial3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner_trial3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 125 | 0.3815 | 0.3216 | 0.1600 | 0.2136 | 0.9166 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
Volko76/Qwen2.5-Coder-1.5B-Instruct-GGUF
Volko76
2024-11-12T10:50:28Z
43
0
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "autoquant", "text-generation", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-1.5B", "base_model:quantized:Qwen/Qwen2.5-Coder-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T10:37:19Z
--- base_model: - Qwen/Qwen2.5-Coder-1.5B language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - code - codeqwen - chat - qwen - qwen-coder - autoquant - gguf --- # Qwen2.5-Coder-1.5B-Instruct ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o. - A more comprehensive foundation for real-world applications such as **Code Agents**. It not only enhances coding capabilities but also maintains its strengths in mathematics and general competencies. **This repo contains the instruction-tuned 1.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 1.54B - Number of Parameters (Non-Embedding): 1.31B - Number of Layers: 28 - Number of Attention Heads (GQA): 12 for Q and 2 for KV - Context Length: Full 32,768 tokens For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "write a quick sort algorithm." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). 
## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
slobers/spinkle2
slobers
2024-11-12T10:42:29Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T10:31:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adriansanz/gret5
adriansanz
2024-11-12T10:39:01Z
6
0
setfit
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base", "base_model:finetune:projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base", "region:us" ]
text-classification
2024-11-12T10:37:44Z
--- base_model: projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Hola, quin és el paper dels dipòsits o fiances en la garantia dels serveis? - text: Hola! - text: Hola, tinc algunes preguntes sobre tràmits que voldria fer. - text: Quin és el propòsit de la garantia dels serveis adjudicats? - text: Sóc interessat en saber què inclou el tràmit de sol·licitud de subvencions. inference: true --- # SetFit with projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base](https://huggingface.co/projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base](https://huggingface.co/projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>'Quin és el procediment per a la devolució de fiances i avals?'</li><li>"Sóc usuari i m'agradaria saber quin és el procediment per fer una sol·licitud per aquest tràmit."</li><li>'Quin és el benefici de la devolució de fiances i avals?'</li></ul> | | 1 | <ul><li>'Bon dia, com et va?'</li><li>'Bon dia, vull saber més sobre els tràmits disponibles.'</li><li>'Ei!'</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("adriansanz/gret5") # Run inference preds = model("Hola!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 1 | 9.1548 | 17 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 42 | | 1 | 42 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - evaluation_strategy: epoch - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0044 | 1 | 0.2076 | - | | 0.2212 | 50 | 0.099 | - | | 0.4425 | 100 | 0.0016 | - | | 0.6637 | 150 | 0.0002 | - | | 0.8850 | 200 | 0.0002 | - | | 1.0 | 226 | - | 0.0002 | | 1.1062 | 250 | 0.0001 | - | | 1.3274 | 300 | 0.0001 | - | | 1.5487 | 350 | 0.0001 | - | | 1.7699 | 400 | 0.0001 | - | | 1.9912 | 450 | 0.0001 | - | | 2.0 | 452 | - | 0.0001 | | 2.2124 | 500 | 0.0001 | - | | 2.4336 | 550 | 0.0001 | - | | 2.6549 | 600 | 0.0001 | - | | 2.8761 | 650 | 0.0 | - | | 3.0 | 678 | - | 0.0001 | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.2.1 - Transformers: 4.42.2 - PyTorch: 2.5.0+cu121 - Datasets: 3.1.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
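The card above describes the two-stage SetFit recipe (contrastive fine-tuning of the sentence-transformer body, then fitting a logistic-regression head) and lists the hyperparameters used; the snippet below is a generic training sketch of that recipe, not the author's actual script — the toy dataset and its texts are invented for illustration, and the hyperparameters simply mirror the card's training section.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny illustrative few-shot dataset (texts and labels are made up)
train_ds = Dataset.from_dict({
    "text": ["Quin és el propòsit de la garantia?", "Hola, bon dia!"],
    "label": [0, 1],
})

# Same sentence-transformer body as in the card above
model = SetFitModel.from_pretrained(
    "projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base"
)

# Hyperparameters mirroring the card's training section
args = TrainingArguments(batch_size=16, num_epochs=3)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # contrastive fine-tuning of the body, then head fitting

print(model.predict(["Quin és el benefici de la devolució de fiances?"]))
```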
neeleshg23/jamba-2.7b
neeleshg23
2024-11-12T10:38:56Z
6
0
transformers
[ "transformers", "safetensors", "jamba", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T10:36:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheAwkwardAlienGuy/llama-2-7b-English-Knowledge
TheAwkwardAlienGuy
2024-11-12T10:33:32Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-12T10:28:09Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akseljoonas/deberta-v3-ft-predtrade
akseljoonas
2024-11-12T10:30:18Z
121
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T10:29:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adriansanz/gret4
adriansanz
2024-11-12T10:28:34Z
6
0
setfit
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base", "base_model:finetune:projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base", "region:us" ]
text-classification
2024-11-12T10:27:06Z
--- base_model: projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Hola, quin és el paper dels dipòsits o fiances en la garantia dels serveis? - text: Hola! - text: Hola, tinc algunes preguntes sobre tràmits que voldria fer. - text: Quin és el propòsit de la garantia dels serveis adjudicats? - text: Sóc interessat en saber què inclou el tràmit de sol·licitud de subvencions. inference: true --- # SetFit with projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base](https://huggingface.co/projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base](https://huggingface.co/projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>"Sóc ciutadà i m'agradaria saber quin és el tràmit per a la renovació del DNI."</li><li>"Quin és el propòsit de la garantia per a l'abocament controlat de runes?"</li><li>'Quin és el benefici de la devolució de fiances i avals?'</li></ul> | | 1 | <ul><li>"Aquest text és Saludo per a un cercador de tràmits d'un ajuntament"</li><li>'Bon dia, vull saber més sobre els tràmits disponibles.'</li><li>"Bona nit, com t'has anat acostant al final del dia?"</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("adriansanz/gret4") # Run inference preds = model("Hola!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 1 | 9.3444 | 17 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 45 | | 1 | 45 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - evaluation_strategy: epoch - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0039 | 1 | 0.2366 | - | | 0.1931 | 50 | 0.1287 | - | | 0.3861 | 100 | 0.0039 | - | | 0.5792 | 150 | 0.0003 | - | | 0.7722 | 200 | 0.0001 | - | | 0.9653 | 250 | 0.0001 | - | | 1.0 | 259 | - | 0.0001 | | 1.1583 | 300 | 0.0001 | - | | 1.3514 | 350 | 0.0001 | - | | 1.5444 | 400 | 0.0001 | - | | 1.7375 | 450 | 0.0001 | - | | 1.9305 | 500 | 0.0001 | - | | 2.0 | 518 | - | 0.0001 | | 2.1236 | 550 | 0.0 | - | | 2.3166 | 600 | 0.0 | - | | 2.5097 | 650 | 0.0 | - | | 2.7027 | 700 | 0.0 | - | | 2.8958 | 750 | 0.0 | - | | 3.0 | 777 | - | 0.0001 | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.2.1 - Transformers: 4.42.2 - PyTorch: 2.5.0+cu121 - Datasets: 3.1.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
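For reproducibility, here is a minimal training sketch using the hyperparameters listed in the "Training Hyperparameters" section above. The tiny in-memory dataset is a placeholder assumption — any 🤗 `datasets.Dataset` with `text` and `label` columns works — and the output path is hypothetical.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data; labels follow the card's convention (1 = greeting, 0 = procedure question).
train_dataset = Dataset.from_dict({
    "text": ["Hola!", "Quin és el propòsit de la garantia dels serveis adjudicats?"],
    "label": [1, 0],
})

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained(
    "projecte-aina/ST-NLI-ca_paraphrase-multilingual-mpnet-base"
)

# Hyperparameters copied from the "Training Hyperparameters" section above.
args = TrainingArguments(
    batch_size=16,
    num_epochs=3,
    body_learning_rate=2e-05,
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
trainer.model.save_pretrained("gret4-local")  # hypothetical output path
```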
Khoa/sentiment-analysis-tuning-1211
Khoa
2024-11-12T10:21:00Z
120
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T10:20:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
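Since the card above is an unfilled template, the snippet below is only an assumed usage sketch inferred from the repo tags (`xlm-roberta`, `text-classification`); the label set and supported languages are not documented here and must be verified against the model outputs.

```python
from transformers import pipeline

# Assumed usage based solely on the repo tags; output labels are undocumented in this card.
classifier = pipeline("text-classification", model="Khoa/sentiment-analysis-tuning-1211")

print(classifier("Sản phẩm này rất tốt!"))  # hypothetical input; supported languages undocumented
```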
unsloth/Qwen2.5-Coder-3B
unsloth
2024-11-12T10:07:05Z
728
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-3B", "base_model:finetune:Qwen/Qwen2.5-Coder-3B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T01:01:05Z
--- base_model: Qwen/Qwen2.5-Coder-3B language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-3B ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. - A more comprehensive foundation for real-world applications such as **Code Agents**. 
Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
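As a concrete illustration of the base (non-instruct) usage described above, here is a minimal plain text-completion sketch with 🤗 `transformers`; the prompt and generation settings are arbitrary examples, not recommendations from the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Base models are completion models: feed raw code and let the model continue it.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```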
unsloth/Qwen2.5-Coder-3B-bnb-4bit
unsloth
2024-11-12T10:06:33Z
1385
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-3B", "base_model:quantized:Qwen/Qwen2.5-Coder-3B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-12T01:00:03Z
--- base_model: Qwen/Qwen2.5-Coder-3B language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-3B-bnb-4bit ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
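To make the 4-bit nature of this checkpoint concrete, below is a minimal loading sketch with `bitsandbytes`; the NF4 settings shown are typical defaults and are an assumption, not values published in this card. A CUDA GPU and the `bitsandbytes` package are required.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/Qwen2.5-Coder-3B-bnb-4bit"

# Assumed NF4 quantization settings; adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```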
cuongdev/chung-thu-20
cuongdev
2024-11-12T10:06:10Z
29
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-12T10:02:16Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### chung-thu-20 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
LiuHao03322/Qwen2-VL-2B-Instruct-Q4
LiuHao03322
2024-11-12T10:05:23Z
19
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T03:10:45Z
--- license: apache-2.0 ---
unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit
unsloth
2024-11-12T10:04:01Z
5863
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "conversational", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-12T02:37:23Z
--- base_model: Qwen/Qwen2.5-Coder-14B-Instruct language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
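For fine-tuning this 4-bit checkpoint the way the free notebooks above do, a minimal Unsloth loading sketch is shown below; `max_seq_length` and the LoRA settings are illustrative choices, not values recommended by this card.

```python
from unsloth import FastLanguageModel

# Illustrative settings; adjust max_seq_length and the LoRA rank to your task and GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters for parameter-efficient fine-tuning (values are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```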
unsloth/Qwen2.5-Coder-14B-Instruct
unsloth
2024-11-12T10:03:43Z
319
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "conversational", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T03:01:12Z
--- base_model: Qwen/Qwen2.5-Coder-14B-Instruct language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-14B-Instruct ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
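Since this is the instruct variant, a minimal chat-style generation sketch using the tokenizer's chat template is shown below; the system and user messages are placeholder examples only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder conversation; the chat template shipped with the tokenizer formats it.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a linked list."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated answer.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```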
unsloth/Qwen2.5-Coder-14B
unsloth
2024-11-12T10:02:46Z
66
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B", "base_model:finetune:Qwen/Qwen2.5-Coder-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T03:21:07Z
--- base_model: Qwen/Qwen2.5-Coder-14B language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-14B ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
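The card above points to fill-in-the-middle (FIM) as a supported use of the base model. The sketch below assumes the standard Qwen2.5-Coder FIM special tokens (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); verify them against the official documentation before relying on this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed FIM format: the model generates the code that belongs between prefix and suffix.
prefix = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
suffix = "\n    return quicksort(left) + [pivot] + quicksort(right)\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```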
unsloth/Qwen2.5-Coder-32B-Instruct-GGUF
unsloth
2024-11-12T10:02:20Z
469
3
transformers
[ "transformers", "gguf", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T05:42:34Z
--- base_model: Qwen/Qwen2.5-Coder-32B-Instruct language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-32B-Instruct-GGUF ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
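Because this repo ships GGUF files, a minimal local-inference sketch with `llama-cpp-python` is shown below; the quant filename pattern and context size are assumptions — pick whichever GGUF file in the repo fits your hardware.

```python
from llama_cpp import Llama

# The filename glob is an assumption; choose the quantization that fits your RAM/VRAM.
llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen2.5-Coder-32B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a SQL query that finds duplicate emails."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```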
unsloth/Qwen2.5-Coder-14B-Instruct-GGUF
unsloth
2024-11-12T10:02:01Z
487
2
transformers
[ "transformers", "gguf", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T06:20:21Z
--- base_model: Qwen/Qwen2.5-Coder-14B-Instruct language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-14B-Instruct-GGUF ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. 
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Paramaters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
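As an alternative to loading by repo id, the sketch below downloads a single GGUF file with `huggingface_hub` and points `llama-cpp-python` at the local path; the exact filename is an assumption — check the repo's file listing for the quantization you want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is an assumption; list the repo files to pick the quantization you want.
gguf_path = hf_hub_download(
    repo_id="unsloth/Qwen2.5-Coder-14B-Instruct-GGUF",
    filename="Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf",
)

# Plain completion call; use create_chat_completion for chat-formatted prompts.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("You are a coding assistant. Q: Write a regex for an email address. A:", max_tokens=128)
print(result["choices"][0]["text"])
```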
automated-analytics/pure-gist-base
automated-analytics
2024-11-12T10:01:56Z
138
0
transformers
[ "transformers", "safetensors", "pure_bert", "feature-extraction", "custom_code", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-11-10T11:38:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
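The getting-started section above is left as a placeholder, so here is a hedged sketch of how such a custom-architecture feature-extraction checkpoint is typically loaded. `trust_remote_code=True` is needed because the repo is tagged `custom_code` (a custom `pure_bert` model class); how to pool the outputs into a single embedding is an assumption, not something the card documents.

```python
# Hedged getting-started sketch for a custom-architecture feature-extraction model.
from transformers import AutoTokenizer, AutoModel
import torch

model_id = "automated-analytics/pure-gist-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer(["An example sentence to embed."], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumption: the first output holds token-level hidden states, so mean pooling
# over the token dimension yields one vector per input sentence.
embeddings = outputs[0].mean(dim=1)
print(embeddings.shape)
```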
haris-waqar/TrimLesson3
haris-waqar
2024-11-12T10:01:50Z
160
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-11-12T07:31:50Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: TrimLesson3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TrimLesson3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0961 - Accuracy: 0.7003 - F1-score: 0.6957 - Recall-score: 0.7003 - Precision-score: 0.7014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 20 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Recall-score | Precision-score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:------------:|:---------------:| | 4.0066 | 1.0 | 2068 | 3.5593 | 0.1583 | 0.0819 | 0.1583 | 0.0794 | | 2.5652 | 2.0 | 4136 | 2.2228 | 0.4437 | 0.3810 | 0.4437 | 0.4189 | | 1.9375 | 3.0 | 6204 | 1.5382 | 0.5597 | 0.5157 | 0.5597 | 0.5544 | | 2.1447 | 4.0 | 8272 | 1.3384 | 0.6030 | 0.5647 | 0.6030 | 0.5980 | | 2.1308 | 5.0 | 10340 | 1.2420 | 0.6216 | 0.5906 | 0.6216 | 0.6206 | | 1.7815 | 6.0 | 12408 | 1.1685 | 0.6384 | 0.6109 | 0.6384 | 0.6326 | | 1.1674 | 7.0 | 14476 | 1.1605 | 0.6431 | 0.6197 | 0.6431 | 0.6433 | | 1.5469 | 8.0 | 16544 | 1.1038 | 0.6674 | 0.6420 | 0.6674 | 0.6617 | | 0.6686 | 9.0 | 18612 | 1.0640 | 0.6708 | 0.6494 | 0.6708 | 0.6588 | | 1.2668 | 10.0 | 20680 | 1.1181 | 0.6669 | 0.6457 | 0.6669 | 0.6564 | | 0.5084 | 11.0 | 22748 | 1.0662 | 0.6773 | 0.6597 | 0.6773 | 0.6770 | | 1.7345 | 12.0 | 24816 | 1.0945 | 0.6783 | 0.6641 | 0.6783 | 0.6821 | | 0.7144 | 13.0 | 26884 | 1.0492 | 0.6857 | 0.6715 | 0.6857 | 0.6903 | | 0.712 | 14.0 | 28952 | 1.0526 | 0.6900 | 0.6791 | 0.6900 | 0.6940 | | 2.2976 | 15.0 | 31020 | 1.0654 | 0.6960 | 0.6847 | 0.6960 | 0.7023 | | 0.6391 | 16.0 | 33088 | 1.0770 | 0.6912 | 0.6817 | 0.6912 | 0.6929 | | 0.9704 | 17.0 | 35156 | 1.0885 | 0.6949 | 0.6895 | 0.6949 | 0.7022 | | 0.9055 | 18.0 | 37224 | 1.0743 | 0.6965 | 0.6916 | 0.6965 | 0.6987 | | 2.0981 | 19.0 | 39292 | 1.0877 | 0.7025 | 0.6977 | 0.7025 | 0.7051 | | 0.3026 | 20.0 | 41360 | 1.0961 | 0.7003 | 0.6957 | 0.7003 | 0.7014 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu118 - Datasets 2.20.0 - Tokenizers 0.20.0
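The card documents the training run but not inference. Since the checkpoint is a fine-tune of `facebook/wav2vec2-base`, inputs are expected as 16 kHz mono audio; a minimal sketch using the audio-classification pipeline is shown below, with the audio path as a placeholder and the label set coming from the checkpoint's own config.

```python
# Minimal inference sketch for this fine-tuned wav2vec2 classifier.
from transformers import pipeline

classifier = pipeline("audio-classification", model="haris-waqar/TrimLesson3")
predictions = classifier("sample.wav", top_k=5)  # "sample.wav" is a placeholder path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```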
Imkaran/twitter-roberta-base-sentiment-latest_12112024T150727
Imkaran
2024-11-12T10:00:02Z
117
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T09:59:42Z
--- library_name: transformers base_model: cardiffnlp/twitter-roberta-base-sentiment-latest tags: - generated_from_trainer metrics: - f1 model-index: - name: twitter-roberta-base-sentiment-latest_12112024T150727 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-sentiment-latest_12112024T150727 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3348 - F1: 0.4579 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 600 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Rate | |:-------------:|:-------:|:----:|:---------------:|:------:|:------:| | No log | 0.9942 | 86 | 1.8285 | 0.1193 | 0.0000 | | No log | 2.0 | 173 | 1.8031 | 0.3302 | 0.0000 | | No log | 2.9942 | 259 | 1.5578 | 0.3690 | 0.0000 | | No log | 4.0 | 346 | 1.4611 | 0.4092 | 0.0000 | | No log | 4.9942 | 432 | 1.4700 | 0.4079 | 0.0000 | | 1.3786 | 6.0 | 519 | 1.3348 | 0.4579 | 0.0000 | | 1.3786 | 6.9942 | 605 | 1.6543 | 0.4193 | 1e-05 | | 1.3786 | 8.0 | 692 | 1.4421 | 0.4858 | 1e-05 | | 1.3786 | 8.9942 | 778 | 1.5573 | 0.4603 | 0.0000 | | 1.3786 | 10.0 | 865 | 1.5451 | 0.4797 | 0.0000 | | 1.3786 | 10.9942 | 951 | 1.8338 | 0.4396 | 0.0000 | | 0.6407 | 12.0 | 1038 | 1.9383 | 0.4364 | 0.0000 | | 0.6407 | 12.9942 | 1124 | 1.7573 | 0.4680 | 0.0000 | | 0.6407 | 14.0 | 1211 | 1.8321 | 0.4735 | 0.0000 | | 0.6407 | 14.9942 | 1297 | 1.9524 | 0.4619 | 0.0000 | | 0.6407 | 16.0 | 1384 | 2.1822 | 0.4591 | 0.0000 | | 0.6407 | 16.9942 | 1470 | 2.1302 | 0.4686 | 6e-06 | | 0.2608 | 18.0 | 1557 | 2.5139 | 0.4467 | 0.0000 | | 0.2608 | 18.9942 | 1643 | 2.3385 | 0.4641 | 0.0000 | | 0.2608 | 20.0 | 1730 | 2.3281 | 0.4726 | 0.0000 | | 0.2608 | 20.9942 | 1816 | 2.5489 | 0.4722 | 0.0000 | | 0.2608 | 22.0 | 1903 | 2.5727 | 0.4745 | 0.0000 | | 0.2608 | 22.9942 | 1989 | 2.5584 | 0.4694 | 0.0000 | | 0.1026 | 24.0 | 2076 | 2.8115 | 0.4584 | 0.0000 | | 0.1026 | 24.9942 | 2162 | 2.7270 | 0.4691 | 0.0000 | | 0.1026 | 26.0 | 2249 | 2.7379 | 0.4746 | 7e-07 | | 0.1026 | 26.9942 | 2335 | 2.8336 | 0.4757 | 4e-07 | | 0.1026 | 28.0 | 2422 | 2.8201 | 0.4703 | 2e-07 | | 0.057 | 28.9942 | 2508 | 2.8292 | 0.4691 | 0.0 | | 0.057 | 29.8266 | 2580 | 2.8271 | 0.4691 | 0.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.19.1
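For readers who want to try the fine-tuned classifier, a minimal hedged inference sketch follows; the label names and number of classes come from this checkpoint's config and are not described in the card.

```python
# Hedged inference sketch using the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Imkaran/twitter-roberta-base-sentiment-latest_12112024T150727",
)
# top_k=None returns a score for every label defined in the checkpoint config.
print(classifier("The new update is fantastic!", top_k=None))
```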
DrRasha/rasha
DrRasha
2024-11-12T09:58:11Z
192
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-12T09:57:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
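Because the getting-started section above is empty, here is a hedged sketch for this ViT image classifier; the class labels and the image domain it was trained on are taken from the checkpoint's config and are not documented in the card, and the image path is a placeholder.

```python
# Hedged inference sketch for a ViT image-classification checkpoint.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="DrRasha/rasha")
image = Image.open("example.jpg")  # placeholder path
for pred in classifier(image, top_k=3):
    print(pred["label"], round(pred["score"], 3))
```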
research-dump/bert-large-uncased_wikidata_ent_outcome_prediction_v1
research-dump
2024-11-12T09:57:54Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T09:57:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
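As a hedged stand-in for the empty getting-started section, the sketch below loads the checkpoint as a sequence classifier. The meaning of the outcome labels is defined only in the checkpoint config (the repo name suggests Wikidata entity outcome prediction), so both the example input and the label interpretation are assumptions.

```python
# Hedged getting-started sketch: sequence classification with this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "research-dump/bert-large-uncased_wikidata_ent_outcome_prediction_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example discussion text about a Wikidata entity."  # placeholder input
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred_id, pred_id))  # label names come from the config
```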
benito14/SOIT_Llama3.2
benito14
2024-11-12T09:54:27Z
5
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-12T09:54:07Z
--- base_model: unsloth/Llama-3.2-1B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** benito14 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
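The card does not include usage instructions. Since the repo is tagged `gguf` with a Llama architecture, one hedged option is the GGUF loading path in recent `transformers` releases, which dequantizes the checkpoint on load; the filename below is a placeholder, so check the repository's file list for the real name.

```python
# Hedged sketch: loading a GGUF checkpoint through transformers (dequantized on load).
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "benito14/SOIT_Llama3.2"
gguf_file = "model-q4_k_m.gguf"  # placeholder filename, not confirmed

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("Write a short haiku about autumn.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```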
sania963/sql_v7
sania963
2024-11-12T09:51:05Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T09:47:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
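Since the card leaves the getting-started section empty, here is a hedged sketch that loads the checkpoint as a causal LM. The "sql" in the repo name suggests text-to-SQL, and the `conversational` tag suggests a chat template is present, but both are assumptions; the prompt is only an example.

```python
# Hedged getting-started sketch for this Mistral-based text-generation checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "sania963/sql_v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a SQL query that counts orders per customer."}]
# Assumes the tokenizer ships a chat template; otherwise pass a plain prompt string.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```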
dwikitheduck/gen-try1
dwikitheduck
2024-11-12T09:50:54Z
5
0
null
[ "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2024-11-11T07:16:45Z
--- license: apache-2.0 base_model: - Qwen/Qwen2.5-14B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: gen-try1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Qwen/Qwen2.5-14B-Instruct model_type: Qwen2ForCausalLM tokenizer_type: Qwen2Tokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: true strict: false datasets: - path: dwikitheduck/genesist-inst-rag-39K type: completion dataset_prepared_path: val_set_size: 0.05 output_dir: ./outputs/lora-out sequence_len: 4096 sample_packing: false pad_to_sequence_len: adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj wandb_project: axolotl-soca wandb_entity: soca-ai wandb_watch: wandb_name: wandb_log_model: hub_model_id: dwikitheduck/gen-try-1 gradient_accumulation_steps: 8 micro_batch_size: 2 num_epochs: 1 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 10 evals_per_epoch: 2 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: save_safetensors: true ``` </details><br> # gen-try-1 This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1832 | 0.0008 | 1 | 1.5919 | | 0.656 | 0.5003 | 620 | 0.8327 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.1+cu124 - Datasets 3.0.1 - Tokenizers 0.20.3
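The axolotl config above trains a LoRA adapter on top of Qwen/Qwen2.5-14B-Instruct, so a plausible way to use the result is to load the base model and attach this repo as a PEFT adapter. This is a hedged sketch: if the adapter was merged into full weights before upload, loading the repo directly with `AutoModelForCausalLM` would be enough.

```python
# Hedged sketch: base model + LoRA adapter, assuming this repo stores the adapter.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "Qwen/Qwen2.5-14B-Instruct"
adapter_id = "dwikitheduck/gen-try1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what a retrieval-augmented generation pipeline does."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(base.device)
print(tokenizer.decode(model.generate(prompt, max_new_tokens=128)[0], skip_special_tokens=True))
```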
Volko76/Qwen2.5-Coder-3B-Instruct-GGUF
Volko76
2024-11-12T09:41:57Z
54
0
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "autoquant", "text-generation", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-3B", "base_model:quantized:Qwen/Qwen2.5-Coder-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T09:08:52Z
---
base_model:
- Qwen/Qwen2.5-Coder-3B
language:
- en
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- autoquant
- gguf
---

# Qwen2.5-Coder-3B-Instruct

## Introduction

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding data, synthetic data, and more. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. It not only enhances coding capabilities but also maintains strengths in mathematics and general competencies.

**This repo contains the instruction-tuned 3B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).

## Requirements

The code of Qwen2.5-Coder has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

The following code snippet shows how to load the tokenizer and model, apply the chat template with `apply_chat_template`, and generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
mav23/llama-3-firefunction-v2-GGUF
mav23
2024-11-12T09:41:38Z
6
0
null
[ "gguf", "function-calling", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T02:26:34Z
---
license: llama3
tags:
- function-calling
---

# FireFunction V2: Fireworks Function Calling Model

[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)

<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>

FireFunction is a state-of-the-art function calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post).

Key info and highlights:

**Comparison with other models:**
- Competitive with GPT-4o at function-calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT bench
- Significant quality improvements over FireFunction v1 across the broad range of metrics

**General info:**

🐾 Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model

🔆 Support of parallel function calling (unlike FireFunction v1) and good instruction following

💡 Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at < 10% of the cost of GPT 4o and 2x the speed

## Intended Use and Limitations

### Supported use cases

The model was tuned to perform well on a range of use cases, including:

* general instruction following
* multi-turn chat mixing vanilla messages with function calls
* single- and parallel function calling
* up to 20 function specs supported at once
* structured information extraction

The model has an 8k context window, like Llama 3.

### Out-of-Scope Use

The model was not optimized for the following use cases:

* 100+ function specs
* nested function calling

## Metrics

| Benchmark | Firefunction v1 | Firefunction v2 | Llama 3 70b Instruct | Gpt-4o |
|:-----------------------------------|:----------------|:----------------|:---------------------|:-------|
| Gorilla simple | 0.91 | 0.94 | 0.925 | 0.88 |
| Gorilla multiple_function | 0.92 | 0.91 | 0.86 | 0.91 |
| Gorilla parallel_function | 0 | 0.9 | 0.86 | 0.89 |
| Gorilla parallel_multiple_function | 0 | 0.8 | 0.615 | 0.72 |
| Nexus parallel | 0.38 | 0.53 | 0.3 | 0.47 |
| Mtbench | 0.73 | 0.84 | 0.89 | 0.93 |
| Average | 0.49 | 0.82 | 0.74 | 0.8 |

## Example Usage

See [documentation](https://readme.fireworks.ai/docs/function-calling) for more detail.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
from datetime import datetime

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")

function_spec = [
    {
        "name": "get_stock_price",
        "description": "Get the current stock price",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string",
                    "description": "The stock symbol, e.g. AAPL, GOOG"
                }
            },
            "required": [
                "symbol"
            ]
        }
    },
    {
        "name": "check_word_anagram",
        "description": "Check if two words are anagrams of each other",
        "parameters": {
            "type": "object",
            "properties": {
                "word1": {
                    "type": "string",
                    "description": "The first word"
                },
                "word2": {
                    "type": "string",
                    "description": "The second word"
                }
            },
            "required": [
                "word1",
                "word2"
            ]
        }
    }
]

functions = json.dumps(function_spec, indent=4)

messages = [
    {'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
    {'role': 'user', 'content': 'Hi, can you tell me the current stock price of google and netflix?'}
]

now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
model_inputs = tokenizer.apply_chat_template(messages, functions=functions, datetime=now, return_tensors="pt").to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=128)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Resources

* [Fireworks discord with function calling channel](https://discord.gg/mMqQxvFD9A)
* [Documentation](https://readme.fireworks.ai/docs/function-calling)
* [Demo app](https://functional-chat.vercel.app/)
* [Try in Fireworks prompt playground UI](https://fireworks.ai/models/fireworks/firefunction-v2)
pt-sk/ll-3.2-1B_Instruct
pt-sk
2024-11-12T09:40:08Z
146
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T09:30:31Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. 
Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. 
Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training, we used a similar recipe to Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations (see the illustrative sketch below). - The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations. - Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
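To make the scheme above concrete, the following is a minimal, self-contained sketch of the arithmetic it describes: 4-bit groupwise weight quantization with a group size of 32, and 8-bit per-token dynamic quantization of activations. It is illustrative only; the symmetric-quantization choice, function names, and toy tensor shapes are assumptions for the example and do not reflect the actual ExecuTorch/Arm kernels.

```python
# Illustrative sketch only; NOT the ExecuTorch/Arm implementation used for the released
# quantized checkpoints. It shows the arithmetic behind the scheme described above:
# 4-bit groupwise weight quantization (group size 32) and 8-bit per-token dynamic
# activation quantization. Symmetric quantization and the toy shapes are assumptions.
import torch

GROUP_SIZE = 32

def quantize_weight_4bit_groupwise(w: torch.Tensor):
    """Quantize a [out_features, in_features] weight to int4 values, one scale per group of 32."""
    out_f, in_f = w.shape
    groups = w.reshape(out_f, in_f // GROUP_SIZE, GROUP_SIZE)
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0  # int4 range ~ [-8, 7]
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)     # int4 values stored in int8
    return q, scales

def quantize_activation_8bit_per_token(x: torch.Tensor):
    """Dynamically quantize a [tokens, features] activation to int8, one scale per token."""
    scales = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scales), -128, 127).to(torch.int8)
    return q, scales

# Round trip on random tensors to show the reconstruction error the scheme introduces.
w, x = torch.randn(64, 128), torch.randn(4, 128)
qw, w_scales = quantize_weight_4bit_groupwise(w)
qx, x_scales = quantize_activation_8bit_per_token(x)
w_hat = (qw.float() * w_scales).reshape_as(w)
x_hat = qx.float() * x_scales
print("weight reconstruction MSE:", torch.mean((w - w_hat) ** 2).item())
print("activation reconstruction MSE:", torch.mean((x - x_hat) ** 2).item())
```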
### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 
31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. 
Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. 
For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. 
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
CelDom/sandbit.bird
CelDom
2024-11-12T09:34:45Z
6
0
null
[ "safetensors", "distilbert", "license:cc-by-nc-4.0", "region:us" ]
null
2024-11-12T09:31:49Z
--- license: cc-by-nc-4.0 ---
winstonallo/e8-full-prod-data-2xaugmented
winstonallo
2024-11-12T09:32:31Z
109
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T07:58:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
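Since the card's "How to Get Started" section is still a placeholder, here is a minimal loading sketch inferred only from the repository metadata (a `transformers` BERT checkpoint tagged `text-classification`). The example input is arbitrary, and the predicted label names and their meanings are not documented in this card, so this is an assumption-laden illustration rather than official usage instructions.

```python
# Minimal sketch inferred from the repository metadata alone (transformers + BERT +
# text-classification); it is not part of the original card. The example sentence is
# arbitrary, and the predicted labels are whatever the checkpoint was trained with.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="winstonallo/e8-full-prod-data-2xaugmented",
)
print(classifier("Replace this with the kind of text the model was trained on."))
```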
nstrn-mo/bert-finetuned-arcchialogy-ner-hp-tunned-hgf
nstrn-mo
2024-11-12T09:28:54Z
7
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-05T16:00:47Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: bert-finetuned-arcchialogy-ner-hp-tunned-hgf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-arcchialogy-ner-hp-tunned-hgf This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2972 - Precision: 0.5083 - Recall: 0.6667 - F1: 0.5768 - F1 Macro: 0.5149 - F1 Micro: 0.5768 - Classification Report Details: {'B-ART': {'precision': 0.5060606060606061, 'recall': 0.6626984126984127, 'f1-score': 0.5738831615120275, 'support': 252.0}, 'B-CON': {'precision': 0.4375, 'recall': 0.6521739130434783, 'f1-score': 0.5236907730673317, 'support': 161.0}, 'B-LOC': {'precision': 0.8071428571428572, 'recall': 0.7583892617449665, 'f1-score': 0.7820069204152249, 'support': 149.0}, 'B-MAT': {'precision': 0.5357142857142857, 'recall': 0.375, 'f1-score': 0.4411764705882353, 'support': 40.0}, 'B-PER': {'precision': 0.7749360613810742, 'recall': 0.9017857142857143, 'f1-score': 0.8335625859697386, 'support': 336.0}, 'B-SPE': {'precision': 0.4067796610169492, 'recall': 0.7741935483870968, 'f1-score': 0.5333333333333333, 'support': 31.0}, 'I-ART': {'precision': 0.5416666666666666, 'recall': 0.40509915014164305, 'f1-score': 0.46353322528363045, 'support': 353.0}, 'I-CON': {'precision': 0.42857142857142855, 'recall': 0.4830508474576271, 'f1-score': 0.4541832669322709, 'support': 118.0}, 'I-LOC': {'precision': 0.8818565400843882, 'recall': 0.8228346456692913, 'f1-score': 0.8513238289205702, 'support': 254.0}, 'I-MAT': {'precision': 0.4166666666666667, 'recall': 0.13513513513513514, 'f1-score': 0.20408163265306123, 'support': 37.0}, 'I-PER': {'precision': 0.8345679012345679, 'recall': 0.756152125279642, 'f1-score': 0.7934272300469484, 'support': 447.0}, 'I-SPE': {'precision': 0.7666666666666667, 'recall': 0.5476190476190477, 'f1-score': 0.6388888888888888, 'support': 42.0}, 'O': {'precision': 0.9745303118342049, 'recall': 0.97222356407903, 'f1-score': 0.973375571300752, 'support': 20701.0}, 'accuracy': 0.9435888486540727, 'macro avg': {'precision': 0.6394353579261817, 'recall': 0.634335028118545, 'f1-score': 0.6204974529932318, 'support': 22921.0}, 'weighted avg': {'precision': 0.9455450522608214, 'recall': 0.9435888486540727, 'f1-score': 0.9437659943714384, 'support': 22921.0}} - Classfication Report Seqeval: {'ART': {'precision': 0.4061624649859944, 'recall': 0.5753968253968254, 'f1-score': 0.47619047619047616, 'support': 252}, 'CON': {'precision': 0.3779527559055118, 'recall': 0.5962732919254659, 'f1-score': 0.4626506024096385, 'support': 161}, 'LOC': {'precision': 0.6234567901234568, 'recall': 0.6778523489932886, 'f1-score': 0.6495176848874598, 'support': 149}, 'MAT': {'precision': 0.3939393939393939, 'recall': 0.325, 'f1-score': 0.35616438356164376, 'support': 40}, 'PER': {'precision': 0.674937965260546, 'recall': 0.8095238095238095, 'f1-score': 0.7361299052774019, 'support': 336}, 'SPE': {'precision': 0.3064516129032258, 'recall': 0.6129032258064516, 'f1-score': 0.4086021505376344, 'support': 31}, 'micro avg': {'precision': 0.5082612116443745, 'recall': 0.6666666666666666, 'f1-score': 0.5767857142857143, 'support': 969}, 'macro avg': {'precision': 
0.46381683051968814, 'recall': 0.5994915836076402, 'f1-score': 0.5148758671440424, 'support': 969}, 'weighted avg': {'precision': 0.5243912576788156, 'recall': 0.6666666666666666, 'f1-score': 0.5836096720521391, 'support': 969}} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.73381107021748e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | F1 Macro | F1 Micro | Classification Report Details | Classfication Report Seqeval | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:--------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | No log | 1.0 | 249 | 0.2286 | 0.4996 | 0.5841 | 0.5385 | 0.4749 | 0.5385 | {'B-ART': {'precision': 0.5092936802973977, 'recall': 0.5436507936507936, 'f1-score': 0.525911708253359, 'support': 252.0}, 'B-CON': {'precision': 0.4564102564102564, 'recall': 0.5527950310559007, 'f1-score': 0.5, 'support': 161.0}, 'B-LOC': {'precision': 0.8272727272727273, 'recall': 0.610738255033557, 'f1-score': 0.7027027027027027, 'support': 149.0}, 'B-MAT': {'precision': 0.36363636363636365, 'recall': 0.4, 'f1-score': 0.38095238095238093, 'support': 40.0}, 'B-PER': {'precision': 0.8184438040345822, 'recall': 0.8452380952380952, 'f1-score': 0.8316251830161054, 'support': 336.0}, 'B-SPE': {'precision': 0.358974358974359, 'recall': 0.9032258064516129, 'f1-score': 0.5137614678899083, 'support': 31.0}, 'I-ART': {'precision': 0.5942857142857143, 'recall': 0.29461756373937675, 'f1-score': 0.3939393939393939, 'support': 353.0}, 'I-CON': {'precision': 0.5584415584415584, 'recall': 0.3644067796610169, 'f1-score': 0.441025641025641, 'support': 118.0}, 'I-LOC': {'precision': 0.9136690647482014, 'recall': 0.5, 'f1-score': 0.6463104325699746, 'support': 254.0}, 'I-MAT': {'precision': 1.0, 'recall': 0.08108108108108109, 'f1-score': 0.15, 'support': 37.0}, 'I-PER': {'precision': 0.9193548387096774, 'recall': 0.6375838926174496, 'f1-score': 0.7529722589167768, 'support': 447.0}, 'I-SPE': {'precision': 0.6, 'recall': 0.7857142857142857, 'f1-score': 0.6804123711340206, 'support': 42.0}, 'O': {'precision': 0.9631611345234149, 'recall': 0.9826095357712188, 'f1-score': 0.9727881396461023, 'support': 20701.0}, 'accuracy': 0.9415383272981109, 'macro avg': {'precision': 0.6833033462564809, 'recall': 0.5770508553857221, 'f1-score': 0.5763385907727974, 'support': 22921.0}, 'weighted avg': {'precision': 0.9399703169611863, 'recall': 0.9415383272981109, 'f1-score': 0.9376545916465442, 'support': 22921.0}} | {'ART': {'precision': 0.40460526315789475, 'recall': 0.4880952380952381, 'f1-score': 0.4424460431654676, 'support': 252}, 'CON': {'precision': 0.3791469194312796, 'recall': 0.4968944099378882, 'f1-score': 0.4301075268817204, 'support': 161}, 'LOC': {'precision': 0.576, 'recall': 0.48322147651006714, 'f1-score': 0.5255474452554745, 'support': 149}, 'MAT': {'precision': 0.29545454545454547, 'recall': 0.325, 'f1-score': 0.30952380952380953, 'support': 40}, 'PER': {'precision': 0.6958904109589041, 'recall': 0.7559523809523809, 'f1-score': 0.724679029957204, 'support': 336}, 'SPE': {'precision': 0.2857142857142857, 'recall': 0.7741935483870968, 'f1-score': 0.417391304347826, 'support': 31}, 'micro avg': {'precision': 0.499558693733451, 'recall': 0.5841073271413829, 'f1-score': 0.538534728829686, 'support': 969}, 'macro avg': {'precision': 0.4394685707861516, 'recall': 0.5538928423137786, 'f1-score': 0.47494919318858364, 'support': 969}, 'weighted avg': {'precision': 0.5194238215704251, 'recall': 0.5841073271413829, 'f1-score': 0.5447497636017297, 'support': 969}} | | No log | 2.0 | 498 | 0.2315 | 0.5225 | 0.6347 | 0.5732 | 0.5046 | 0.5732 | {'B-ART': {'precision': 0.5032679738562091, 'recall': 0.6111111111111112, 'f1-score': 0.5519713261648745, 'support': 
252.0}, 'B-CON': {'precision': 0.5076142131979695, 'recall': 0.6211180124223602, 'f1-score': 0.5586592178770949, 'support': 161.0}, 'B-LOC': {'precision': 0.7913669064748201, 'recall': 0.738255033557047, 'f1-score': 0.7638888888888888, 'support': 149.0}, 'B-MAT': {'precision': 0.48148148148148145, 'recall': 0.325, 'f1-score': 0.3880597014925373, 'support': 40.0}, 'B-PER': {'precision': 0.8230337078651685, 'recall': 0.8720238095238095, 'f1-score': 0.846820809248555, 'support': 336.0}, 'B-SPE': {'precision': 0.43636363636363634, 'recall': 0.7741935483870968, 'f1-score': 0.5581395348837209, 'support': 31.0}, 'I-ART': {'precision': 0.5707762557077626, 'recall': 0.35410764872521244, 'f1-score': 0.4370629370629371, 'support': 353.0}, 'I-CON': {'precision': 0.44545454545454544, 'recall': 0.4152542372881356, 'f1-score': 0.4298245614035088, 'support': 118.0}, 'I-LOC': {'precision': 0.8625, 'recall': 0.8149606299212598, 'f1-score': 0.8380566801619433, 'support': 254.0}, 'I-MAT': {'precision': 0.3076923076923077, 'recall': 0.10810810810810811, 'f1-score': 0.16, 'support': 37.0}, 'I-PER': {'precision': 0.9085173501577287, 'recall': 0.6442953020134228, 'f1-score': 0.7539267015706806, 'support': 447.0}, 'I-SPE': {'precision': 0.8076923076923077, 'recall': 0.5, 'f1-score': 0.6176470588235294, 'support': 42.0}, 'O': {'precision': 0.968827691719258, 'recall': 0.9788899087000628, 'f1-score': 0.97383280870798, 'support': 20701.0}, 'accuracy': 0.9446359233890319, 'macro avg': {'precision': 0.6472760290510149, 'recall': 0.5967167192121251, 'f1-score': 0.6059915558681731, 'support': 22921.0}, 'weighted avg': {'precision': 0.9430665587612952, 'recall': 0.9446359233890319, 'f1-score': 0.9426405983679316, 'support': 22921.0}} | {'ART': {'precision': 0.4108761329305136, 'recall': 0.5396825396825397, 'f1-score': 0.46655231560891935, 'support': 252}, 'CON': {'precision': 0.4036697247706422, 'recall': 0.546583850931677, 'f1-score': 0.46437994722955145, 'support': 161}, 'LOC': {'precision': 0.5757575757575758, 'recall': 0.6375838926174496, 'f1-score': 0.6050955414012739, 'support': 149}, 'MAT': {'precision': 0.36363636363636365, 'recall': 0.3, 'f1-score': 0.32876712328767127, 'support': 40}, 'PER': {'precision': 0.7112299465240641, 'recall': 0.7916666666666666, 'f1-score': 0.7492957746478872, 'support': 336}, 'SPE': {'precision': 0.32142857142857145, 'recall': 0.5806451612903226, 'f1-score': 0.41379310344827586, 'support': 31}, 'micro avg': {'precision': 0.5225148683092609, 'recall': 0.6346749226006192, 'f1-score': 0.5731593662628146, 'support': 969}, 'macro avg': {'precision': 0.4644330525079552, 'recall': 0.5660270185314425, 'f1-score': 0.5046473009372632, 'support': 969}, 'weighted avg': {'precision': 0.5343678970756114, 'recall': 0.6346749226006192, 'f1-score': 0.5781602085926613, 'support': 969}} | | 0.1508 | 3.0 | 747 | 0.2536 | 0.4917 | 0.6760 | 0.5693 | 0.5163 | 0.5693 | {'B-ART': {'precision': 0.478134110787172, 'recall': 0.6507936507936508, 'f1-score': 0.5512605042016807, 'support': 252.0}, 'B-CON': {'precision': 0.48372093023255813, 'recall': 0.6459627329192547, 'f1-score': 0.5531914893617021, 'support': 161.0}, 'B-LOC': {'precision': 0.7411764705882353, 'recall': 0.8456375838926175, 'f1-score': 0.7899686520376176, 'support': 149.0}, 'B-MAT': {'precision': 0.4107142857142857, 'recall': 0.575, 'f1-score': 0.4791666666666667, 'support': 40.0}, 'B-PER': {'precision': 0.7941952506596306, 'recall': 0.8958333333333334, 'f1-score': 0.8419580419580419, 'support': 336.0}, 'B-SPE': {'precision': 0.4107142857142857, 
'recall': 0.7419354838709677, 'f1-score': 0.5287356321839081, 'support': 31.0}, 'I-ART': {'precision': 0.5204081632653061, 'recall': 0.43342776203966005, 'f1-score': 0.47295208655332305, 'support': 353.0}, 'I-CON': {'precision': 0.45255474452554745, 'recall': 0.5254237288135594, 'f1-score': 0.48627450980392156, 'support': 118.0}, 'I-LOC': {'precision': 0.84251968503937, 'recall': 0.84251968503937, 'f1-score': 0.84251968503937, 'support': 254.0}, 'I-MAT': {'precision': 0.225, 'recall': 0.24324324324324326, 'f1-score': 0.23376623376623376, 'support': 37.0}, 'I-PER': {'precision': 0.8463541666666666, 'recall': 0.727069351230425, 'f1-score': 0.7821901323706378, 'support': 447.0}, 'I-SPE': {'precision': 0.8148148148148148, 'recall': 0.5238095238095238, 'f1-score': 0.6376811594202898, 'support': 42.0}, 'O': {'precision': 0.9769036273461053, 'recall': 0.9705328245012318, 'f1-score': 0.9737078052681319, 'support': 20701.0}, 'accuracy': 0.9431089394005497, 'macro avg': {'precision': 0.6151700411810752, 'recall': 0.6631683771912952, 'f1-score': 0.6287209691255019, 'support': 22921.0}, 'weighted avg': {'precision': 0.9467156556961486, 'recall': 0.9431089394005497, 'f1-score': 0.9442987166110726, 'support': 22921.0}} | {'ART': {'precision': 0.36553524804177545, 'recall': 0.5555555555555556, 'f1-score': 0.4409448818897638, 'support': 252}, 'CON': {'precision': 0.40772532188841204, 'recall': 0.5900621118012422, 'f1-score': 0.48223350253807107, 'support': 161}, 'LOC': {'precision': 0.578125, 'recall': 0.7449664429530202, 'f1-score': 0.6510263929618768, 'support': 149}, 'MAT': {'precision': 0.2835820895522388, 'recall': 0.475, 'f1-score': 0.35514018691588783, 'support': 40}, 'PER': {'precision': 0.6775, 'recall': 0.8065476190476191, 'f1-score': 0.7364130434782609, 'support': 336}, 'SPE': {'precision': 0.3333333333333333, 'recall': 0.6129032258064516, 'f1-score': 0.43181818181818177, 'support': 31}, 'micro avg': {'precision': 0.49174174174174173, 'recall': 0.675954592363261, 'f1-score': 0.5693176879617557, 'support': 969}, 'macro avg': {'precision': 0.44096683213595994, 'recall': 0.6308391591939815, 'f1-score': 0.516262698267007, 'support': 969}, 'weighted avg': {'precision': 0.508994738127951, 'recall': 0.675954592363261, 'f1-score': 0.5787279570875793, 'support': 969}} | | 0.1508 | 4.0 | 996 | 0.2972 | 0.5083 | 0.6667 | 0.5768 | 0.5149 | 0.5768 | {'B-ART': {'precision': 0.5060606060606061, 'recall': 0.6626984126984127, 'f1-score': 0.5738831615120275, 'support': 252.0}, 'B-CON': {'precision': 0.4375, 'recall': 0.6521739130434783, 'f1-score': 0.5236907730673317, 'support': 161.0}, 'B-LOC': {'precision': 0.8071428571428572, 'recall': 0.7583892617449665, 'f1-score': 0.7820069204152249, 'support': 149.0}, 'B-MAT': {'precision': 0.5357142857142857, 'recall': 0.375, 'f1-score': 0.4411764705882353, 'support': 40.0}, 'B-PER': {'precision': 0.7749360613810742, 'recall': 0.9017857142857143, 'f1-score': 0.8335625859697386, 'support': 336.0}, 'B-SPE': {'precision': 0.4067796610169492, 'recall': 0.7741935483870968, 'f1-score': 0.5333333333333333, 'support': 31.0}, 'I-ART': {'precision': 0.5416666666666666, 'recall': 0.40509915014164305, 'f1-score': 0.46353322528363045, 'support': 353.0}, 'I-CON': {'precision': 0.42857142857142855, 'recall': 0.4830508474576271, 'f1-score': 0.4541832669322709, 'support': 118.0}, 'I-LOC': {'precision': 0.8818565400843882, 'recall': 0.8228346456692913, 'f1-score': 0.8513238289205702, 'support': 254.0}, 'I-MAT': {'precision': 0.4166666666666667, 'recall': 0.13513513513513514, 'f1-score': 
0.20408163265306123, 'support': 37.0}, 'I-PER': {'precision': 0.8345679012345679, 'recall': 0.756152125279642, 'f1-score': 0.7934272300469484, 'support': 447.0}, 'I-SPE': {'precision': 0.7666666666666667, 'recall': 0.5476190476190477, 'f1-score': 0.6388888888888888, 'support': 42.0}, 'O': {'precision': 0.9745303118342049, 'recall': 0.97222356407903, 'f1-score': 0.973375571300752, 'support': 20701.0}, 'accuracy': 0.9435888486540727, 'macro avg': {'precision': 0.6394353579261817, 'recall': 0.634335028118545, 'f1-score': 0.6204974529932318, 'support': 22921.0}, 'weighted avg': {'precision': 0.9455450522608214, 'recall': 0.9435888486540727, 'f1-score': 0.9437659943714384, 'support': 22921.0}} | {'ART': {'precision': 0.4061624649859944, 'recall': 0.5753968253968254, 'f1-score': 0.47619047619047616, 'support': 252}, 'CON': {'precision': 0.3779527559055118, 'recall': 0.5962732919254659, 'f1-score': 0.4626506024096385, 'support': 161}, 'LOC': {'precision': 0.6234567901234568, 'recall': 0.6778523489932886, 'f1-score': 0.6495176848874598, 'support': 149}, 'MAT': {'precision': 0.3939393939393939, 'recall': 0.325, 'f1-score': 0.35616438356164376, 'support': 40}, 'PER': {'precision': 0.674937965260546, 'recall': 0.8095238095238095, 'f1-score': 0.7361299052774019, 'support': 336}, 'SPE': {'precision': 0.3064516129032258, 'recall': 0.6129032258064516, 'f1-score': 0.4086021505376344, 'support': 31}, 'micro avg': {'precision': 0.5082612116443745, 'recall': 0.6666666666666666, 'f1-score': 0.5767857142857143, 'support': 969}, 'macro avg': {'precision': 0.46381683051968814, 'recall': 0.5994915836076402, 'f1-score': 0.5148758671440424, 'support': 969}, 'weighted avg': {'precision': 0.5243912576788156, 'recall': 0.6666666666666666, 'f1-score': 0.5836096720521391, 'support': 969}} | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1 - Datasets 3.0.1 - Tokenizers 0.20.1
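The card reports per-entity scores for the ART, CON, LOC, MAT, PER and SPE labels but does not include a usage snippet. Below is a minimal sketch, assuming standard `transformers` token-classification usage; the example sentence is invented for illustration and is not from the training data.

```python
# Illustrative usage sketch; it is not part of the original card. The checkpoint is
# loaded as a standard transformers token-classification pipeline, and the example
# sentence is invented for demonstration.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nstrn-mo/bert-finetuned-arcchialogy-ner-hp-tunned-hgf",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)
for entity in ner("The bronze fibula was excavated near the Roman fort and is kept in the local museum."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```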
featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF
featherless-ai-quants
2024-11-12T09:27:03Z
14
0
null
[ "gguf", "text-generation", "base_model:shenzhi-wang/Llama3.1-8B-Chinese-Chat", "base_model:quantized:shenzhi-wang/Llama3.1-8B-Chinese-Chat", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T09:15:37Z
--- base_model: shenzhi-wang/Llama3.1-8B-Chinese-Chat pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # shenzhi-wang/Llama3.1-8B-Chinese-Chat GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/shenzhi-wang-Llama3.1-8B-Chinese-Chat-GGUF/blob/main/shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
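The table above lists the quantized files but not how to run them. One common option is the `llama-cpp-python` bindings; the sketch below assumes the Q4_K_M file has already been downloaded locally, and the path, context size, and prompt are placeholder examples rather than recommendations from the model authors.

```python
# Illustrative sketch only; this is not part of the original model card.
# It runs one of the GGUF files listed above with the llama-cpp-python bindings.
# Assumption: the Q4_K_M file was downloaded beforehand to the local path below;
# the path, context size, and prompt are placeholder examples.
from llama_cpp import Llama

llm = Llama(
    model_path="./shenzhi-wang-Llama3.1-8B-Chinese-Chat-Q4_K_M.gguf",
    n_ctx=4096,  # context window to allocate for the session
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用一句话介绍一下你自己。"}],  # "Introduce yourself in one sentence."
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```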
BFS-Search/llama-3.2-3b-DoCRED
BFS-Search
2024-11-12T09:24:50Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T09:22:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF
featherless-ai-quants
2024-11-12T09:22:22Z
15
0
null
[ "gguf", "text-generation", "base_model:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1", "base_model:quantized:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T09:11:44Z
--- base_model: DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF/blob/main/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - 
Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
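As a rough illustration only, the sketch below shows one way to fetch and run the Q4_K_M file from the table above locally; it relies on the optional `huggingface_hub` and `llama-cpp-python` packages, and the context size and prompt are assumptions rather than recommendations from this repository.

```python
# Hedged sketch: download one of the GGUF files listed above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python` (not part of this repo's docs).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename taken verbatim from the quantization table above.
gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF",
    filename="DiscoResearch-Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an assumed value

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short poem about autumn."}]
)
print(result["choices"][0]["message"]["content"])
```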
featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF
featherless-ai-quants
2024-11-12T09:21:15Z
7
0
null
[ "gguf", "text-generation", "base_model:OpenPipe/Hermes-2-Theta-Llama-3-8B-32k", "base_model:quantized:OpenPipe/Hermes-2-Theta-Llama-3-8B-32k", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-08T12:51:17Z
--- base_model: OpenPipe/Hermes-2-Theta-Llama-3-8B-32k pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # OpenPipe/Hermes-2-Theta-Llama-3-8B-32k GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-IQ4_XS.gguf) | 4276.62 MB | | Q2_K | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_L.gguf) | 4121.74 MB | | Q3_K_M | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_M.gguf) | 3832.74 MB | | Q3_K_S | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q3_K_S.gguf) | 3494.74 MB | | Q4_K_M | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q5_K_M.gguf) | 5467.40 MB | | Q5_K_S | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q5_K_S.gguf) | 5339.90 MB | | Q6_K | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q6_K.gguf) | 6290.44 MB | | Q8_0 | [OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-GGUF/blob/main/OpenPipe-Hermes-2-Theta-Llama-3-8B-32k-Q8_0.gguf) | 8145.11 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
HZeroxium/bert-finetuned-ner
HZeroxium
2024-11-12T09:20:23Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-12T09:08:43Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9364947769855745 - name: Recall type: recall value: 0.9505217098619994 - name: F1 type: f1 value: 0.943456109579888 - name: Accuracy type: accuracy value: 0.9869311826690998 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0594 - Precision: 0.9365 - Recall: 0.9505 - F1: 0.9435 - Accuracy: 0.9869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0775 | 1.0 | 1756 | 0.0648 | 0.9045 | 0.9359 | 0.9199 | 0.9825 | | 0.0375 | 2.0 | 3512 | 0.0653 | 0.9250 | 0.9424 | 0.9336 | 0.9846 | | 0.0223 | 3.0 | 5268 | 0.0594 | 0.9365 | 0.9505 | 0.9435 | 0.9869 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
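The card above reports metrics and hyperparameters but no usage snippet; a minimal example of running the checkpoint with the 🤗 `pipeline` API could look like the following (the aggregation strategy and sample sentence are illustrative choices, not part of the original card).

```python
# Illustrative usage of the NER checkpoint described above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HZeroxium/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Hugging Face is a company based in New York City."))
# Returns a list of dicts with entity_group, score, word, start and end offsets.
```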
RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf
RichardErkhov
2024-11-12T09:17:58Z
5
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T08:08:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) flan-summarizer-v0 - GGUF - Model creator: https://huggingface.co/Promptengineering/ - Original model: https://huggingface.co/Promptengineering/flan-summarizer-v0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [flan-summarizer-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q2_K.gguf) | Q2_K | 0.4GB | | [flan-summarizer-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [flan-summarizer-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q3_K.gguf) | Q3_K | 0.51GB | | [flan-summarizer-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [flan-summarizer-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [flan-summarizer-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [flan-summarizer-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q4_0.gguf) | Q4_0 | 0.59GB | | [flan-summarizer-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [flan-summarizer-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [flan-summarizer-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q4_K.gguf) | Q4_K | 0.62GB | | [flan-summarizer-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [flan-summarizer-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q4_1.gguf) | Q4_1 | 0.65GB | | [flan-summarizer-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q5_0.gguf) | Q5_0 | 0.71GB | | [flan-summarizer-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [flan-summarizer-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q5_K.gguf) | Q5_K | 0.73GB | | [flan-summarizer-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [flan-summarizer-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q5_1.gguf) | Q5_1 | 0.77GB | | [flan-summarizer-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q6_K.gguf) | Q6_K | 0.84GB 
| | [flan-summarizer-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/Promptengineering_-_flan-summarizer-v0-gguf/blob/main/flan-summarizer-v0.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/Qwen2.5-Coder-3B-Instruct-GGUF
QuantFactory
2024-11-12T09:13:24Z
126
1
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "text-generation", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-3B", "base_model:quantized:Qwen/Qwen2.5-Coder-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T08:49:56Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE language: - en base_model: - Qwen/Qwen2.5-Coder-3B pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Qwen2.5-Coder-3B-Instruct-GGUF This is a quantized version of [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) created using llama.cpp # Original Model Card # Qwen2.5-Coder-3B-Instruct ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o. - A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies. **This repo contains the instruction-tuned 3B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 3.09B - Number of Parameters (Non-Embedding): 2.77B - Number of Layers: 36 - Number of Attention Heads (GQA): 16 for Q and 2 for KV - Context Length: Full 32,768 tokens For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-Coder-3B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "write a quick sort algorithm." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud.
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
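The quickstart above targets the original full-precision checkpoint; to use the GGUF files in this quantized repository instead, one option is the `llama-cpp-python` bindings, sketched below. Treat this purely as a sketch: the filename pattern is a guess and should be replaced with an actual file from this repository's file listing.

```python
# Hedged sketch: run one of this repo's GGUF quantizations via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Qwen2.5-Coder-3B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed glob; check the repo for the exact filename
    n_ctx=8192,               # assumed context size, adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "write a quick sort algorithm."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```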
qilowoq/AbLang_light
qilowoq
2024-11-12T09:10:32Z
188
3
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "chemistry", "biology", "protein", "antibodies", "antibody", "light chain", "AbLang", "CDR", "OAS", "custom_code", "license:bsd", "autotrain_compatible", "region:us" ]
fill-mask
2023-04-29T01:46:18Z
--- license: bsd tags: - chemistry - biology - protein - antibodies - antibody - light chain - AbLang - CDR - OAS --- ### AbLang model for light chains This is a 🤗 version of AbLang: A language model for antibodies. It was introduced in [this paper](https://doi.org/10.1101/2022.01.20.477061) and first released in [this repository](https://github.com/oxpig/AbLang). This model is trained on uppercase amino acids: it only works with capital letter amino acids. ### Intended uses & limitations The model can be used for protein feature extraction or fine-tuned on downstream tasks (TBA). ### How to use Here is how to use this model to get the features of a given antibody sequence in PyTorch: ```python import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('qilowoq/AbLang_light') model = AutoModel.from_pretrained('qilowoq/AbLang_light', trust_remote_code=True) sequence_Example = ' '.join("GSELTQDPAVSVALGQTVRITCQGDSLRNYYASWYQQKPRQAPVLVFYGKNNRPSGIPDRFSGSSSGNTASLTISGAQAEDEADYYCNSRDSSSNHLVFGGGTKLTVLSQ") encoded_input = tokenizer(sequence_Example, return_tensors='pt') model_output = model(**encoded_input) ``` Sequence embeddings can be produced as follows: ```python def get_sequence_embeddings(encoded_input, model_output): mask = encoded_input['attention_mask'].float() d = {k: v for k, v in torch.nonzero(mask).cpu().numpy()} # dict of sep tokens # make sep token invisible for i in d: mask[i, d[i]] = 0 mask[:, 0] = 0.0 # make cls token invisible mask = mask.unsqueeze(-1).expand(model_output.last_hidden_state.size()) sum_embeddings = torch.sum(model_output.last_hidden_state * mask, 1) sum_mask = torch.clamp(mask.sum(1), min=1e-9) return sum_embeddings / sum_mask seq_embeds = get_sequence_embeddings(encoded_input, model_output) ``` ### Fine-tune To save memory we recommend using [LoRA](https://doi.org/10.48550/arXiv.2106.09685): ```bash pip install git+https://github.com/huggingface/peft.git pip install loralib ``` LoRA greatly reduces the number of trainable parameters and performs on par with, or better than, fine-tuning the full model. ```python from peft import LoraConfig, get_peft_model def apply_lora_bert(model): config = LoraConfig( r=8, lora_alpha=32, lora_dropout=0.3, target_modules=['query', 'value'] ) for param in model.parameters(): param.requires_grad = False # freeze the model - train adapters later if param.ndim == 1: # cast the small parameters (e.g. layernorm) to fp32 for stability param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() # reduce number of stored activations model.enable_input_require_grads() model = get_peft_model(model, config) return model model = apply_lora_bert(model) model.print_trainable_parameters() # trainable params: 294912 || all params: 85493760 || trainable%: 0.3449514911965505 ``` ### Citation ``` @article{Olsen2022, title={AbLang: An antibody language model for completing antibody sequences}, author={Tobias H. Olsen, Iain H. Moal and Charlotte M. Deane}, journal={bioRxiv}, doi={https://doi.org/10.1101/2022.01.20.477061}, year={2022} } ```
featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF
featherless-ai-quants
2024-11-12T09:10:18Z
33
0
null
[ "gguf", "text-generation", "base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS", "base_model:quantized:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T08:28:57Z
--- base_model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-IQ4_XS.gguf) | 6485.04 MB | | Q2_K | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q2_K.gguf) | 4569.10 MB | | Q3_K_L | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_L.gguf) | 6257.54 MB | | Q3_K_M | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_M.gguf) | 5801.29 MB | | Q3_K_S | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q3_K_S.gguf) | 5277.85 MB | | Q4_K_M | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q4_K_M.gguf) | 7130.82 MB | | Q4_K_S | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q4_K_S.gguf) | 6790.35 MB | | Q5_K_M | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q5_K_M.gguf) | 8323.32 MB | | Q5_K_S | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q5_K_S.gguf) | 8124.10 MB | | Q6_K | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q6_K.gguf) | 9590.35 MB | | Q8_0 | [DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF/blob/main/DavidAU-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-Q8_0.gguf) | 12419.10 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace 
instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
ManukyanD/gemma-doc-vqa-v6-checkpoint-2
ManukyanD
2024-11-12T09:09:15Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T08:51:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID Another checkpoint for [ManukyanD/gemma-doc-vqa-v6](https://huggingface.co/ManukyanD/gemma-doc-vqa-v6). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF
featherless-ai-quants
2024-11-12T09:05:03Z
15
0
null
[ "gguf", "text-generation", "base_model:Epiculous/Azure_Dusk-v0.2", "base_model:quantized:Epiculous/Azure_Dusk-v0.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T08:29:39Z
--- base_model: Epiculous/Azure_Dusk-v0.2 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Epiculous/Azure_Dusk-v0.2 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Epiculous-Azure_Dusk-v0.2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-IQ4_XS.gguf) | 6485.04 MB | | Q2_K | [Epiculous-Azure_Dusk-v0.2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q2_K.gguf) | 4569.10 MB | | Q3_K_L | [Epiculous-Azure_Dusk-v0.2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q3_K_L.gguf) | 6257.54 MB | | Q3_K_M | [Epiculous-Azure_Dusk-v0.2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q3_K_M.gguf) | 5801.29 MB | | Q3_K_S | [Epiculous-Azure_Dusk-v0.2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q3_K_S.gguf) | 5277.85 MB | | Q4_K_M | [Epiculous-Azure_Dusk-v0.2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q4_K_M.gguf) | 7130.82 MB | | Q4_K_S | [Epiculous-Azure_Dusk-v0.2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q4_K_S.gguf) | 6790.35 MB | | Q5_K_M | [Epiculous-Azure_Dusk-v0.2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q5_K_M.gguf) | 8323.32 MB | | Q5_K_S | [Epiculous-Azure_Dusk-v0.2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q5_K_S.gguf) | 8124.10 MB | | Q6_K | [Epiculous-Azure_Dusk-v0.2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q6_K.gguf) | 9590.35 MB | | Q8_0 | [Epiculous-Azure_Dusk-v0.2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Azure_Dusk-v0.2-GGUF/blob/main/Epiculous-Azure_Dusk-v0.2-Q8_0.gguf) | 12419.10 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
2z299/aya-expanse-32b-GPTQ-4bit
2z299
2024-11-12T08:49:48Z
17
1
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "base_model:CohereForAI/aya-expanse-32b", "base_model:quantized:CohereForAI/aya-expanse-32b", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-11-12T08:45:08Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: - CohereForAI/aya-expanse-32b --- ## License This work was created by modifying "Aya Expanse 32B" © Cohere. The model is provided under the [CC BY-NC 4.0](https://cohere.com/c4ai-cc-by-nc-license) license, and its use must comply with [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
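The card only covers licensing, so the following is an unverified sketch of how a 4-bit GPTQ checkpoint like this one is typically loaded through 🤗 Transformers; it assumes a GPTQ backend (for example `optimum` with `auto-gptq` or `gptqmodel`) is installed, and the prompt and generation settings are illustrative.

```python
# Unverified sketch: load and query the 4-bit GPTQ quantization of Aya Expanse 32B.
# Assumes a GPTQ-capable backend (e.g. optimum + auto-gptq) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "2z299/aya-expanse-32b-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "こんにちは。簡単に自己紹介してください。"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)  # generation length is an assumed value
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```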
procit006/training_tts_nl_v1.0.6_saskia2
procit006
2024-11-12T08:37:54Z
110
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-11-12T08:37:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
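The card itself is still the empty template, but the repository tags mark this as a VITS text-to-audio checkpoint for Dutch; the following unverified sketch shows how such a checkpoint is usually driven through the generic `VitsModel` API in Transformers. Whether this particular checkpoint ships a compatible tokenizer is an assumption, and the Dutch sample sentence is illustrative.

```python
# Unverified sketch: synthesize speech with the generic VITS classes from transformers.
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

model_id = "procit006/training_tts_nl_v1.0.6_saskia2"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumed to be bundled with the checkpoint

inputs = tokenizer("Hallo, dit is een korte testzin.", return_tensors="pt")  # illustrative Dutch input
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, num_samples)

scipy.io.wavfile.write("output.wav", model.config.sampling_rate, waveform[0].numpy())
```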
Kiranontimitta/results
Kiranontimitta
2024-11-12T08:37:32Z
165
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-classification", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T08:34:33Z
--- library_name: transformers license: llama3.2 base_model: meta-llama/Llama-3.2-1B-Instruct tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.7606 - Accuracy: 0.5179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 8.9872 | 1.0 | 6269 | 8.8959 | 0.0 | | 8.8705 | 2.0 | 12538 | 8.6337 | 0.0359 | | 7.5473 | 3.0 | 18807 | 4.7606 | 0.5179 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
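No usage example is given above; a hedged sketch of scoring text with the fine-tuned classifier through the `pipeline` API follows. Because the training data is undocumented ("the None dataset"), the meaning of the predicted labels is an open question and the sample sentence is purely illustrative.

```python
# Illustrative sketch: run the fine-tuned Llama-3.2-1B sequence classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Kiranontimitta/results",
)

print(classifier("This is an example sentence to classify."))
# Output looks like [{'label': ..., 'score': ...}]; label semantics are not documented in the card.
```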
AIFunOver/OpenCoder-1.5B-Instruct-openvino-8bit
AIFunOver
2024-11-12T08:36:37Z
12
0
transformers
[ "transformers", "openvino", "llama", "text-generation", "nncf", "8-bit", "conversational", "en", "zh", "dataset:OpenCoder-LLM/opencoder-sft-stage1", "dataset:OpenCoder-LLM/opencoder-sft-stage2", "base_model:infly/OpenCoder-1.5B-Instruct", "base_model:quantized:infly/OpenCoder-1.5B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T08:30:20Z
--- base_model: infly/OpenCoder-1.5B-Instruct datasets: - OpenCoder-LLM/opencoder-sft-stage1 - OpenCoder-LLM/opencoder-sft-stage2 language: - en - zh library_name: transformers license: other license_name: inf license_link: https://huggingface.co/infly/OpenCoder-1.5B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - openvino - nncf - 8-bit base_model_relation: quantized --- This model is a quantized version of [`infly/OpenCoder-1.5B-Instruct`](https://huggingface.co/infly/OpenCoder-1.5B-Instruct) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel). First make sure you have `optimum-intel` installed: ```bash pip install optimum[openvino] ``` To load your model you can do as follows: ```python from optimum.intel import OVModelForCausalLM model_id = "AIFunOver/OpenCoder-1.5B-Instruct-openvino-8bit" model = OVModelForCausalLM.from_pretrained(model_id) ```
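The snippet above stops at loading the model; as a hedged continuation (the chat-template usage and generation length are assumptions, not from the original card), generation can be wired up with the bundled tokenizer as follows.

```python
# Hedged continuation: generate text with the OpenVINO model loaded above.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "AIFunOver/OpenCoder-1.5B-Instruct-openvino-8bit"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)  # trust_remote_code is an assumption

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=256)  # assumed generation length
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```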
shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8
shanginn
2024-11-12T08:36:26Z
14
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "code", "codeqwen", "chat", "qwen", "qwen-coder", "mlx", "conversational", "en", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2024-11-12T06:11:53Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-32B-Instruct pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder - mlx --- # shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8 The Model [shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8](https://huggingface.co/shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8) was converted to MLX format from [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using mlx-lm version **0.19.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
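For a coder model, the generic "hello" prompt above is not very illustrative; the variation below reuses the same `mlx_lm` API with a code-generation request, where `max_tokens` is an assumed setting rather than a recommendation from this repository.

```python
# Illustrative variation of the snippet above with a code-generation prompt.
from mlx_lm import load, generate

model, tokenizer = load("shanginn/Qwen2.5-Coder-32B-Instruct-mlx-q8")

messages = [{"role": "user", "content": "Write a quick sort algorithm in Python."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# max_tokens is an assumed value; increase it for longer completions.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```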
QuantFactory/Qwen2.5-Coder-0.5B-Instruct-GGUF
QuantFactory
2024-11-12T08:34:09Z
136
1
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "text-generation", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-0.5B", "base_model:quantized:Qwen/Qwen2.5-Coder-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T08:27:50Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE language: - en base_model: - Qwen/Qwen2.5-Coder-0.5B pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Qwen2.5-Coder-0.5B-Instruct-GGUF This is a quantized version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) created using llama.cpp # Original Model Card # Qwen2.5-Coder-0.5B-Instruct ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o. - A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies. **This repo contains the instruction-tuned 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Parameters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-Coder-0.5B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "write a quick sort algorithm." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud.
You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
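The quickstart above targets the original full-precision checkpoint; since this repository hosts GGUF quantizations, a minimal sketch of running one of the quant files locally with `llama-cpp-python` may also be useful. The model path below is a placeholder — substitute whichever `.gguf` file from this repo you downloaded.

```python
# Minimal sketch (not from the original card): run a downloaded GGUF quant of
# Qwen2.5-Coder-0.5B-Instruct with llama-cpp-python. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/Qwen2.5-Coder-0.5B-Instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, a helpful coding assistant."},
        {"role": "user", "content": "write a quick sort algorithm."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```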
phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF
phamhai
2024-11-12T08:16:59Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "vi", "base_model:phamhai/Llama-3.2-3B-Instruct-Frog", "base_model:quantized:phamhai/Llama-3.2-3B-Instruct-Frog", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T08:16:48Z
--- license: llama3.2 language: - en - vi base_model: phamhai/Llama-3.2-3B-Instruct-Frog pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF This model was converted to GGUF format from [`phamhai/Llama-3.2-3B-Instruct-Frog`](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-frog-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-frog-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-frog-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-frog-q4_k_m.gguf -c 2048 ```
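Once `llama-server` is running as shown above, it exposes an OpenAI-compatible HTTP endpoint, so the model can also be queried from Python. The sketch below assumes the server's default address (`http://localhost:8080`); adjust the host and port if you started the server differently.

```python
# Sketch: query a locally running llama-server through its OpenAI-compatible API.
# Assumes the default address http://localhost:8080; the model field is only a label.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama-3.2-3b-instruct-frog",  # label only; the server answers with the loaded GGUF
    messages=[{"role": "user", "content": "Xin chào! Bạn có thể làm gì?"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```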
heegyu/Llama-3.2-1B-Instruct-vis64k
heegyu
2024-11-12T08:14:16Z
146
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T08:12:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DopeorNope/GPT4obased-Math7Bs
DopeorNope
2024-11-12T08:13:23Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T08:07:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neeleshg23/jamba-from-hf
neeleshg23
2024-11-12T08:12:07Z
11
0
transformers
[ "transformers", "safetensors", "jamba", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T07:59:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
v1v1d/Nayana_all_lora_64_combined_0.8b
v1v1d
2024-11-12T08:11:32Z
48
0
transformers
[ "transformers", "safetensors", "GOT", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-11-12T08:02:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ayush01122004/Task-Classifier
Ayush01122004
2024-11-12T08:09:57Z
8
0
null
[ "tf", "bert", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:mit", "region:us" ]
null
2024-11-12T07:44:42Z
--- license: mit base_model: - google-bert/bert-base-uncased ---
ag4sh1/Translate4Good
ag4sh1
2024-11-12T08:08:17Z
5
0
null
[ "safetensors", "marian", "translation", "UN", "en", "es", "base_model:Helsinki-NLP/opus-mt-en-es", "base_model:finetune:Helsinki-NLP/opus-mt-en-es", "license:apache-2.0", "region:us" ]
translation
2024-11-10T12:36:39Z
--- license: apache-2.0 language: - en - es base_model: - Helsinki-NLP/opus-mt-en-es pipeline_tag: translation tags: - translation - UN --- Fine-tuned MarianMT for UN document translation tasks.
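As a quick orientation, here is a minimal usage sketch (not part of the original card) for English-to-Spanish translation with the `transformers` pipeline; the sample sentence is only an illustration.

```python
# Minimal sketch: English -> Spanish translation with this MarianMT fine-tune.
from transformers import pipeline

translator = pipeline("translation", model="ag4sh1/Translate4Good")

text = "The General Assembly adopted the resolution without a vote."
print(translator(text, max_length=256)[0]["translation_text"])
```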
vinai/PhoWhisper-medium
vinai
2024-11-12T07:47:44Z
1,909
9
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "vi", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-18T05:50:02Z
--- license: bsd-3-clause language: - vi --- # PhoWhisper: Automatic Speech Recognition for Vietnamese We introduce **PhoWhisper** in five versions for Vietnamese automatic speech recognition. PhoWhisper's robustness is achieved through fine-tuning the multilingual [Whisper](https://github.com/openai/whisper) on an 844-hour dataset that encompasses diverse Vietnamese accents. Our experimental study demonstrates state-of-the-art performances of PhoWhisper on benchmark Vietnamese ASR datasets. Please **cite** our PhoWhisper paper when it is used to help produce published results or is incorporated into other software: ``` @inproceedings{PhoWhisper, title = {{PhoWhisper: Automatic Speech Recognition for Vietnamese}}, author = {Thanh-Thien Le and Linh The Nguyen and Dat Quoc Nguyen}, booktitle = {Proceedings of the ICLR 2024 Tiny Papers track}, year = {2024} } ``` For further information or requests, please go to [PhoWhisper's homepage](https://github.com/VinAIResearch/PhoWhisper)!
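The card does not include a usage snippet; a minimal transcription sketch with the `transformers` ASR pipeline follows. The audio path is a placeholder for any Vietnamese recording readable by ffmpeg.

```python
# Minimal sketch: transcribe a Vietnamese audio file with PhoWhisper-medium.
# "audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vinai/PhoWhisper-medium")
print(asr("audio.wav")["text"])
```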
vinai/PhoWhisper-small
vinai
2024-11-12T07:47:33Z
708
7
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "vi", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-18T05:01:39Z
--- license: bsd-3-clause language: - vi --- # PhoWhisper: Automatic Speech Recognition for Vietnamese We introduce **PhoWhisper** in five versions for Vietnamese automatic speech recognition. PhoWhisper's robustness is achieved through fine-tuning the multilingual [Whisper](https://github.com/openai/whisper) on an 844-hour dataset that encompasses diverse Vietnamese accents. Our experimental study demonstrates state-of-the-art performances of PhoWhisper on benchmark Vietnamese ASR datasets. Please **cite** our PhoWhisper paper when it is used to help produce published results or is incorporated into other software: ``` @inproceedings{PhoWhisper, title = {{PhoWhisper: Automatic Speech Recognition for Vietnamese}}, author = {Thanh-Thien Le and Linh The Nguyen and Dat Quoc Nguyen}, booktitle = {Proceedings of the ICLR 2024 Tiny Papers track}, year = {2024} } ``` For further information or requests, please go to [PhoWhisper's homepage](https://github.com/VinAIResearch/PhoWhisper)!
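For longer recordings, the same pipeline can transcribe in chunks and return timestamps; a minimal sketch follows. The audio path is a placeholder, and the 30-second chunk length is just a common default.

```python
# Sketch: long-form Vietnamese transcription with PhoWhisper-small,
# processing the audio in 30-second chunks and returning timestamps per chunk.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vinai/PhoWhisper-small")
result = asr("long_recording.wav", chunk_length_s=30, return_timestamps=True)  # placeholder path

print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```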
vinai/PhoGPT-4B-Chat
vinai
2024-11-12T07:44:43Z
8,830
32
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "conversational", "custom_code", "vi", "arxiv:2311.02945", "license:bsd-3-clause", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T12:57:34Z
--- license: bsd-3-clause language: - vi --- # PhoGPT: Generative Pre-training for Vietnamese We open-source a state-of-the-art 4B-parameter generative model series for Vietnamese, which includes the base pre-trained monolingual model PhoGPT-4B and its chat variant, PhoGPT-4B-Chat. The base model, PhoGPT-4B, with exactly 3.7B parameters, is pre-trained from scratch on a Vietnamese corpus of 102B tokens, with an 8192 context length, employing a vocabulary of 20480 token types. The chat variant, PhoGPT-4B-Chat, is the modeling output obtained by fine-tuning PhoGPT-4B on a dataset of 70K instructional prompts and their responses, along with an additional 290K conversations. We demonstrate its superior performance compared to previous open-source models. More details about the general architecture and experimental results of PhoGPT can be found in our [technical report](https://arxiv.org/abs/2311.02945): ``` @article{PhoGPT, title = {{PhoGPT: Generative Pre-training for Vietnamese}}, author = {Dat Quoc Nguyen and Linh The Nguyen and Chi Tran and Dung Ngoc Nguyen and Dinh Phung and Hung Bui}, journal = {arXiv preprint}, volume = {arXiv:2311.02945}, year = {2023} } ``` **Please CITE** our technical report when PhoGPT is used to help produce published results or is incorporated into other software. For further information or requests, please go to [PhoGPT's homepage](https://github.com/VinAIResearch/PhoGPT)!
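PhoGPT-4B-Chat is built on a custom MPT-style architecture (note the `custom_code` tag), so loading it requires `trust_remote_code=True`. Below is a minimal generation sketch; the bare prompt is only an illustration — consult PhoGPT's homepage for the exact instruction format the chat variant was fine-tuned with.

```python
# Minimal sketch: text generation with PhoGPT-4B-Chat.
# trust_remote_code=True is needed because the repo ships custom architecture code.
# The plain prompt below is illustrative only; see PhoGPT's homepage for the exact
# instruction template used during chat fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/PhoGPT-4B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

prompt = "Viết một đoạn văn ngắn giới thiệu về Hà Nội."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```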
kechengcode/final-qwen-4b-distill-alpaca-lora-alpaca_dataset
kechengcode
2024-11-12T07:43:25Z
5
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2024-11-12T06:00:44Z
A model obtained by training on the English Alpaca dataset.
featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF
featherless-ai-quants
2024-11-12T07:40:15Z
21
0
null
[ "gguf", "text-generation", "base_model:ghost-x/ghost-7b-v0.9.1", "base_model:quantized:ghost-x/ghost-7b-v0.9.1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T07:31:23Z
--- base_model: ghost-x/ghost-7b-v0.9.1 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # ghost-x/ghost-7b-v0.9.1 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [ghost-x-ghost-7b-v0.9.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-IQ4_XS.gguf) | 3761.66 MB | | Q2_K | [ghost-x-ghost-7b-v0.9.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q2_K.gguf) | 2593.27 MB | | Q3_K_L | [ghost-x-ghost-7b-v0.9.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q3_K_L.gguf) | 3644.97 MB | | Q3_K_M | [ghost-x-ghost-7b-v0.9.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q3_K_M.gguf) | 3355.97 MB | | Q3_K_S | [ghost-x-ghost-7b-v0.9.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q3_K_S.gguf) | 3017.97 MB | | Q4_K_M | [ghost-x-ghost-7b-v0.9.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q4_K_M.gguf) | 4166.07 MB | | Q4_K_S | [ghost-x-ghost-7b-v0.9.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q4_K_S.gguf) | 3948.57 MB | | Q5_K_M | [ghost-x-ghost-7b-v0.9.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q5_K_M.gguf) | 4893.69 MB | | Q5_K_S | [ghost-x-ghost-7b-v0.9.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q5_K_S.gguf) | 4766.19 MB | | Q6_K | [ghost-x-ghost-7b-v0.9.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q6_K.gguf) | 5666.80 MB | | Q8_0 | [ghost-x-ghost-7b-v0.9.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF/blob/main/ghost-x-ghost-7b-v0.9.1-Q8_0.gguf) | 7339.34 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
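As a starting point, the sketch below downloads one of the files listed above with `huggingface_hub` and runs it with `llama-cpp-python`; Q4_K_M is chosen only as an example, and any other quant from the table works the same way.

```python
# Sketch: download one of the quants listed above and run it with llama-cpp-python.
# Q4_K_M is just an example choice; swap in any filename from the table.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/ghost-x-ghost-7b-v0.9.1-GGUF",
    filename="ghost-x-ghost-7b-v0.9.1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```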
benito14/SOIT_Model
benito14
2024-11-12T07:38:59Z
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-12T07:38:37Z
--- base_model: unsloth/Llama-3.2-1B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** benito14 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf
RichardErkhov
2024-11-12T07:38:46Z
90
0
null
[ "gguf", "arxiv:2402.17733", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T01:11:33Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TowerInstruct-13B-v0.1 - GGUF - Model creator: https://huggingface.co/Unbabel/ - Original model: https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TowerInstruct-13B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q2_K.gguf) | Q2_K | 4.52GB | | [TowerInstruct-13B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [TowerInstruct-13B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q3_K.gguf) | Q3_K | 5.9GB | | [TowerInstruct-13B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [TowerInstruct-13B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [TowerInstruct-13B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [TowerInstruct-13B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q4_0.gguf) | Q4_0 | 6.86GB | | [TowerInstruct-13B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [TowerInstruct-13B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [TowerInstruct-13B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q4_K.gguf) | Q4_K | 7.33GB | | [TowerInstruct-13B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [TowerInstruct-13B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q4_1.gguf) | Q4_1 | 7.61GB | | [TowerInstruct-13B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q5_0.gguf) | Q5_0 | 8.36GB | | [TowerInstruct-13B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | [TowerInstruct-13B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q5_K.gguf) | Q5_K | 8.6GB | | [TowerInstruct-13B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | [TowerInstruct-13B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q5_1.gguf) | Q5_1 | 9.1GB | | 
[TowerInstruct-13B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q6_K.gguf) | Q6_K | 9.95GB | | [TowerInstruct-13B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerInstruct-13B-v0.1-gguf/blob/main/TowerInstruct-13B-v0.1.Q8_0.gguf) | Q8_0 | 12.88GB | Original model description: --- license: cc-by-nc-4.0 language: - en - de - fr - zh - pt - nl - ru - ko - it - es metrics: - comet pipeline_tag: translation --- # Model Card for TowerInstruct-13B-v0.1 ## Model Details ### Model Description TowerInstruct-13B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-13B-v0.1 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post edition, named-entity recognition, gramatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. For now, you can check results obtained with the model [here](https://unbabel.com/announcing-tower-an-open-multilingual-llm-for-translation-related-tasks/). - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 13B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. - **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-13B-v0.1) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2)), which contains a diverse range of data sources: - Translation (sentence and paragraph-level) - Automatic Post Edition - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2) here. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-13B-v0.1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=False) print(outputs[0]["generated_text"]) # <|im_start|>user # Translate the following text from Portuguese into English. # Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução. # English:<|im_end|> # <|im_start|>assistant # A group of researchers has launched a new model for translation-related tasks. ``` ### Out-of-Scope Use The model is not guaranteed to perform for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator. ## Bias, Risks, and Limitations TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Prompt Format TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows: ``` <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` ### Supervised tasks The prompts for all supervised tasks can be found in [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal. ## Training Details ### Training Data Link to [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2). #### Training Hyperparameters The following hyperparameters were used during training: - total_train_batch_size: 256 - learning_rate: 7e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - weight_decay: 0.01 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - num_epochs: 4 - max_seq_length: 2048 ## Citation ```bibtex @misc{tower_llm_2024, title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks}, author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. 
Martins}, year={2024}, eprint={2402.17733}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
hataphu/wav2vec2-vi-300m
hataphu
2024-11-12T07:37:15Z
5
1
null
[ "safetensors", "wav2vec2", "automatic-speech-recognition", "vi", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:mit", "region:us" ]
automatic-speech-recognition
2024-11-11T11:16:35Z
--- license: mit language: - vi metrics: - wer base_model: - facebook/wav2vec2-xls-r-300m pipeline_tag: automatic-speech-recognition --- I fine-tuned this model on 15 GB of audio data, achieving a WER of 24.46. ## How to use ```python import torch from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import torchaudio mydevice = 'cuda' processor = Wav2Vec2Processor.from_pretrained("hataphu/wav2vec2-vi-300m") model = Wav2Vec2ForCTC.from_pretrained("hataphu/wav2vec2-vi-300m") model.to(mydevice) model.eval() audio_input, sampling_rate = torchaudio.load('audio-path-file') input_values = processor( audio_input.squeeze().numpy(), sampling_rate=sampling_rate ).input_values[0] logits = model(torch.tensor(input_values).unsqueeze(0).to(mydevice)).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.decode(predicted_ids[0]) print(transcription) ```
lebeda/my-LLaMA-final
lebeda
2024-11-12T07:36:33Z
35
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "generated_from_trainer", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2024-11-11T20:29:54Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: my-LLaMA-final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-LLaMA-final This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.46.2 - Pytorch 2.0.1+cu118 - Datasets 3.1.0 - Tokenizers 0.20.3
DiatWork/GPT-Neox-MentalHealth-Finetune
DiatWork
2024-11-12T07:33:49Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T07:31:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Imkaran/twitter-roberta-base-sentiment-latest_12112024T123630
Imkaran
2024-11-12T07:30:06Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-12T07:29:44Z
--- library_name: transformers base_model: cardiffnlp/twitter-roberta-base-sentiment-latest tags: - generated_from_trainer metrics: - f1 model-index: - name: twitter-roberta-base-sentiment-latest_12112024T123630 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-sentiment-latest_12112024T123630 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5160 - F1: 0.8689 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 600 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Rate | |:-------------:|:-------:|:----:|:---------------:|:------:|:------:| | No log | 0.9942 | 43 | 1.7742 | 0.1656 | 7e-07 | | No log | 1.9884 | 86 | 1.7368 | 0.2208 | 0.0000 | | No log | 2.9827 | 129 | 1.6531 | 0.3182 | 0.0000 | | No log | 4.0 | 173 | 1.5111 | 0.4169 | 0.0000 | | No log | 4.9942 | 216 | 1.3427 | 0.4913 | 0.0000 | | No log | 5.9884 | 259 | 1.1750 | 0.5379 | 0.0000 | | No log | 6.9827 | 302 | 1.0970 | 0.5486 | 5e-06 | | No log | 8.0 | 346 | 1.0081 | 0.5856 | 0.0000 | | No log | 8.9942 | 389 | 0.9728 | 0.5991 | 0.0000 | | No log | 9.9884 | 432 | 0.9005 | 0.6481 | 0.0000 | | No log | 10.9827 | 475 | 0.8614 | 0.6640 | 0.0000 | | 1.2671 | 12.0 | 519 | 0.7905 | 0.7202 | 0.0000 | | 1.2671 | 12.9942 | 562 | 0.7560 | 0.7367 | 0.0000 | | 1.2671 | 13.9884 | 605 | 0.7399 | 0.7421 | 1e-05 | | 1.2671 | 14.9827 | 648 | 0.6596 | 0.7804 | 0.0000 | | 1.2671 | 16.0 | 692 | 0.6331 | 0.7966 | 0.0000 | | 1.2671 | 16.9942 | 735 | 0.6272 | 0.7994 | 0.0000 | | 1.2671 | 17.9884 | 778 | 0.5878 | 0.8249 | 0.0000 | | 1.2671 | 18.9827 | 821 | 0.5564 | 0.8386 | 0.0000 | | 1.2671 | 20.0 | 865 | 0.5482 | 0.8474 | 0.0000 | | 1.2671 | 20.9942 | 908 | 0.5523 | 0.8501 | 0.0000 | | 1.2671 | 21.9884 | 951 | 0.5309 | 0.8534 | 0.0000 | | 1.2671 | 22.9827 | 994 | 0.5364 | 0.8582 | 4e-06 | | 0.4473 | 24.0 | 1038 | 0.5176 | 0.8638 | 3e-06 | | 0.4473 | 24.9942 | 1081 | 0.5256 | 0.8663 | 0.0000 | | 0.4473 | 25.9884 | 1124 | 0.5182 | 0.8691 | 0.0000 | | 0.4473 | 26.9827 | 1167 | 0.5237 | 0.8680 | 8e-07 | | 0.4473 | 28.0 | 1211 | 0.5160 | 0.8689 | 3e-07 | | 0.4473 | 28.9942 | 1254 | 0.5216 | 0.8673 | 1e-07 | | 0.4473 | 29.8266 | 1290 | 0.5220 | 0.8670 | 0.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.19.1
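The card reports training details but no inference snippet; a minimal sketch with the text-classification pipeline is given below. The label set comes from the checkpoint's config and is not documented in the card, so inspect the returned labels rather than assuming specific sentiment classes.

```python
# Minimal inference sketch for this fine-tuned classifier. The label names are
# whatever the checkpoint's config defines; the card does not document them.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Imkaran/twitter-roberta-base-sentiment-latest_12112024T123630",
)
print(clf("The delivery was quick and the support team was very helpful."))
```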
hueda2214/bert-base-japanese-v3-ner-wikipedia-ner
hueda2214
2024-11-12T07:29:55Z
106
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-12T07:29:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
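The usage section above is empty; a minimal sketch, assuming this checkpoint works with the standard `transformers` token-classification pipeline and that the usual Japanese BERT v3 tokenizer dependencies (e.g. `fugashi`, `unidic-lite`) are installed:

```python
from transformers import pipeline

# Entity labels come from the fine-tuned config; aggregation merges sub-word pieces into spans.
ner = pipeline(
    "token-classification",
    model="hueda2214/bert-base-japanese-v3-ner-wikipedia-ner",
    aggregation_strategy="simple",
)
print(ner("夏目漱石は1905年に『吾輩は猫である』を発表した。"))
```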
featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF
featherless-ai-quants
2024-11-12T07:29:05Z
15
0
null
[ "gguf", "text-generation", "base_model:Gunulhona/Hermes-Llama-Merge", "base_model:quantized:Gunulhona/Hermes-Llama-Merge", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T07:17:33Z
--- base_model: Gunulhona/Hermes-Llama-Merge pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # Gunulhona/Hermes-Llama-Merge GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [Gunulhona-Hermes-Llama-Merge-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-IQ4_XS.gguf) | 4276.63 MB | | Q2_K | [Gunulhona-Hermes-Llama-Merge-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q2_K.gguf) | 3031.86 MB | | Q3_K_L | [Gunulhona-Hermes-Llama-Merge-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q3_K_L.gguf) | 4121.75 MB | | Q3_K_M | [Gunulhona-Hermes-Llama-Merge-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q3_K_M.gguf) | 3832.75 MB | | Q3_K_S | [Gunulhona-Hermes-Llama-Merge-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q3_K_S.gguf) | 3494.75 MB | | Q4_K_M | [Gunulhona-Hermes-Llama-Merge-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q4_K_M.gguf) | 4692.78 MB | | Q4_K_S | [Gunulhona-Hermes-Llama-Merge-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q4_K_S.gguf) | 4475.28 MB | | Q5_K_M | [Gunulhona-Hermes-Llama-Merge-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q5_K_M.gguf) | 5467.41 MB | | Q5_K_S | [Gunulhona-Hermes-Llama-Merge-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q5_K_S.gguf) | 5339.91 MB | | Q6_K | [Gunulhona-Hermes-Llama-Merge-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q6_K.gguf) | 6290.45 MB | | Q8_0 | [Gunulhona-Hermes-Llama-Merge-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF/blob/main/Gunulhona-Hermes-Llama-Merge-Q8_0.gguf) | 8145.12 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
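A minimal download sketch, assuming `huggingface_hub` is installed; the filename is taken from the Q4_K_M row in the table above, and the local directory is an arbitrary choice:

```python
from huggingface_hub import hf_hub_download

# Downloads a single quantization file from this repository.
path = hf_hub_download(
    repo_id="featherless-ai-quants/Gunulhona-Hermes-Llama-Merge-GGUF",
    filename="Gunulhona-Hermes-Llama-Merge-Q4_K_M.gguf",
    local_dir="./models",
)
print(path)
```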
bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF
bartowski
2024-11-12T07:24:14Z
276
0
null
[ "gguf", "text-generation", "base_model:migtissera/Tess-R1-Ballad-Mistral-Large-2-123B", "base_model:quantized:migtissera/Tess-R1-Ballad-Mistral-Large-2-123B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-11-12T00:38:38Z
--- quantized_by: bartowski pipeline_tag: text-generation license_name: mistral-research-licence license_link: https://mistral.ai/licenses/MRL-0.1.md base_model: neurolattice/Tess-R1-Ballad-Mistral-Large-2-123B license: other --- ## Llamacpp imatrix Quantizations of Tess-R1-Ballad-Mistral-Large-2-123B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4058">b4058</a> for quantization. Original model: https://huggingface.co/neurolattice/Tess-R1-Ballad-Mistral-Large-2-123B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q8_0.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q8_0) | Q8_0 | 130.28GB | true | Extremely high quality, generally unneeded but max available quant. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q6_K_L.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q6_K_L) | Q6_K_L | 100.78GB | true | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q6_K.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q6_K) | Q6_K | 100.59GB | true | Very high quality, near perfect, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_L.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_L) | Q5_K_L | 86.74GB | true | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_M) | Q5_K_M | 86.49GB | true | High quality, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_S.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q5_K_S) | Q5_K_S | 84.36GB | true | High quality, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_L.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_L) | Q4_K_L | 73.52GB | true | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_M) | Q4_K_M | 73.22GB | true | Good quality, default size for most use cases, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_S.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_S) | Q4_K_S | 69.57GB | true | Slightly lower quality with more space savings, *recommended*. 
| | [Tess-R1-Ballad-Mistral-Large-2-123B-Q4_0.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q4_0) | Q4_0 | 69.32GB | true | Legacy format, generally not worth using over similarly sized formats | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ4_NL.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ4_NL) | IQ4_NL | 69.22GB | true | Similar to IQ4_XS, but slightly larger. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ4_XS.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ4_XS) | IQ4_XS | 65.43GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_XL) | Q3_K_XL | 64.91GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_L.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_L) | Q3_K_L | 64.55GB | true | Lower quality but usable, good for low RAM availability. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_M) | Q3_K_M | 59.10GB | true | Low quality. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_M) | IQ3_M | 55.28GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_S.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q3_K_S) | Q3_K_S | 52.85GB | true | Low quality, not recommended. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_XS.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/tree/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_XS) | IQ3_XS | 50.14GB | true | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ3_XXS.gguf) | IQ3_XXS | 47.01GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q2_K_L.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q2_K_L.gguf) | Q2_K_L | 45.59GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Tess-R1-Ballad-Mistral-Large-2-123B-Q2_K.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-Q2_K.gguf) | Q2_K | 45.20GB | false | Very low quality but surprisingly usable. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_M.gguf) | IQ2_M | 41.62GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. 
| | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_S.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_S.gguf) | IQ2_S | 38.38GB | false | Low quality, uses SOTA techniques to be usable. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_XS.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_XS.gguf) | IQ2_XS | 36.08GB | false | Low quality, uses SOTA techniques to be usable. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ2_XXS.gguf) | IQ2_XXS | 32.43GB | false | Very low quality, uses SOTA techniques to be usable. | | [Tess-R1-Ballad-Mistral-Large-2-123B-IQ1_M.gguf](https://huggingface.co/bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF/blob/main/Tess-R1-Ballad-Mistral-Large-2-123B-IQ1_M.gguf) | IQ1_M | 28.39GB | false | Extremely low quality, *not* recommended. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF --include "Tess-R1-Ballad-Mistral-Large-2-123B-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Tess-R1-Ballad-Mistral-Large-2-123B-GGUF --include "Tess-R1-Ballad-Mistral-Large-2-123B-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Tess-R1-Ballad-Mistral-Large-2-123B-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. 
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
XelotX/Qwen2.5-Coder-32B-Instruct-iQuants
XelotX
2024-11-12T07:23:26Z
212
0
null
[ "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-11-12T07:23:25Z
--- quantized_by: bartowski pipeline_tag: text-generation language: - en license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE tags: - code - codeqwen - chat - qwen - qwen-coder base_model: Qwen/Qwen2.5-Coder-32B-Instruct license: apache-2.0 --- ## Llamacpp imatrix Quantizations of Qwen2.5-Coder-32B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Qwen2.5-Coder-32B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. | | [Qwen2.5-Coder-32B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. 
| | [Qwen2.5-Coder-32B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, generally not worth using over similarly sized formats | | [Qwen2.5-Coder-32B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. | | [Qwen2.5-Coder-32B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 18.64GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. | | [Qwen2.5-Coder-32B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 18.64GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. | | [Qwen2.5-Coder-32B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 18.64GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. | | [Qwen2.5-Coder-32B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Qwen2.5-Coder-32B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Qwen2.5-Coder-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. | | [Qwen2.5-Coder-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. | | [Qwen2.5-Coder-32B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. | | [Qwen2.5-Coder-32B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Qwen2.5-Coder-32B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. 
| | [Qwen2.5-Coder-32B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 12.84GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Qwen2.5-Coder-32B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. | | [Qwen2.5-Coder-32B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [Qwen2.5-Coder-32B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. | | [Qwen2.5-Coder-32B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. | | [Qwen2.5-Coder-32B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 9.03GB | false | Very low quality, uses SOTA techniques to be usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Qwen2.5-Coder-32B-Instruct-GGUF --include "Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Qwen2.5-Coder-32B-Instruct-GGUF --include "Qwen2.5-Coder-32B-Instruct-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Qwen2.5-Coder-32B-Instruct-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. 
Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
XelotX/Qwen2.5-Coder-32B-Instruct-Quants
XelotX
2024-11-12T07:23:07Z
236
1
transformers
[ "transformers", "gguf", "code", "codeqwen", "chat", "qwen", "qwen-coder", "text-generation", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-12T07:23:07Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/LICENSE language: - en base_model: - Qwen/Qwen2.5-Coder-32B-Instruct pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder --- # Qwen2.5-Coder-32B-Instruct-GGUF ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. - A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. - **Long-context Support** up to 128K tokens. **This repo contains the instruction-tuned 32B Qwen2.5-Coder model in the GGUF Format**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 32.5B - Number of Parameters (Non-Embedding): 31.0B - Number of Layers: 64 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: Full 32,768 tokens - Note: Currently, only vLLM supports YARN for length extrapolating. If you want to process sequences up to 131,072 tokens, please refer to non-GGUF models. - Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Quickstart Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for more usage guidance. We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp. In the following demonstration, we assume that you are running commands under the repository `llama.cpp`. Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`: 1. Install ```shell pip install -U huggingface_hub ``` 2. Download: ```shell huggingface-cli download Qwen/Qwen2.5-Coder-32B-Instruct-GGUF --include "qwen2.5-coder-32b-instruct-q5_k_m*.gguf" --local-dir . --local-dir-use-symlinks False ``` For large files, we split them into multiple segments due to the limitation of file upload. They share a prefix, with a suffix indicating its index. For example, `qwen2.5-coder-32b-instruct-q5_k_m-00001-of-00003.gguf`, `qwen2.5-coder-32b-instruct-q5_k_m-00002-of-00003.gguf` and `qwen2.5-coder-32b-instruct-q5_k_m-00003-of-00003.gguf`. The above command will download all of them. 3. 
(Optional) Merge: For split files, you need to merge them first with the command `llama-gguf-split` as shown below: ```bash # ./llama-gguf-split --merge <first-split-file-path> <merged-file-path> ./llama-gguf-split --merge qwen2.5-coder-32b-instruct-q5_k_m-00001-of-00003.gguf qwen2.5-coder-32b-instruct-q5_k_m.gguf ``` For users, to achieve chatbot-like experience, it is recommended to commence in the conversation mode: ```shell ./llama-cli -m <gguf-file-path> \ -co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \ -fa -ngl 80 -n 512 ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2. 5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
MLking2/medical_helper
MLking2
2024-11-12T07:22:47Z
147
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-12T07:20:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
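The card above leaves usage unspecified; a minimal sketch, assuming this is an ordinary causal-LM checkpoint loadable with `transformers` (and `accelerate` for `device_map="auto"`), with an illustrative prompt only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "MLking2/medical_helper"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Short generation; outputs are not a substitute for professional medical advice.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
print(generator("What are common causes of persistent headaches?")[0]["generated_text"])
```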
unsloth/Qwen2.5-Math-1.5B-Instruct
unsloth
2024-11-12T07:21:22Z
5,213
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "en", "arxiv:2409.12122", "base_model:Qwen/Qwen2.5-Math-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-23T05:42:23Z
--- base_model: Qwen/Qwen2.5-Math-1.5B-Instruct language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # Qwen2.5-Math-1.5B-Instruct > [!Warning] > <div align="center"> > <b> > 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks. > </b> > </div> ## Introduction In August 2024, we released the first series of mathematical LLMs - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/) - of our Qwen family. A month later, we have upgraded it and open-sourced **Qwen2.5-Math** series, including base models **Qwen2.5-Math-1.5B/7B/72B**, instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and mathematical reward model **Qwen2.5-Math-RM-72B**. 
Unlike Qwen2-Math series which only supports using Chain-of-Thought (CoT) to solve English math problems, Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements compared to the Qwen2-Math series models on the Chinese and English mathematics benchmarks with CoT. ![](http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/qwen2.5-math-pipeline.jpeg) While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR. ## Model Details For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math). ## Requirements * `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended. > [!Warning] > <div align="center"> > <b> > 🚨 This is a must because <code>transformers</code> integrated Qwen2 codes since <code>4.37.0</code>. > </b> > </div> For requirements on GPU memory and the respective throughput, see similar results of Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Quick Start > [!Important] > > **Qwen2.5-Math-1.5B-Instruct** is an instruction model for chatting; > > **Qwen2.5-Math-1.5B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning. > ### 🤗 Hugging Face Transformers Qwen2.5-Math can be deployed and inferred in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). Here we show a code snippet on how to use the chat model with `transformers`: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-Math-1.5B-Instruct" device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$." # CoT messages = [ {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."}, {"role": "user", "content": prompt} ] # TIR messages = [ {"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Citation If you find our work helpful, feel free to give us a citation. 
``` @article{yang2024qwen25mathtechnicalreportmathematical, title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement}, author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang}, journal={arXiv preprint arXiv:2409.12122}, year={2024} } ```
Soorya03/Llama-2-7b-FitnessAssistant
Soorya03
2024-11-12T07:07:41Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-29T15:03:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ### Model Details ### Model Description This model is a fine-tuned version of Llama-2-7b-chat, optimized for tasks related to fitness assistance. It has been trained to provide recommendations, answer questions, and perform related language-based tasks within the fitness and exercise domain. - **Developed by:** Soorya03 - **Finetuned from model:** meta-llama/Llama-2-7b-chat-hf - **Model Type:** Causal Language Model with LoRA fine-tuning - **Language(s):** English - **License:** Refer to the original model’s license - **Model Repository:** Soorya03/Llama-2-7b-chat-finetune ## Uses ### Direct Use This model is intended for interactive fitness and exercise assistance, such as providing exercise recommendations, suggesting workout routines, and answering general fitness-related questions. ### Downstream Use May be adapted to various other fitness or health-oriented conversational applications. ### Out-of-Scope Use Not suitable for medical or professional health advice. Avoid use cases where specialized knowledge or regulated health guidelines are required. ### Bias, Risks, and Limitations - **Potential Bias:** The model was fine-tuned on a limited dataset and might not cover all fitness-related questions with cultural or demographic sensitivity. - **Limitations:** Not a replacement for professional medical advice. ### Recommendations Users should be aware that the model's responses are based on general fitness knowledge and are not specialized medical guidance. ## How to Get Started with the Model ! pip install accelerate peft bitsandbytes transformers trl import torch from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, pipeline device_map = {"": 0} compute_dtype = getattr(torch, "float16") bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype="float16", bnb_4bit_use_double_quant=False, ) model = AutoModelForCausalLM.from_pretrained( "Soorya03/Llama-2-7b-FitnessAssistant", quantization_config=bnb_config, device_map=device_map ) model.config.use_cache = False model.config.pretraining_tp = 1 tokenizer = AutoTokenizer.from_pretrained("Soorya03/Llama-2-7b-FitnessAssistant", trust_remote_code=True) prompt = "Is it possible to build muscle while losing weight?" pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200) result = pipe(f"[INST] {prompt} [/INST]") print(result[0]['generated_text']) ## Training Details ### Training Data The model was fine-tuned on a fitness and exercise dataset (onurSakar/GYM-Exercise) to improve its domain knowledge in providing fitness-related responses. <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Method:** LoRA fine-tuning on top of Llama-2-7b-chat. - **Hyperparameters:** Adjusted learning rate, FP16 precision for efficiency. - **Compute:** Training was performed on Google Colab with a single GPU. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ### Testing Data & Metrics #### Testing Data Sample fitness-related prompts were used for evaluation, but a formal benchmarking dataset was not utilized. #### Metrics Manual qualitative assessments showed the model’s suitability for fitness Q&A and general suggestions. ### Results The model effectively generates coherent responses related to fitness, workouts, and exercise routines, with accurate language comprehension. ## Environmental Impact ### Compute Infrastructure - **Hardware Type:** Google Colab (NVIDIA GPU) ## Model Architecture and Objective This model is based on the Llama-2-7b-chat architecture, adapted to provide conversational responses within a specific fitness domain.
weiser/124M-0.1
weiser
2024-11-12T07:01:38Z
149
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "IoT", "sensor", "embedded", "en", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-05T14:37:50Z
--- license: apache-2.0 datasets: - HuggingFaceFW/fineweb language: - en library_name: transformers tags: - IoT - sensor - embedded --- # TinyLLM ## Overview This repository hosts a small language model developed as part of the TinyLLM framework ([arxiv link]). These models are specifically designed and fine-tuned with sensor data to support embedded sensing applications. They enable locally hosted language models on low-computing-power devices, such as single-board computers. The models, based on the GPT-2 architecture, are trained using Nvidia's H100 GPUs. This repo provides base models that can be further fine-tuned for specific downstream tasks related to embedded sensing. ## Model Information - **Parameters:** 124M (Hidden Size = 768) - **Architecture:** Decoder-only transformer - **Training Data:** Up to 10B tokens from the [SHL](http://www.shl-dataset.org/) and [Fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) datasets, combined in a 1:9 ratio - **Input and Output Modality:** Text - **Context Length:** 1024 ## Acknowledgements We want to acknowledge the open-source frameworks [llm.c](https://github.com/karpathy/llm.c) and [llama.cpp](https://github.com/ggerganov/llama.cpp) and the sensor dataset provided by SHL, which were instrumental in training and testing these models. ## Usage The model can be used in two primary ways: 1. **With Hugging Face’s Transformers Library** ```python from transformers import pipeline import torch path = "tinyllm/124M-0.1" prompt = "The sea is blue but it's his red sea" generator = pipeline("text-generation", model=path,max_new_tokens = 30, repetition_penalty=1.3, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto") print(generator(prompt)[0]['generated_text']) ``` 2. **With llama.cpp** Generate a GGUF model file using this [tool](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) and use the generated GGUF file for inferencing. ```python python3 convert_hf_to_gguf.py models/mymodel/ ``` ## Disclaimer This model is intended solely for research purposes.