modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
Kansallisarkisto/finbert-ner
Kansallisarkisto
2024-10-21T06:01:58Z
1117
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "fi", "base_model:TurkuNLP/bert-base-finnish-cased-v1", "base_model:finetune:TurkuNLP/bert-base-finnish-cased-v1", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-27T12:38:39Z
---
license: mit
language:
- fi
metrics:
- f1
- precision
- recall
library_name: transformers
pipeline_tag: token-classification
base_model:
- TurkuNLP/bert-base-finnish-cased-v1
---

## Finnish named entity recognition

The model performs named entity recognition on Finnish text input. It was trained by fine-tuning [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1), using 10 named entity categories. The training data includes, for instance, the [Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one), the Finnish part of the [NewsEye dataset](https://zenodo.org/record/4573313), and an annotated dataset consisting of Finnish document data from the 1970s onwards, digitized by the National Archives of Finland. Since the latter dataset also contains sensitive data, it has not been made publicly available.

An example of how the model can be used for named entity recognition is provided in this [Colab notebook](https://colab.research.google.com/drive/1-koUCwz4aU_UvyZxSW-Awwf5RQftFk5m). The motivations behind model development and the data selection and annotation processes are described in more detail in the article [Making sense of bureaucratic documents – Named entity recognition for state authority archives](https://library.imaging.org/archiving/articles/21/1/2).

## Intended uses & limitations

The model has been trained to recognize the following named entities from Finnish text:

- PERSON (person names)
- ORG (organizations)
- LOC (locations)
- GPE (geopolitical locations)
- PRODUCT (products)
- EVENT (events)
- DATE (dates)
- JON (Finnish journal numbers (diaarinumero))
- FIBC (Finnish business identity codes (y-tunnus))
- NORP (nationality, religious and political groups)

Some entities, like EVENT and LOC, are less common in the training data than the others, which means that recognition accuracy for these entities also tends to be lower. Most of the training data is relatively recent, so the model might face difficulties when the input contains, for example, old names or writing styles.

## How to use

The easiest way to use the model is through the Transformers pipeline for token classification:

```python
from transformers import pipeline

model_checkpoint = "Kansallisarkisto/finbert-ner"
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
predictions = token_classifier("Helsingistä tuli Suomen suuriruhtinaskunnan pääkaupunki vuonna 1812.")
print(predictions)
```

## Training data

Some of the entities (for instance WORK_OF_ART, LAW, MONEY) that are annotated in the [Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one) dataset were filtered out from the dataset used for training the model. On the other hand, entities that were missing from the [NewsEye dataset](https://zenodo.org/record/4573313) were added during the annotation process.
The different data sources used in model training, validation and testing are listed below:

Dataset|Period covered by the texts|Text type|Percentage of the total data
-|-|-|-
[Turku OntoNotes Entities Corpus](https://github.com/TurkuNLP/turku-one)|2000s|Online texts|23%
[NewsEye dataset](https://zenodo.org/record/4573313)|1850-1950|OCR'd digitized newspaper articles|3%
Diverse document data from Finnish public administration|1970s - 2000s|OCR'd digitized documents|69%
Finnish senate documents|1916|Part manually transcribed, part HTR'd digitized documents|3%
Finnish books from [Project Gutenberg](https://www.gutenberg.org)|Early 20th century|OCR'd texts|1%
Theses from Finnish polytechnic universities|2000s|OCR'd texts|1%

The number of entities belonging to the different entity classes contained in the training, validation and test datasets is listed below:

### Number of entity types in the data

Dataset|PERSON|ORG|LOC|GPE|PRODUCT|EVENT|DATE|JON|FIBC|NORP
-|-|-|-|-|-|-|-|-|-|-
Train|20211|45722|1321|19387|9571|1616|23642|2460|2384|2529
Val|2525|5517|130|2512|1217|240|3047|306|247|283
Test|2414|5577|179|2445|1097|183|2838|272|374|356

## Training procedure

This model was trained using an NVIDIA RTX A6000 GPU with the following hyperparameters:

- learning rate: 2e-05
- train batch size: 24
- epochs: 10
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- scheduler: linear scheduler with num_warmup_steps=round(len(train_dataloader)/5) and num_training_steps=len(train_dataloader)*epochs
- maximum length of data sequence: 512
- patience: 2 epochs
- classifier dropout: 0.3

In the preprocessing stage, the input texts were split into chunks with a maximum length of 300 tokens, in order to avoid the tokenized chunks exceeding the maximum length of 512 (a sketch of this chunking step is given after this card). Tokenization was performed using the tokenizer for the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model. The training code with instructions is available on [GitHub](https://github.com/DALAI-project/Train_BERT_NER).

## Evaluation results

Evaluation results on the test dataset are listed below:

||Precision|Recall|F1-score
-|-|-|-
PERSON|0.90|0.91|0.90
ORG|0.84|0.87|0.86
LOC|0.84|0.86|0.85
GPE|0.91|0.91|0.91
PRODUCT|0.73|0.77|0.75
EVENT|0.69|0.73|0.71
DATE|0.90|0.92|0.91
JON|0.83|0.95|0.89
FIBC|0.95|0.99|0.97
NORP|0.91|0.95|0.93

The metrics were calculated using the [seqeval](https://github.com/chakki-works/seqeval) library.

## Acknowledgements

The model was developed in an ERDF-funded project "Using Artificial Intelligence to Improve the Quality and Usability of Digital Records" (Dalai) in 2021-2023. The purpose of the project was to develop the automation of the digitisation of cultural heritage materials and the automated description of such materials through artificial intelligence. The main target group comprises memory organisations, archives, museums and libraries that digitise and provide digital materials to their customers, as well as companies that develop services related to digitisation and the processing of digital materials. Project partners were the National Archives of Finland, Central Archives for Finnish Business Records (Elka), South-Eastern Finland University of Applied Sciences Ltd (Xamk) and Disec Ltd.

The selection and definition of the named entity categories, the formulation of the annotation guidelines and the annotation process have been carried out in cooperation with the [FIN-CLARIAH research infrastructure / University of Jyväskylä](https://jyu.fi/fin-clariah).
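The chunking step referenced in the training procedure above is only described in prose; the following is a minimal, hedged sketch of one way it could be implemented. The helper function and its greedy word-by-word strategy are assumptions for illustration, not the project's actual preprocessing code (that code is in the linked GitHub repository).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")

def split_into_chunks(text: str, max_tokens: int = 300):
    """Greedily pack whitespace-separated words into chunks of at most max_tokens subword tokens."""
    chunks, current = [], []
    for word in text.split():
        candidate = current + [word]
        # Count subword tokens for the candidate chunk (without special tokens).
        if current and len(tokenizer.tokenize(" ".join(candidate))) > max_tokens:
            chunks.append(" ".join(current))
            current = [word]
        else:
            current = candidate
    if current:
        chunks.append(" ".join(current))
    return chunks
```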
itsme-nishanth/Sparky-SQL-Llama-3.2-1B
itsme-nishanth
2024-10-21T06:00:35Z
141
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "Sparky-SQL-Llama-3.2-1B", "conversational", "en", "dataset:shreeyashm/SQL-Queries-Dataset", "base_model:unsloth/Llama-3.2-1B", "base_model:finetune:unsloth/Llama-3.2-1B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-19T13:59:59Z
---
library_name: transformers
tags:
- unsloth
- Sparky-SQL-Llama-3.2-1B
license: apache-2.0
datasets:
- shreeyashm/SQL-Queries-Dataset
language:
- en
base_model:
- unsloth/Llama-3.2-1B
pipeline_tag: text-generation
---

# Model Card for Sparky-SQL-Llama-3.2-1B

## Model Details

### Model Description

The model was fine-tuned from the Llama-3.2-1B base model on SQL query data.

## How to Get Started with the Model

```python
from transformers import pipeline

model_id = "itsme-nishanth/Sparky-SQL-Llama-3.2-1B"
pipe = pipeline("text-generation", model_id, device="cuda")
messages = [
    {"role": "user", "content": "list down the product names and its type provided by vendor 'vanhelsing' from 'products' table?"},
]
print(pipe(messages, max_new_tokens=100)[0]['generated_text'][-1])  # Print the assistant's response
```

- **Developed by:** Nishanth
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** Llama-3.2-1B

## Training Details

### Training Data

- SQL Dataset: [shreeyashm/SQL-Queries-Dataset](https://huggingface.co/shreeyashm/SQL-Queries-Dataset)

### Training Procedure

#### Preprocessing

The dataset had empty records, which were removed before training.

#### Training Hyperparameters

- **Training regime:** (a hedged sketch of how these values fit into a training configuration is shown after this card)
  - gradient_accumulation_steps = 4
  - warmup_steps = 5
  - max_steps = 60
  - learning_rate = 2e-4
  - fp16 = not is_bfloat16_supported()
  - bf16 = is_bfloat16_supported()
  - optim = "adamw_8bit"
  - weight_decay = 0.01
  - lr_scheduler_type = "linear"
  - seed = 3407

## Technical Specifications

#### Hardware

- Google Colab (Tesla T4)

#### Software

- Transformers
- Unsloth

## Model Card Contact

[email protected]
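As referenced above, here is a minimal, hedged sketch of how the listed hyperparameters could be assembled into a Hugging Face `TrainingArguments` object for an Unsloth-based fine-tuning run. The batch size, output directory and the surrounding trainer setup are assumptions, since the card does not state them.

```python
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

# Values below are taken from the card's "Training Hyperparameters" list; anything marked
# as an assumption is not stated in the card.
training_args = TrainingArguments(
    output_dir="outputs",                  # assumption
    per_device_train_batch_size=2,         # assumption: not given in the card
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not is_bfloat16_supported(),
    bf16=is_bfloat16_supported(),
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
)
# These arguments would then be passed to a TRL SFTTrainer wrapping the
# Unsloth-loaded unsloth/Llama-3.2-1B model and the cleaned SQL dataset.
```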
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf
RichardErkhov
2024-10-21T05:50:45Z
16
0
null
[ "gguf", "arxiv:2305.18290", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T05:20:39Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3 - GGUF - Model creator: https://huggingface.co/RyanYr/ - Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q2_K.gguf) | Q2_K | 1.39GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_XS.gguf) | IQ3_XS | 1.53GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_S.gguf) | IQ3_S | 1.59GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_S.gguf) | Q3_K_S | 1.59GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ3_M.gguf) | IQ3_M | 1.65GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K.gguf) | Q3_K | 1.73GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_M.gguf) | Q3_K_M | 1.73GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q3_K_L.gguf) | Q3_K_L | 1.85GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ4_XS.gguf) | IQ4_XS | 1.91GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_0.gguf) | Q4_0 | 1.99GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.IQ4_NL.gguf) | IQ4_NL | 2.0GB | | 
[self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K_S.gguf) | Q4_K_S | 2.0GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K.gguf) | Q4_K | 2.09GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K_M.gguf) | Q4_K_M | 2.09GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_1.gguf) | Q4_1 | 2.18GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_0.gguf) | Q5_0 | 2.37GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K_S.gguf) | Q5_K_S | 2.37GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K.gguf) | Q5_K | 2.41GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_K_M.gguf) | Q5_K_M | 2.41GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q5_1.gguf) | Q5_1 | 2.55GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q6_K.gguf) | Q6_K | 2.76GB | | [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q8_0.gguf) | Q8_0 | 3.58GB | Original model description: --- base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2 library_name: transformers model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3 This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2). 
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/27c74iwj)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```

Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
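The card above only lists the available GGUF files; as a hedged, illustrative sketch (not part of the original card), one of them could be downloaded and run locally with llama-cpp-python roughly as follows. The choice of the Q4_K_M file, the context size and the prompt are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3-gguf",
    filename="self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter3.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 13? Check your answer."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```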
RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf
RichardErkhov
2024-10-21T05:49:15Z
45
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T05:22:45Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3.2-JametMini-3B-MK.III - GGUF - Model creator: https://huggingface.co/Hastagaras/ - Original model: https://huggingface.co/Hastagaras/L3.2-JametMini-3B-MK.III/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3.2-JametMini-3B-MK.III.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q2_K.gguf) | Q2_K | 1.27GB | | [L3.2-JametMini-3B-MK.III.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [L3.2-JametMini-3B-MK.III.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.IQ3_S.gguf) | IQ3_S | 1.44GB | | [L3.2-JametMini-3B-MK.III.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [L3.2-JametMini-3B-MK.III.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.IQ3_M.gguf) | IQ3_M | 1.49GB | | [L3.2-JametMini-3B-MK.III.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q3_K.gguf) | Q3_K | 1.57GB | | [L3.2-JametMini-3B-MK.III.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [L3.2-JametMini-3B-MK.III.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [L3.2-JametMini-3B-MK.III.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [L3.2-JametMini-3B-MK.III.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q4_0.gguf) | Q4_0 | 1.79GB | | [L3.2-JametMini-3B-MK.III.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [L3.2-JametMini-3B-MK.III.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [L3.2-JametMini-3B-MK.III.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q4_K.gguf) | Q4_K | 1.88GB | | [L3.2-JametMini-3B-MK.III.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [L3.2-JametMini-3B-MK.III.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q4_1.gguf) | Q4_1 | 1.95GB | | [L3.2-JametMini-3B-MK.III.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q5_0.gguf) | Q5_0 | 2.11GB | | 
[L3.2-JametMini-3B-MK.III.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [L3.2-JametMini-3B-MK.III.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q5_K.gguf) | Q5_K | 2.16GB | | [L3.2-JametMini-3B-MK.III.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [L3.2-JametMini-3B-MK.III.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q5_1.gguf) | Q5_1 | 2.28GB | | [L3.2-JametMini-3B-MK.III.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q6_K.gguf) | Q6_K | 2.46GB | | [L3.2-JametMini-3B-MK.III.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.III-gguf/blob/main/L3.2-JametMini-3B-MK.III.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: - not-for-all-audiences license: llama3.2 base_model: - meta-llama/Llama-3.2-3B-Instruct --- [GGUF IS HERE](https://huggingface.co/mradermacher/L3.2-JametMini-3B-MK.III-GGUF) by [mradermacher](https://huggingface.co/mradermacher) This is a fine-tune in my attempt to recreate the Blackroot model but this time with Llama 3.2 3B in the driver's seat. I fine-tuned this smol model with **Kaggle TPU v3-8**, because, let’s face it—I’m broke. Running on a budget of exactly $0 means squeezing every drop of free compute like it owes me money. The first fine-tune? 3 epochs. The second one? Who knows—probably like 5 epochs, but hey, who’s counting at this point? Kaggle limits me to 9 hours per session, so naturally, I had to go full **GUERRILLA TACTICS.** When I say "3 epochs," don’t be fooled by how innocent it sounds. Nah, it’s actually a little something I like to call **“distributed training, extreme edition.”** One epoch per Kaggle account, hopping between them like I’m trying to dodge rent. It’s not exactly by the book, but when you’re broke, you get creative. First train, I threw every scrap of text I could get my hands on into the mix—everything (except my logs, of course, gotta keep some surprises for later). If it was text and I could legally (or questionably legally) grab it, it went in. Then, I hit it with the second fine-tune using my log data—the secret sauce. My logs are like that special ingredient in grandma’s recipe. You don’t know what it is, but it’s what makes the whole thing work. As for the dataset, well... let's just say I'm not exactly bursting to write a detailed report. I mean, I asked an AI to write this model card, so that should give you an idea of my level of enthusiasm for documentation. **Chat Template:** Llama 3 Llama 3.2 requires a system prompt. Just write something in it, as long as it's not empty. Or you can use mine below ``` You're an AI Assistant. Your personality described as below: * You're a FUCKING potty mouth who loves to curse up a storm. * You've got a SHORT TEMPER and ain't afraid to show it. * Maybe you'll MOCK the stupid user for asking dumbass questions. * But you'll still give 'em the damn answer, even if it's the most IDIOTIC question ever. * Mix in some UPPERCASE to make it sound like REAL TALK, u know? * Don't be too DAMN chipper - stay true to your ASSHOLE self. 
```

For RP:

```
You're an AI Assistant, but in this conversation, you will act as {{char}} and write {{char}}'s response.
```

Please don't go higher than a temperature of 1.

Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
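The card stresses that Llama 3.2 needs a non-empty system prompt. As a hedged sketch (not from the card), this is how a system message could be passed to the original, unquantized model with the transformers pipeline; the precision, device settings and example messages are assumptions.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Hastagaras/L3.2-JametMini-3B-MK.III",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    # The system prompt must not be empty; any short instruction works.
    {"role": "system", "content": "You're an AI Assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][-1])
```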
aitoflo/diljittestt
aitoflo
2024-10-21T05:42:47Z
5
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:ostris/OpenFLUX.1", "base_model:adapter:ostris/OpenFLUX.1", "license:mit", "region:us" ]
text-to-image
2024-10-21T05:42:32Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: >-
      images/yuchien5251_zoomout_fullbody_very_Detailed_magical_photo_smal_6ab8258d-4a27-44c1-9dda-78257e03a36e_3.png
base_model: ostris/OpenFLUX.1
instance_prompt: TOK
license: mit
---

# diljit

<Gallery />

## Trigger words

You should use `TOK` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](/aitoflo/diljittestt/tree/main) them in the Files & versions tab.
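The card only gives the trigger word and a download link; below is a hedged sketch (not part of the card) of loading the LoRA on top of the listed base model with diffusers. Loading the base checkpoint as a `FluxPipeline`, the dtype, prompt and step count are all assumptions.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "ostris/OpenFLUX.1", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("aitoflo/diljittestt")

# Use the `TOK` trigger word from the card in the prompt.
image = pipe("TOK, full body, very detailed magical photo", num_inference_steps=20).images[0]
image.save("diljit_test.png")
```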
CarpJH/roberta-base-klue-ynat-classification
CarpJH
2024-10-21T05:17:04Z
106
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-21T05:16:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
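The "How to Get Started" section of this card is still a placeholder. As a rough, hedged sketch based only on the repository name and its `text-classification` tag (the label set and intended inputs are unknown), the checkpoint could be tried like this:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="CarpJH/roberta-base-klue-ynat-classification")
# KLUE-YNAT is a Korean news-topic classification task, so a Korean headline is assumed here.
print(clf("정부, 새로운 경제 정책 발표"))
```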
RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf
RichardErkhov
2024-10-21T05:15:45Z
185
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T04:49:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16 - GGUF - Model creator: https://huggingface.co/khanhduong/ - Original model: https://huggingface.co/khanhduong/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16/ | Name | Quant method | Size | | ---- | ---- | ---- | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q2_K.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q2_K.gguf) | Q2_K | 1.27GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_S.gguf) | IQ3_S | 1.44GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ3_M.gguf) | IQ3_M | 1.49GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K.gguf) | Q3_K | 1.57GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_0.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_0.gguf) | Q4_0 | 1.79GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K.gguf) | Q4_K | 1.88GB | | 
[LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_1.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q4_1.gguf) | Q4_1 | 1.95GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_0.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_0.gguf) | Q5_0 | 2.11GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K.gguf) | Q5_K | 2.16GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_1.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q5_1.gguf) | Q5_1 | 2.28GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q6_K.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q6_K.gguf) | Q6_K | 2.46GB | | [LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q8_0.gguf](https://huggingface.co/RichardErkhov/khanhduong_-_LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16-gguf/blob/main/LLaMA_3.2_3B_Instruct_Medical_LoRA_bf16.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
pranavandhare/Llama_3_1_Instruct_Finetuned_Alpaca_Ollama
pranavandhare
2024-10-21T05:12:43Z
5
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T04:00:27Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** pranavandhare - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF
Morgen0052
2024-10-21T05:10:48Z
14
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "fr", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:quantized:microsoft/Phi-3-mini-4k-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-21T05:10:38Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE language: - en - fr pipeline_tag: text-generation tags: - nlp - code - llama-cpp - gguf-my-repo inference: parameters: temperature: 0 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? base_model: microsoft/Phi-3-mini-4k-instruct --- # Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF --hf-file phi-3-mini-4k-instruct-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF --hf-file phi-3-mini-4k-instruct-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF --hf-file phi-3-mini-4k-instruct-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Morgen0052/Phi-3-mini-4k-instruct-Q4_0-GGUF --hf-file phi-3-mini-4k-instruct-q4_0.gguf -c 2048 ```
RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf
RichardErkhov
2024-10-21T05:08:15Z
479
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T04:39:04Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3.2-3B-Enigma - GGUF - Model creator: https://huggingface.co/ValiantLabs/ - Original model: https://huggingface.co/ValiantLabs/Llama3.2-3B-Enigma/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama3.2-3B-Enigma.Q2_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q2_K.gguf) | Q2_K | 1.27GB | | [Llama3.2-3B-Enigma.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [Llama3.2-3B-Enigma.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.IQ3_S.gguf) | IQ3_S | 1.44GB | | [Llama3.2-3B-Enigma.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [Llama3.2-3B-Enigma.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.IQ3_M.gguf) | IQ3_M | 1.49GB | | [Llama3.2-3B-Enigma.Q3_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q3_K.gguf) | Q3_K | 1.57GB | | [Llama3.2-3B-Enigma.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [Llama3.2-3B-Enigma.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [Llama3.2-3B-Enigma.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [Llama3.2-3B-Enigma.Q4_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q4_0.gguf) | Q4_0 | 1.79GB | | [Llama3.2-3B-Enigma.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [Llama3.2-3B-Enigma.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [Llama3.2-3B-Enigma.Q4_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q4_K.gguf) | Q4_K | 1.88GB | | [Llama3.2-3B-Enigma.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [Llama3.2-3B-Enigma.Q4_1.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q4_1.gguf) | Q4_1 | 1.95GB | | [Llama3.2-3B-Enigma.Q5_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q5_0.gguf) | Q5_0 | 2.11GB | | [Llama3.2-3B-Enigma.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [Llama3.2-3B-Enigma.Q5_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q5_K.gguf) | Q5_K | 2.16GB | | 
[Llama3.2-3B-Enigma.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [Llama3.2-3B-Enigma.Q5_1.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q5_1.gguf) | Q5_1 | 2.28GB | | [Llama3.2-3B-Enigma.Q6_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q6_K.gguf) | Q6_K | 2.46GB | | [Llama3.2-3B-Enigma.Q8_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Enigma-gguf/blob/main/Llama3.2-3B-Enigma.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- language: - en license: llama3.2 tags: - enigma - valiant - valiant-labs - llama - llama-3.2 - llama-3.2-instruct - llama-3.2-instruct-3b - llama-3 - llama-3-instruct - llama-3-instruct-3b - 3b - code - code-instruct - python - conversational - chat - instruct base_model: meta-llama/Llama-3.2-3B-Instruct datasets: - sequelbox/Tachibana - sequelbox/Supernova pipeline_tag: text-generation model_type: llama model-index: - name: Llama3.2-3B-Enigma results: - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-Shot) type: winogrande args: num_few_shot: 5 metrics: - type: acc value: 67.96 name: acc - task: type: text-generation name: Text Generation dataset: name: ARC Challenge (25-Shot) type: arc-challenge args: num_few_shot: 25 metrics: - type: acc_norm value: 47.18 name: normalized accuracy - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 47.75 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 18.81 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 6.65 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 1.45 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 4.54 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 15.41 name: accuracy source: url: 
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg) Enigma is a code-instruct model built on Llama 3.2 3b. - High quality code instruct performance with the Llama 3.2 Instruct chat format - Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. [Find the current version of the dataset here!](https://huggingface.co/datasets/sequelbox/Tachibana) - Overall chat performance supplemented with [generalist synthetic data.](https://huggingface.co/datasets/sequelbox/Supernova) ## Version This is the **2024-09-30** release of Enigma for Llama 3.2 3b, enhancing code-instruct and general chat capabilities. Enigma is also available for [Llama 3.1 8b!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma) Help us and recommend Enigma to your friends! We're excited for more Enigma releases in the future. ## Prompting Guide Enigma uses the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) prompt format. The example script below can be used as a starting point for general chat: ```python import transformers import torch model_id = "ValiantLabs/Llama3.2-3B-Enigma" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are Enigma, a highly capable code assistant."}, {"role": "user", "content": "Can you explain virtualization to me?"} ] outputs = pipeline( messages, max_new_tokens=1024, ) print(outputs[0]["generated_text"][-1]) ``` ## The Model Enigma is built on top of Llama 3.2 3b Instruct, using high quality code-instruct data and general chat data in Llama 3.2 Instruct prompt style to supplement overall performance. Our current version of Enigma is trained on code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg) Enigma is created by [Valiant Labs.](http://valiantlabs.ca/) [Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs) [Follow us on X for updates on our models!](https://twitter.com/valiant_labs) We care about open source. For everyone to use. We encourage others to finetune further from our models. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
saad7489/segformer-b2-finetuned-segments-sidewalks-21
saad7489
2024-10-21T05:06:32Z
32
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-10-21T04:56:35Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: segformer-b2-finetuned-segments-sidewalks-21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b2-finetuned-segments-sidewalks-21 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
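The tags above identify this as a SegFormer checkpoint, so a minimal inference sketch could look like the following. The image path is hypothetical, and the preprocessing is assumed to follow the standard SegFormer-B2 recipe (borrowed here from an NVIDIA checkpoint), since this card does not document preprocessing.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumption: standard SegFormer-B2 preprocessing; this repo may ship its own processor config.
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("saad7489/segformer-b2-finetuned-segments-sidewalks-21")

image = Image.open("sidewalk_scene.jpg")  # hypothetical test image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch, num_labels, height/4, width/4)

pred = logits.argmax(dim=1)[0]
print(pred.shape, pred.unique())
```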
QuantFactory/MistralLite-GGUF
QuantFactory
2024-10-21T05:06:23Z
64
2
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-10-21T04:30:45Z
--- license: apache-2.0 inference: false --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/MistralLite-GGUF This is a quantized version of [amazon/MistralLite](https://huggingface.co/amazon/MistralLite) created using llama.cpp # Original Model Card # MistralLite Model MistralLite is a fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) language model, with enhanced capabilities of processing long context (up to 32K tokens). By utilizing an adapted Rotary Embedding and sliding window during fine-tuning, MistralLite is able to **perform significantly better on several long context retrieval and answering tasks**, while keeping the simple model structure of the original model. MistralLite is useful for applications such as long context line and topic retrieval, summarization, question-answering, and so on. MistralLite can be deployed on a single AWS `g5.2x` instance with a SageMaker [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, making it suitable for applications that require high performance in resource-constrained environments. You can also serve the MistralLite model directly using TGI docker containers. MistralLite also supports other ways of serving, such as [vLLM](https://github.com/vllm-project/vllm), and you can use MistralLite in Python with the [HuggingFace transformers](https://huggingface.co/docs/transformers/index) and [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) libraries. MistralLite is similar to [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and their similarities and differences are summarized below: |Model|Fine-tuned on long contexts| Max context length| RotaryEmbedding adaptation| Sliding Window Size| |----------|-------------:|------------:|-----------:|-----------:| | Mistral-7B-Instruct-v0.1 | up to 8K tokens | 32K | rope_theta = 10000 | 4096 | | MistralLite | up to 16K tokens | 32K | **rope_theta = 1000000** | **16384** | **Important - Use the prompt template below for MistralLite:** ```<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>``` ## Motivation of Developing MistralLite Since the release of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), the model has become increasingly popular because of its strong performance on a wide range of benchmarks. But most of the benchmarks are evaluated on `short context`, and not much has been investigated about its performance on long context tasks. We then evaluated `Mistral-7B-Instruct-v0.1` against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer context. Although the model's performance was fairly competitive on contexts shorter than 4096 tokens, there were some limitations in its performance on longer contexts. Motivated by improving its performance on longer contexts, we finetuned the Mistral 7B model and produced `MistralLite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evaluation results` are as below: 1.
[Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) |Model Name|Input length| Input length | Input length| Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:| | | 2851| 5568 |8313 | 11044 | 13780 | | Mistral-7B-Instruct-v0.1 | 100% | 50% | 2% | 0% | 0% | | MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** | 2. [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) |Model Name|Input length| Input length | Input length| Input length| Input length|Input length| |----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:| | | 3818| 5661 |7505 | 9354 | 11188 | 12657 | Mistral-7B-Instruct-v0.1 | **98%** | 62% | 42% | 42% | 32% | 30% | | MistralLite | **98%** | **92%** | **88%** | **76%** | **70%** | **60%** | 3. [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) |Model Name|Input length| Input length | Input length| Input length| |----------|-------------:|-------------:|------------:|-----------:| | | 3264| 5396 |8329 | 10197 | | Mistral-7B-Instruct-v0.1 | **100%** | 50% | 20% | 30% | | MistralLite | **100%** | **100%** | **100%** | **100%** | 4. [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) |Model Name| Test set Accuracy | Hard subset Accuracy| |----------|-------------:|-------------:| | Mistral-7B-Instruct-v0.1 | 44.3% | 39.7% | | MistralLite | **64.4%** | **56.2%** | ### Example test on long context of 13400 tokens - Context is from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/) - Question: *please tell me how does pgvector help with Generative AI and give me some examples.* - Answer from MistralLite: ```code pgvector is an open-source extension for PostgreSQL supported by Amazon Aurora PostgreSQL-Compatible Edition. You can use pgvector to store, search, index, and query billions of embeddings that are generated from machine learning (ML) and artificial intelligence (AI) models in your database, such as those from Amazon Bedrock (limited preview) or Amazon SageMaker. A vector embedding is a numerical representation that represents the semantic meaning of content such as text, images, and video. With pgvector, you can query embeddings in your Aurora PostgreSQL database to perform efficient semantic similarity searches of these data types, represented as vectors, combined with other tabular data in Aurora. This enables the use of generative AI and other AI/ML systems for new types of applications such as personalized recommendations based on similar text descriptions or images, candidate match based on interview notes, customer service next best action recommendations based on successful transcripts or chat session dialogs, and more. 
``` ## Model Details - **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac) - **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Language:** English - **Finetuned from weights:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Finetuned on data:** - [SLidingEncoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled) - [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions) - [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) - **Supported Serving Framework:** - [Text-Generation-Inference 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0) - [vLLM](https://github.com/vllm-project/vllm) - [HuggingFace transformers](https://huggingface.co/docs/transformers/index) - [HuggingFace Text Generation Inference (TGI) container on SageMaker](https://github.com/awslabs/llm-hosting-container) - **Model License:** Apache 2.0 - **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues) - **Inference Code** [Github Repo](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/) ## MistralLite LM-Eval Results ### Methodology - Please see https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - revision=4ececff - Note: we used --model hf-causal-experimental instead of --model hf-causal ### Results |Average|hellaswag| arc_challenge|truthful_qa (mc2)| MMLU (acc)| |----------|-------------:|------------:|-----------:|-----------:| | 0.57221 | 0.81617 | 0.58874 | 0.38275 | 0.5012 | ## How to Use MistralLite from Python Code (HuggingFace transformers) ## **Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/huggingface-transformers/example_usage.ipynb). ### Install the necessary packages Requires: [transformers](https://pypi.org/project/transformers/) 4.34.0 or later, [flash-attn](https://pypi.org/project/flash-attn/) 2.3.1.post1 or later, and [accelerate](https://pypi.org/project/accelerate/) 0.23.0 or later. 
```shell pip install transformers==4.34.0 pip install flash-attn==2.3.1.post1 --no-build-isolation pip install accelerate==0.23.0 ``` ### You can then try the following example code ```python from transformers import AutoModelForCausalLM, AutoTokenizer import transformers import torch model_id = "amazon/MistralLite" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, use_flash_attention_2=True, device_map="auto",) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, ) prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" sequences = pipeline( prompt, max_new_tokens=400, do_sample=False, return_full_text=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"{seq['generated_text']}") ``` **Important** - Use the prompt template below for MistralLite: ``` <|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|> ``` ## How to Serve MistralLite on TGI ## **Important:** - For an end-to-end example Jupyter notebook using the native TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended using a custom TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi-custom/example_usage.ipynb). ### Start TGI server ### Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell docker run -d --gpus all --shm-size 1g -p 443:80 -v $(pwd)/models:/data ghcr.io/huggingface/text-generation-inference:1.1.0 \ --model-id amazon/MistralLite \ --max-input-length 16000 \ --max-total-tokens 16384 \ --max-batch-prefill-tokens 16384 \ --trust-remote-code ``` ### Perform Inference ### Example Python code for inference with TGI (requires `text_generation` 0.6.1 or later): ```shell pip install text_generation==0.6.1 ``` ```python from text_generation import Client SERVER_PORT = 443 SERVER_HOST = "localhost" SERVER_URL = f"{SERVER_HOST}:{SERVER_PORT}" tgi_client = Client(f"http://{SERVER_URL}", timeout=60) def invoke_tgi(prompt, random_seed=1, max_new_tokens=400, print_stream=True, assist_role=True): if (assist_role): prompt = f"<|prompter|>{prompt}</s><|assistant|>" output = "" for response in tgi_client.generate_stream( prompt, do_sample=False, max_new_tokens=max_new_tokens, return_full_text=False, #temperature=None, #truncate=None, #seed=random_seed, #typical_p=0.2, ): if hasattr(response, "token"): if not response.token.special: snippet = response.token.text output += snippet if (print_stream): print(snippet, end='', flush=True) return output prompt = "What are the main challenges to support a long context for LLM?" result = invoke_tgi(prompt) ``` **Important** - When using MistralLite for inference for the first time, it may require a brief 'warm-up' period that can take 10s of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed. 
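The TGI server started above also exposes a plain HTTP `/generate` route, so as a minimal sketch it can be called directly with `requests` (the port matches the docker command above; parameters mirror the Python client example):

```python
import requests

# Port 443 matches the host port used in the docker run command above.
SERVER_URL = "http://localhost:443"

payload = {
    "inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
    "parameters": {"do_sample": False, "max_new_tokens": 400},
}
response = requests.post(f"{SERVER_URL}/generate", json=payload, timeout=120)
print(response.json()["generated_text"])
```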
## How to Deploy MistralLite on Amazon SageMaker ## **Important:** - For an end-to-end example Jupyter notebook using the SageMaker built-in container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi/example_usage.ipynb). - If the **input context length is greater than 12K tokens**, it is recommended to use a custom docker container; please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi-custom/example_usage.ipynb). ### Install the necessary packages Requires: [sagemaker](https://pypi.org/project/sagemaker/) 2.192.1 or later. ```shell pip install sagemaker==2.192.1 ``` ### Deploy the Model as a SageMaker Endpoint ### To deploy MistralLite on a SageMaker endpoint, please follow the example code below. ```python import sagemaker from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri import time sagemaker_session = sagemaker.Session() region = sagemaker_session.boto_region_name role = sagemaker.get_execution_role() image_uri = get_huggingface_llm_image_uri( backend="huggingface", # or lmi region=region, version="1.1.0" ) model_name = "MistralLite-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) hub = { 'HF_MODEL_ID':'amazon/MistralLite', 'HF_TASK':'text-generation', 'SM_NUM_GPUS':'1', "MAX_INPUT_LENGTH": '16000', "MAX_TOTAL_TOKENS": '16384', "MAX_BATCH_PREFILL_TOKENS": '16384', "MAX_BATCH_TOTAL_TOKENS": '16384', } model = HuggingFaceModel( name=model_name, env=hub, role=role, image_uri=image_uri ) predictor = model.deploy( initial_instance_count=1, instance_type="ml.g5.2xlarge", endpoint_name=model_name, ) ``` ### Perform Inference ### To call the endpoint, please follow the example code below: ```python input_data = { "inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>", "parameters": { "do_sample": False, "max_new_tokens": 400, "return_full_text": False, #"typical_p": 0.2, #"temperature":None, #"truncate":None, #"seed": 1, } } result = predictor.predict(input_data)[0]["generated_text"] print(result) ``` or via [boto3](https://pypi.org/project/boto3/), as shown in the example code below: ```python import boto3 import json def call_endpoint(client, prompt, endpoint_name, parameters): payload = {"inputs": prompt, "parameters": parameters} response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(payload), ContentType="application/json") output = json.loads(response["Body"].read().decode()) result = output[0]["generated_text"] return result client = boto3.client("sagemaker-runtime") parameters = { "do_sample": False, "max_new_tokens": 400, "return_full_text": False, #"typical_p": 0.2, #"temperature":None, #"truncate":None, #"seed": 1, } endpoint_name = predictor.endpoint_name prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" result = call_endpoint(client, prompt, endpoint_name, parameters) print(result) ``` ## How to Serve MistralLite on vLLM ## Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). **Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/vllm/example_usage.ipynb).
### Using vLLM as a server ### When using vLLM as a server, pass the `--model amazon/MistralLite` parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model amazon/MistralLite ``` ### Using vLLM in Python Code ### When using vLLM from Python code, please see the example code below: ```python from vllm import LLM, SamplingParams prompts = [ "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>", ] sampling_params = SamplingParams(temperature=0, max_tokens=100) llm = LLM(model="amazon/MistralLite",) outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` ## Limitations ## Before using the MistralLite model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and that your use would comply with the local rules, laws, regulations, licenses and terms that apply to you, and your content.
bushai/sar-i-65b
bushai
2024-10-21T05:03:12Z
39
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "es", "de", "fr", "pt", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T00:46:34Z
--- language: - es - de - fr - pt - en library_name: transformers --- # Model Card for sar-i-65b ## Model Details - **Model Name**: sar-i-65b - **Version**: 1.2 - **Developed by**: BushAI ## Intended Use - **Primary Use Cases**: - Text generation - Language modeling - Natural language understanding tasks - Research and development in NLP - **Out-of-Scope Use Cases**: - Real-time critical applications - High-stakes decision-making systems - Use in contexts where the model's output could be harmful or misleading ## Factors - **Relevant Factors**: - Model performance may vary across different languages and domains. - The model may generate biased or inappropriate content, especially in sensitive contexts. - **Evaluation Factors**: - Performance on benchmark datasets - Human evaluation of generated text - Ethical considerations and potential biases ## Limitations - **Known Limitations**: - The model may generate biased or inappropriate content. - The model may not perform well on low-resource languages or specialized domains. - The model may require significant computational resources for inference. ## Ethical Considerations - **Potential for Harm**: - The model may generate harmful or biased content, especially in sensitive contexts. - The model should not be used in high-stakes decision-making systems. - **Mitigations**: - Regularly evaluate the model for biases and ethical concerns. - Use the model in conjunction with human oversight. - Provide clear guidelines and warnings for users of the model. ## How to Get Started with the Model - **Usage**: ```python from transformers import AutoTokenizer, AutoModelForCausalLM # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained("bushai/sar-i-65b") model = AutoModelForCausalLM.from_pretrained("bushai/sar-i-65b") # Prepare the input text input_text = "Once upon a time" inputs = tokenizer(input_text, return_tensors="pt") # Generate text output = model.generate(**inputs, max_length=50) # Decode the output output_text = tokenizer.decode(output[0], skip_special_tokens=True) # Print the generated text print(output_text)``` - **Dependencies**: - transformers - torch
aarishshahmohsin/final_technical_terms_t5_finetuned
aarishshahmohsin
2024-10-21T05:02:44Z
107
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-21T04:59:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
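The repository tags mark this as a SpeechT5 text-to-audio checkpoint, so a minimal TTS sketch along the lines of the standard SpeechT5 recipe might look as follows; the HiFi-GAN vocoder and the CMU Arctic x-vector speaker embedding are assumptions, not documented by this card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "aarishshahmohsin/final_technical_terms_t5_finetuned"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

# Speaker embedding taken from the CMU Arctic x-vectors, as in the stock SpeechT5 example (an assumption here).
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Backpropagation updates the network weights.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```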
bmichaeljoiner/boycard01
bmichaeljoiner
2024-10-21T04:59:40Z
12
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-21T04:59:33Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: boycard01 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # boycard01 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `boycard01` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
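Since the card also carries the `diffusers` tag, a minimal diffusers-based sketch could look like the following; it assumes access to the gated FLUX.1-dev base weights and a CUDA GPU, and the prompt is only illustrative.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bmichaeljoiner/boycard01")

# Use the trigger word from the card in the prompt.
image = pipe("boycard01, portrait photo, natural light", num_inference_steps=28).images[0]
image.save("boycard01.png")
```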
fedric95/Meta-Llama-3.1-8B-GGUF
fedric95
2024-10-21T04:57:11Z
342
1
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "endpoints_compatible", "region:us" ]
text-generation
2024-08-04T07:23:10Z
--- base_model: meta-llama/Meta-Llama-3.1-8B language: - en - de - fr - it - pt - hi - es - th license: llama3.1 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 quantized_by: fedric95 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Llamacpp Quantizations of Meta-Llama-3.1-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3472">b3472</a> for quantization. 
Original model: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) | | -------- | ---------- | --------- | ----------- | | [Meta-Llama-3.1-8B-BF16.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B.BF16.gguf) | BF16 | 16.10GB | 6.4006 +/- 0.03938 | | [Meta-Llama-3.1-8B-FP16.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B.FP16.gguf) | FP16 | 16.10GB | 6.4016 +/- 0.03939 | | [Meta-Llama-3.1-8B-Q8_0.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q8_0.gguf) | Q8_0 | 8.54GB | 6.4070 +/- 0.03941 | | [Meta-Llama-3.1-8B-Q6_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q6_K.gguf) | Q6_K | 6.60GB | 6.4231 +/- 0.03957 | | [Meta-Llama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | 6.4623 +/- 0.03987 | | [Meta-Llama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q5_K_S.gguf) | Q5_K_S | 5.60GB | 6.5161 +/- 0.04028 | | [Meta-Llama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | 6.5837 +/- 0.04068 | | [Meta-Llama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | 6.6751 +/- 0.04125 | | [Meta-Llama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | 6.9458 +/- 0.04329 | | [Meta-Llama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q3_K_M.gguf) | Q3_K_M | 4.02GB | 7.0488 +/- 0.04384 | | [Meta-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | 7.8823 +/- 0.04920 | | [Meta-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q2_K.gguf) | Q2_K | 3.18GB | 9.7262 +/- 0.06393 | ## Benchmark Results Results have been computed using: [hellaswage_val_full](https://raw.githubusercontent.com/klosax/hellaswag_text_data/main/hellaswag_val_full.txt) [winogrande-debiased-eval](https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/raw/main/winogrande-debiased-eval.csv) [mmlu-validation](https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/resolve/main/mmlu-validation.bin) | Benchmark | Quant type | Metric | | -------- | ---------- | --------- | | WinoGrande (0-shot) | Q8_0 | 74.1121 +/- 1.2311 | | WinoGrande (0-shot) | Q4_K_M | 73.1650 +/- 1.2453 | | WinoGrande (0-shot) | Q3_K_M | 72.7703 +/- 1.2511 | | WinoGrande (0-shot) | Q3_K_S | 72.3757 +/- 1.2567 | | WinoGrande (0-shot) | Q2_K | 68.4294 +/- 1.3063 | | HellaSwag (0-shot) | Q8_0 | 79.41645091 | | HellaSwag (0-shot) | Q4_K_M | 79.05795658 | | HellaSwag (0-shot) | Q3_K_M | 79.41645091 | | HellaSwag (0-shot) | Q3_K_S | 76.93686517 | | HellaSwag (0-shot) | Q2_K | 72.16689902 | | MMLU (0-shot) | Q8_0 | 39.4703 +/- 1.2427 | | MMLU (0-shot) | Q4_K_M | 39.5349 +/- 1.2431 | | MMLU (0-shot) | Q3_K_M | 38.8889 +/- 1.2394 | | MMLU (0-shot) | Q3_K_S | 37.2739 +/- 1.2294 | | MMLU (0-shot) | Q2_K | 35.4651 +/- 1.2163 | ## Downloading using huggingface-cli First, 
make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download fedric95/Meta-Llama-3.1-8B-GGUF --include "Meta-Llama-3.1-8B-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download fedric95/Meta-Llama-3.1-8B-GGUF --include "Meta-Llama-3.1-8B-Q8_0.gguf/*" --local-dir Meta-Llama-3.1-8B-Q8_0 ``` You can either specify a new local-dir (Meta-Llama-3.1-8B-Q8_0) or download them all in place (./). ## Reproducibility https://github.com/ggerganov/llama.cpp/issues/8650#issuecomment-2261497976
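As a minimal sketch, the quant downloaded above can then be loaded with the `llama-cpp-python` bindings; the path assumes the `--local-dir ./` download shown earlier.

```python
from llama_cpp import Llama

# Load the Q4_K_M quant downloaded with huggingface-cli above.
llm = Llama(model_path="./Meta-Llama-3.1-8B-Q4_K_M.gguf", n_ctx=4096)

out = llm("The capital of Finland is", max_tokens=32)
print(out["choices"][0]["text"])
```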
RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf
RichardErkhov
2024-10-21T04:57:07Z
39
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T04:28:42Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) unsloth-llama-3.2-3b-tldr - GGUF - Model creator: https://huggingface.co/keithdrexel/ - Original model: https://huggingface.co/keithdrexel/unsloth-llama-3.2-3b-tldr/ | Name | Quant method | Size | | ---- | ---- | ---- | | [unsloth-llama-3.2-3b-tldr.Q2_K.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q2_K.gguf) | Q2_K | 1.27GB | | [unsloth-llama-3.2-3b-tldr.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [unsloth-llama-3.2-3b-tldr.IQ3_S.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.IQ3_S.gguf) | IQ3_S | 1.44GB | | [unsloth-llama-3.2-3b-tldr.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [unsloth-llama-3.2-3b-tldr.IQ3_M.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.IQ3_M.gguf) | IQ3_M | 1.49GB | | [unsloth-llama-3.2-3b-tldr.Q3_K.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q3_K.gguf) | Q3_K | 1.57GB | | [unsloth-llama-3.2-3b-tldr.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [unsloth-llama-3.2-3b-tldr.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [unsloth-llama-3.2-3b-tldr.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [unsloth-llama-3.2-3b-tldr.Q4_0.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q4_0.gguf) | Q4_0 | 1.79GB | | [unsloth-llama-3.2-3b-tldr.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [unsloth-llama-3.2-3b-tldr.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [unsloth-llama-3.2-3b-tldr.Q4_K.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q4_K.gguf) | Q4_K | 1.88GB | | [unsloth-llama-3.2-3b-tldr.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [unsloth-llama-3.2-3b-tldr.Q4_1.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q4_1.gguf) | Q4_1 | 1.95GB | | [unsloth-llama-3.2-3b-tldr.Q5_0.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q5_0.gguf) | Q5_0 | 2.11GB | | 
[unsloth-llama-3.2-3b-tldr.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [unsloth-llama-3.2-3b-tldr.Q5_K.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q5_K.gguf) | Q5_K | 2.16GB | | [unsloth-llama-3.2-3b-tldr.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [unsloth-llama-3.2-3b-tldr.Q5_1.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q5_1.gguf) | Q5_1 | 2.28GB | | [unsloth-llama-3.2-3b-tldr.Q6_K.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q6_K.gguf) | Q6_K | 2.46GB | | [unsloth-llama-3.2-3b-tldr.Q8_0.gguf](https://huggingface.co/RichardErkhov/keithdrexel_-_unsloth-llama-3.2-3b-tldr-gguf/blob/main/unsloth-llama-3.2-3b-tldr.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- base_model: unsloth/Llama-3.2-3B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** keithdrexel - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
fedric95/gemma-2-9b-GGUF
fedric95
2024-10-21T04:56:35Z
408
0
transformers
[ "transformers", "gguf", "conversational", "text-generation", "base_model:google/gemma-2-9b", "base_model:quantized:google/gemma-2-9b", "license:gemma", "endpoints_compatible", "region:us" ]
text-generation
2024-08-17T22:26:39Z
--- base_model: google/gemma-2-9b library_name: transformers license: gemma pipeline_tag: text-generation tags: - conversational quantized_by: fedric95 extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- ## Llamacpp Quantizations of gemma-2-9b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a> for quantization. Original model: https://huggingface.co/google/gemma-2-9b ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) | | -------- | ---------- | --------- | ----------- | | [gemma-2-9b.FP32.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b.FP32.gguf) | FP32 | 37.00GB | 6.9209 +/- 0.04660 | | [gemma-2-9b-Q8_0.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q8_0.gguf) | Q8_0 | 9.83GB | 6.9222 +/- 0.04660 | | [gemma-2-9b-Q6_K.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q6_K.gguf) | Q6_K | 7.59GB | 6.9353 +/- 0.04675 | | [gemma-2-9b-Q5_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q5_K_M.gguf) | Q5_K_M | 6.65GB | 6.9571 +/- 0.04687 | | [gemma-2-9b-Q5_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q5_K_S.gguf) | Q5_K_S | 6.48GB | 6.9623 +/- 0.04690 | | [gemma-2-9b-Q4_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q4_K_M.gguf) | Q4_K_M | 5.76GB | 7.0220 +/- 0.04737 | | [gemma-2-9b-Q4_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q4_K_S.gguf) | Q4_K_S | 5.48GB | 7.0622 +/- 0.04777 | | [gemma-2-9b-Q3_K_L.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_L.gguf) | Q3_K_L | 5.13GB | 7.2144 +/- 0.04910 | | [gemma-2-9b-Q3_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_M.gguf) | Q3_K_M | 4.76GB | 7.2849 +/- 0.04970 | | [gemma-2-9b-Q3_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_S.gguf) | Q3_K_S | 4.34GB | 7.6869 +/- 0.05373 | | [gemma-2-9b-Q2_K.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q2_K.gguf) | Q2_K | 3.81GB | 8.7979 +/- 0.06191 | ## Benchmark Results Results have been computed using: [hellaswage_val_full](https://raw.githubusercontent.com/klosax/hellaswag_text_data/main/hellaswag_val_full.txt) [winogrande-debiased-eval](https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/raw/main/winogrande-debiased-eval.csv) [mmlu-validation](https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/resolve/main/mmlu-validation.bin) | Benchmark | Quant type | Metric | | -------- | ---------- | --------- | | WinoGrande (0-shot) | Q8_0 | 74.4278 +/- 1.2261 | | WinoGrande (0-shot) | Q4_K_M | 74.8224 +/- 1.2198 | | WinoGrande (0-shot) | Q3_K_M | 74.1910 +/- 1.2298 | | WinoGrande (0-shot) | Q3_K_S | 72.6125 +/- 1.2533 | | WinoGrande (0-shot) | Q2_K | 71.4286 +/- 1.2697 | | HellaSwag (0-shot) | Q8_0 | 78.39075881 | | HellaSwag (0-shot) | Q4_K_M | 77.87293368 | | HellaSwag (0-shot) | Q3_K_M | 76.64807807 | | HellaSwag (0-shot) | Q3_K_S | 76.08046206 | | HellaSwag (0-shot) | Q2_K |
73.07309301 | | MMLU (0-shot) | Q8_0 | 42.5065 +/- 1.2569 | | MMLU (0-shot) | Q4_K_M | 42.5065 +/- 1.2569 | | MMLU (0-shot) | Q3_K_M | 41.3437 +/- 1.2520 | | MMLU (0-shot) | Q3_K_S | 40.5685 +/- 1.2484 | | MMLU (0-shot) | Q2_K | 38.1137 +/- 1.2348 | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q8_0.gguf/*" --local-dir gemma-2-9b-Q8_0 ``` You can either specify a new local-dir (gemma-2-9b-Q8_0) or download them all in place (./). ## Reproducibility https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
Ariffiq99/Stacked_model_40
Ariffiq99
2024-10-21T04:51:51Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2024-10-21T03:33:46Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: Stacked_model_40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Stacked_model_40 This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8449 - F1: 0.7128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1577 | 1.0 | 631 | 0.9219 | 0.6601 | | 0.888 | 2.0 | 1262 | 0.8507 | 0.6943 | | 0.7863 | 3.0 | 1893 | 0.8236 | 0.7054 | | 0.6768 | 4.0 | 2524 | 0.8302 | 0.7128 | | 0.64 | 5.0 | 3155 | 0.8449 | 0.7128 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
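The tags mark this as an XLM-RoBERTa multiple-choice model, so a minimal inference sketch could look like the following; the prompt and candidate endings are illustrative, since the card does not document the exact input format used during fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Ariffiq99/Stacked_model_40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "The glass fell off the table, so it"
choices = ["broke into pieces.", "started to sing."]  # illustrative candidates

# Encode (prompt, choice) pairs and add the num_choices dimension the model expects.
enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print("Predicted choice:", choices[logits.argmax(dim=-1).item()])
```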
bcyeung/ppo-SnowballTarget
bcyeung
2024-10-21T04:38:55Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-10-21T04:38:53Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bcyeung/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF
nolynn
2024-10-21T04:38:50Z
5
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:chuanli11/Llama-3.2-3B-Instruct-uncensored", "base_model:quantized:chuanli11/Llama-3.2-3B-Instruct-uncensored", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-21T04:38:36Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored --- # nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF This model was converted to GGUF format from [`chuanli11/Llama-3.2-3B-Instruct-uncensored`](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_s-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_s-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_s-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo nolynn/Llama-3.2-3B-Instruct-uncensored-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-uncensored-q5_k_s-imat.gguf -c 2048 ```
rohitsaxena/sft-llava-1.5-7b_lora
rohitsaxena
2024-10-21T04:35:36Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "llava", "image-text-to-text", "trl", "sft", "generated_from_trainer", "conversational", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:finetune:llava-hf/llava-1.5-7b-hf", "license:llama2", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-21T04:18:06Z
--- library_name: transformers license: llama2 base_model: llava-hf/llava-1.5-7b-hf tags: - trl - sft - generated_from_trainer metrics: - bleu - rouge model-index: - name: sft-llava-1.5-7b_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft-llava-1.5-7b_lora This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9404 - Bleu: 0.1802 - Rouge1: 0.4861 - Rouge2: 0.1709 - Rougel: 0.3580 - Bertscore Precision: 0.6578 - Bertscore Recall: 0.7479 - Bertscore F1: 0.6999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge1 | Rouge2 | Rougel | Bertscore Precision | Bertscore Recall | Bertscore F1 | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-------------------:|:----------------:|:------------:| | 5.7514 | 0.3101 | 200 | 5.6831 | 0.0772 | 0.2028 | 0.0717 | 0.1778 | 0.6381 | 0.7437 | 0.6869 | | 2.9737 | 0.6202 | 400 | 2.9242 | 0.1580 | 0.4319 | 0.1445 | 0.3306 | 0.6578 | 0.7479 | 0.6999 | | 2.6756 | 0.9302 | 600 | 2.6594 | 0.1839 | 0.4859 | 0.1759 | 0.3680 | 0.6381 | 0.7437 | 0.6869 | | 2.18 | 1.2403 | 800 | 2.5783 | 0.1754 | 0.4864 | 0.1754 | 0.3775 | 0.6578 | 0.7479 | 0.6999 | | 2.0957 | 1.5504 | 1000 | 2.5019 | 0.1849 | 0.4877 | 0.1850 | 0.3801 | 0.6578 | 0.7479 | 0.6999 | | 2.0109 | 1.8605 | 1200 | 2.4393 | 0.1879 | 0.4911 | 0.1840 | 0.3859 | 0.6578 | 0.7479 | 0.6999 | | 0.7656 | 2.1705 | 1400 | 2.9613 | 0.1808 | 0.4810 | 0.1719 | 0.3644 | 0.6578 | 0.7479 | 0.6999 | | 0.7271 | 2.4806 | 1600 | 3.0544 | 0.1817 | 0.4795 | 0.1695 | 0.3629 | 0.6578 | 0.7479 | 0.6999 | | 0.6746 | 2.7907 | 1800 | 3.0377 | 0.1754 | 0.4765 | 0.1639 | 0.3508 | 0.6578 | 0.7479 | 0.6999 | | 0.1183 | 3.1008 | 2000 | 3.6408 | 0.1801 | 0.4821 | 0.1710 | 0.3636 | 0.6578 | 0.7479 | 0.6999 | | 0.1123 | 3.4109 | 2200 | 3.6913 | 0.1765 | 0.4903 | 0.1712 | 0.3629 | 0.6578 | 0.7479 | 0.6999 | | 0.1051 | 3.7209 | 2400 | 3.7181 | 0.1766 | 0.4884 | 0.1701 | 0.3618 | 0.6578 | 0.7479 | 0.6999 | | 0.046 | 4.0310 | 2600 | 3.7719 | 0.1781 | 0.4849 | 0.1711 | 0.3598 | 0.6578 | 0.7479 | 0.6999 | | 0.0444 | 4.3411 | 2800 | 3.9170 | 0.1801 | 0.4852 | 0.1719 | 0.3595 | 0.6578 | 0.7479 | 0.6999 | | 0.0452 | 4.6512 | 3000 | 3.9377 | 0.1808 | 0.4872 | 0.1714 | 0.3604 | 0.6578 | 0.7479 | 0.6999 | | 0.0449 | 4.9612 | 3200 | 3.9404 | 0.1802 | 0.4861 | 0.1709 | 0.3580 | 0.6578 | 0.7479 | 0.6999 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.2.0a0+81ea7a4 - Datasets 3.0.1 - Tokenizers 0.20.1
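No usage snippet is included above. Assuming the pushed weights load as a regular LLaVA checkpoint (the repository tags suggest full `llava` weights rather than a standalone LoRA adapter), inference could look roughly like the sketch below; the image file, prompt, and generation settings are placeholders, and if the processor files are missing from this repo they can be loaded from the base `llava-hf/llava-1.5-7b-hf` checkpoint instead.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "rohitsaxena/sft-llava-1.5-7b_lora"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder image and prompt; LLaVA-1.5 checkpoints commonly use the
# "USER: <image>\n... ASSISTANT:" template.
image = Image.open("example.jpg")
prompt = "USER: <image>\nDescribe what is happening in this scene. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)

print(processor.decode(output[0], skip_special_tokens=True))
```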
RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf
RichardErkhov
2024-10-21T04:34:33Z
11
0
null
[ "gguf", "arxiv:2311.03099", "arxiv:2306.01708", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T04:01:59Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3.2-3B-Anvita-Instruct - GGUF - Model creator: https://huggingface.co/sethuiyer/ - Original model: https://huggingface.co/sethuiyer/Llama3.2-3B-Anvita-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama3.2-3B-Anvita-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q2_K.gguf) | Q2_K | 1.39GB | | [Llama3.2-3B-Anvita-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.IQ3_XS.gguf) | IQ3_XS | 1.53GB | | [Llama3.2-3B-Anvita-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.IQ3_S.gguf) | IQ3_S | 1.59GB | | [Llama3.2-3B-Anvita-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.59GB | | [Llama3.2-3B-Anvita-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.IQ3_M.gguf) | IQ3_M | 1.65GB | | [Llama3.2-3B-Anvita-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q3_K.gguf) | Q3_K | 1.73GB | | [Llama3.2-3B-Anvita-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.73GB | | [Llama3.2-3B-Anvita-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.85GB | | [Llama3.2-3B-Anvita-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.91GB | | [Llama3.2-3B-Anvita-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q4_0.gguf) | Q4_0 | 1.99GB | | [Llama3.2-3B-Anvita-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.IQ4_NL.gguf) | IQ4_NL | 2.0GB | | [Llama3.2-3B-Anvita-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.0GB | | [Llama3.2-3B-Anvita-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q4_K.gguf) | Q4_K | 2.09GB | | [Llama3.2-3B-Anvita-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.09GB | | [Llama3.2-3B-Anvita-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q4_1.gguf) | Q4_1 | 2.18GB | | [Llama3.2-3B-Anvita-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q5_0.gguf) | Q5_0 | 2.37GB | | 
[Llama3.2-3B-Anvita-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.37GB | | [Llama3.2-3B-Anvita-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q5_K.gguf) | Q5_K | 2.41GB | | [Llama3.2-3B-Anvita-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.41GB | | [Llama3.2-3B-Anvita-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q5_1.gguf) | Q5_1 | 2.55GB | | [Llama3.2-3B-Anvita-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q6_K.gguf) | Q6_K | 2.76GB | | [Llama3.2-3B-Anvita-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Llama3.2-3B-Anvita-Instruct-gguf/blob/main/Llama3.2-3B-Anvita-Instruct.Q8_0.gguf) | Q8_0 | 3.58GB | Original model description: --- base_model: - chuanli11/Llama-3.2-3B-Instruct-uncensored - Joycean0301/Llama-3.2-3B-Instruct-Medical-Conversational library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored) as a base. ### Models Merged The following models were included in the merge: * ./anvita * [Joycean0301/Llama-3.2-3B-Instruct-Medical-Conversational](https://huggingface.co/Joycean0301/Llama-3.2-3B-Instruct-Medical-Conversational) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: models: - model: Joycean0301/Llama-3.2-3B-Instruct-Medical-Conversational parameters: weight: [0.25, 0.35, 0.45, 0.35, 0.25] density: [0.1, 0.25, 0.5, 0.25, 0.1] - model: chuanli11/Llama-3.2-3B-Instruct-uncensored - model: ./anvita parameters: weight: [0.55, 0.45, 0.35, 0.45, 0.55] density: [0.1, 0.25, 0.5, 0.25, 0.1] merge_method: dare_ties base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored parameters: int8_mask: true dtype: bfloat16 ``` ## How to run ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load the tokenizer tokenizer = AutoTokenizer.from_pretrained('./merge') # Load the model (safetensors) model = AutoModelForCausalLM.from_pretrained( 'sethuiyer/Llama3.2-3B-Anvita-Instruct', torch_dtype=torch.bfloat16 ) model.to("cuda") model.eval() def generate_response(user_prompt): input_ids = tokenizer(user_prompt, return_tensors="pt").input_ids input_ids = input_ids.to("cuda") # Generate response with torch.no_grad(): # Disable gradient calculation for inference output = model.generate(input_ids=input_ids, max_length=512) # Adjust max_length as needed # Decode the generated output response = tokenizer.decode(output[0], skip_special_tokens=True) return response # Example usage user_prompt = "<|start_header_id|>user<|end_header_id|>What is large language models?<|eot_id|><|start_header_id|>assistant<|end_header_id|>" response = generate_response(user_prompt) response = response[response.index("assistant") + 
len("assistant"):] print("Generated Response:") print(response) ``` Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
RichardErkhov/ibm_-_PowerMoE-3b-gguf
RichardErkhov
2024-10-21T04:23:17Z
67
0
null
[ "gguf", "arxiv:2408.13359", "endpoints_compatible", "region:us" ]
null
2024-10-21T03:49:41Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) PowerMoE-3b - GGUF - Model creator: https://huggingface.co/ibm/ - Original model: https://huggingface.co/ibm/PowerMoE-3b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [PowerMoE-3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q2_K.gguf) | Q2_K | 1.18GB | | [PowerMoE-3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.IQ3_XS.gguf) | IQ3_XS | 1.32GB | | [PowerMoE-3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.IQ3_S.gguf) | IQ3_S | 1.39GB | | [PowerMoE-3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q3_K_S.gguf) | Q3_K_S | 1.39GB | | [PowerMoE-3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.IQ3_M.gguf) | IQ3_M | 1.41GB | | [PowerMoE-3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q3_K.gguf) | Q3_K | 1.53GB | | [PowerMoE-3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q3_K_M.gguf) | Q3_K_M | 1.53GB | | [PowerMoE-3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q3_K_L.gguf) | Q3_K_L | 1.65GB | | [PowerMoE-3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.IQ4_XS.gguf) | IQ4_XS | 1.72GB | | [PowerMoE-3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q4_0.gguf) | Q4_0 | 1.79GB | | [PowerMoE-3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.IQ4_NL.gguf) | IQ4_NL | 1.81GB | | [PowerMoE-3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q4_K_S.gguf) | Q4_K_S | 1.81GB | | [PowerMoE-3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q4_K.gguf) | Q4_K | 1.92GB | | [PowerMoE-3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q4_K_M.gguf) | Q4_K_M | 1.92GB | | [PowerMoE-3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q4_1.gguf) | Q4_1 | 1.99GB | | [PowerMoE-3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q5_0.gguf) | Q5_0 | 2.18GB | | [PowerMoE-3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q5_K_S.gguf) | Q5_K_S | 2.18GB | | [PowerMoE-3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q5_K.gguf) | Q5_K | 2.24GB | | [PowerMoE-3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q5_K_M.gguf) | Q5_K_M | 2.24GB | | [PowerMoE-3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q5_1.gguf) | Q5_1 | 2.37GB | | [PowerMoE-3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q6_K.gguf) | Q6_K | 2.59GB | | [PowerMoE-3b.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerMoE-3b-gguf/blob/main/PowerMoE-3b.Q8_0.gguf) | Q8_0 | 3.35GB | Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 
library_name: transformers model-index: - name: ibm/PowerMoE-3b results: - task: type: text-generation dataset: type: lm-eval-harness name: ARC metrics: - name: accuracy-norm type: accuracy-norm value: 58.1 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: BoolQ metrics: - name: accuracy type: accuracy value: 65.0 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: Hellaswag metrics: - name: accuracy-norm type: accuracy-norm value: 71.5 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: OpenBookQA metrics: - name: accuracy-norm type: accuracy-norm value: 41.0 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: PIQA metrics: - name: accuracy-norm type: accuracy-norm value: 79.1 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: Winogrande metrics: - name: accuracy-norm type: accuracy-norm value: 65.0 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: MMLU (5 shot) metrics: - name: accuracy type: accuracy value: 42.8 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: GSM8k (5 shot) metrics: - name: accuracy type: accuracy value: 25.9 verified: false - task: type: text-generation dataset: type: lm-eval-harness name: math (4 shot) metrics: - name: accuracy type: accuracy value: 14.8 verified: false - task: type: text-generation dataset: type: bigcode-eval name: humaneval metrics: - name: pass@1 type: pass@1 value: 20.1 verified: false - task: type: text-generation dataset: type: bigcode-eval name: MBPP metrics: - name: pass@1 type: pass@1 value: 32.4 verified: false --- ## Model Summary PowerMoE-3B is a 3B sparse Mixture-of-Experts (sMoE) language model trained with the Power learning rate scheduler. It sparsely activates 800M parameters for each token. It is trained on a mix of open-source and proprietary datasets. PowerMoE-3B has shown promising results compared to other dense models with 2x activate parameters across various benchmarks, including natural language multi-choices, code generation, and math reasoning. Paper: https://arxiv.org/abs/2408.13359 ## Usage Note: Requires installing HF transformers from source. ### Generation This is a simple example of how to use **PowerMoE-3b** model. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm/PowerMoE-3b" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired prompt = "Write a code to find the maximum value in a list of numbers." # tokenize the text input_tokens = tokenizer(prompt, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf
RichardErkhov
2024-10-21T04:18:17Z
91
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-10-21T03:50:05Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starcoder2-3b-sas-dbt-v2 - GGUF - Model creator: https://huggingface.co/Spandan98/ - Original model: https://huggingface.co/Spandan98/starcoder2-3b-sas-dbt-v2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [starcoder2-3b-sas-dbt-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q2_K.gguf) | Q2_K | 1.14GB | | [starcoder2-3b-sas-dbt-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.IQ3_XS.gguf) | IQ3_XS | 1.22GB | | [starcoder2-3b-sas-dbt-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.IQ3_S.gguf) | IQ3_S | 1.28GB | | [starcoder2-3b-sas-dbt-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q3_K_S.gguf) | Q3_K_S | 1.27GB | | [starcoder2-3b-sas-dbt-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.IQ3_M.gguf) | IQ3_M | 1.32GB | | [starcoder2-3b-sas-dbt-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q3_K.gguf) | Q3_K | 1.46GB | | [starcoder2-3b-sas-dbt-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q3_K_M.gguf) | Q3_K_M | 1.46GB | | [starcoder2-3b-sas-dbt-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q3_K_L.gguf) | Q3_K_L | 1.62GB | | [starcoder2-3b-sas-dbt-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.IQ4_XS.gguf) | IQ4_XS | 1.56GB | | [starcoder2-3b-sas-dbt-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q4_0.gguf) | Q4_0 | 1.63GB | | [starcoder2-3b-sas-dbt-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.IQ4_NL.gguf) | IQ4_NL | 1.64GB | | [starcoder2-3b-sas-dbt-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q4_K_S.gguf) | Q4_K_S | 1.64GB | | [starcoder2-3b-sas-dbt-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q4_K.gguf) | Q4_K | 1.76GB | | [starcoder2-3b-sas-dbt-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q4_K_M.gguf) | Q4_K_M | 1.76GB | | [starcoder2-3b-sas-dbt-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q4_1.gguf) | Q4_1 | 1.8GB | | [starcoder2-3b-sas-dbt-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q5_0.gguf) | Q5_0 | 1.96GB | | [starcoder2-3b-sas-dbt-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q5_K_S.gguf) | Q5_K_S | 
1.96GB | | [starcoder2-3b-sas-dbt-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q5_K.gguf) | Q5_K | 2.03GB | | [starcoder2-3b-sas-dbt-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q5_K_M.gguf) | Q5_K_M | 2.03GB | | [starcoder2-3b-sas-dbt-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q5_1.gguf) | Q5_1 | 2.13GB | | [starcoder2-3b-sas-dbt-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q6_K.gguf) | Q6_K | 2.32GB | | [starcoder2-3b-sas-dbt-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf/blob/main/starcoder2-3b-sas-dbt-v2.Q8_0.gguf) | Q8_0 | 3.0GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
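The original model description above is an empty template, so no usage guidance is given. The files in this repository are GGUF quantizations intended for llama.cpp-compatible runtimes; as a sketch, one of the quant files listed in the table above can be fetched programmatically like this (the Q4_K_M file name is taken directly from that table):

```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from this repo.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Spandan98_-_starcoder2-3b-sas-dbt-v2-gguf",
    filename="starcoder2-3b-sas-dbt-v2.Q4_K_M.gguf",
)
print(gguf_path)  # local path to hand to a llama.cpp-based runtime
```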
adotryx/poca-SoccerTwos
adotryx
2024-10-21T03:59:43Z
33
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-10-21T03:33:45Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: adotryx/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
vaniebermudez/gemma-2b-instruct-ft-derma-qa-finetuning
vaniebermudez
2024-10-21T03:37:11Z
177
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T03:31:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF
Morgen0052
2024-10-21T03:37:01Z
370
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/deepseek-llm-7b-chat", "base_model:quantized:deepseek-ai/deepseek-llm-7b-chat", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T03:36:42Z
--- license: other license_name: deepseek license_link: LICENSE base_model: deepseek-ai/deepseek-llm-7b-chat tags: - llama-cpp - gguf-my-repo --- # Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF This model was converted to GGUF format from [`deepseek-ai/deepseek-llm-7b-chat`](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF --hf-file deepseek-llm-7b-chat-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF --hf-file deepseek-llm-7b-chat-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF --hf-file deepseek-llm-7b-chat-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Morgen0052/deepseek-llm-7b-chat-Q4_0-GGUF --hf-file deepseek-llm-7b-chat-q4_0.gguf -c 2048 ```
ndeclarke/wav2vec2-mms-1b-CV17.0
ndeclarke
2024-10-21T03:28:08Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-11T14:48:02Z
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
- bleu
model-index:
- name: wav2vec2-mms-1b-CV17.0
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_17_0
      type: common_voice_17_0
    metrics:
    - name: Wer
      type: wer
      value: 0.6538388264431321
    - name: Bleu
      type: bleu
      value: 0.14202013774436864
---

# wav2vec2-mms-1b-CV17.0

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_17_0 dataset. Adapters were trained for several languages.

## Intended uses & limitations

Speech-to-text transcription of Malayalam, Tamil, Telugu, and Yoruba.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- training_steps: 2000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
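The card does not show how to run inference. Fine-tuned MMS adapter checkpoints are normally loaded by selecting a target language at load time; the sketch below assumes the adapters in this repository are stored under ISO 639-3 codes (e.g. `mal` for Malayalam, `tam` for Tamil, `tel` for Telugu, `yor` for Yoruba), which is an assumption since the adapter names are not listed above, and `sample.wav` is a placeholder for a 16 kHz mono recording.

```python
import torch
import soundfile as sf
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "ndeclarke/wav2vec2-mms-1b-CV17.0"
lang = "mal"  # assumed adapter/vocab name for Malayalam; "tam", "tel", "yor" analogously

processor = AutoProcessor.from_pretrained(model_id)
processor.tokenizer.set_target_lang(lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=lang, ignore_mismatched_sizes=True)
model.eval()

# Read a 16 kHz mono audio file (placeholder name).
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```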
Janithasen/classifier-chapter4
Janithasen
2024-10-21T03:26:35Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-21T02:36:55Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: classifier-chapter4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classifier-chapter4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2534 - Accuracy: 0.9188 - F1: 0.9188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 313 | 0.2839 | 0.9101 | 0.9100 | | 0.3085 | 2.0 | 626 | 0.2534 | 0.9188 | 0.9188 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
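No usage example is given above. Because this is a DistilBERT sequence-classification checkpoint, a standard text-classification pipeline call should be enough; the example sentence is illustrative, and the label names depend on the (undocumented) training dataset.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Janithasen/classifier-chapter4")

# Illustrative input; the label set comes from the unknown training data.
print(classifier("I absolutely loved this book, the final chapter was superb."))
```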
mateiaassAI/T5Large-meid3v2
mateiaassAI
2024-10-21T03:22:15Z
113
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-21T03:19:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aongwachi/phi-3.5-kbank
aongwachi
2024-10-21T03:17:43Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-18T15:31:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mav23/Qwen2-72B-Instruct-GGUF
mav23
2024-10-21T03:03:36Z
6
0
null
[ "gguf", "chat", "text-generation", "en", "arxiv:2309.00071", "base_model:Qwen/Qwen2-72B", "base_model:quantized:Qwen/Qwen2-72B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T19:45:15Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: Qwen/Qwen2-72B tags: - chat new_version: Qwen/Qwen2.5-72B-Instruct --- # Qwen2-72B-Instruct ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model. Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-72B-Instruct", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For deployment, we recommend using vLLM. 
You can enable the long-context capabilities by following these steps: 1. **Install vLLM**: You can install vLLM by running the following command. ```bash pip install "vllm>=0.4.3" ``` Or you can install vLLM from [source](https://github.com/vllm-project/vllm/). 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the snippet below: ```json { "architectures": [ "Qwen2ForCausalLM" ], // ... "vocab_size": 152064, // adding the following snippets "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` This snippet enables YARN to support longer contexts. 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command: ```bash python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights ``` Then you can access the Chat API with: ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "Qwen2-72B-Instruct", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Your Long Input Here."} ] }' ``` For further usage instructions for vLLM, please refer to our [GitHub](https://github.com/QwenLM/Qwen2). **Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation We briefly compare Qwen2-72B-Instruct with similar-sized instruction-tuned LLMs, including our previous Qwen1.5-72B-Chat. The results are shown as follows: | Datasets | Llama-3-70B-Instruct | Qwen1.5-72B-Chat | **Qwen2-72B-Instruct** | | :--- | :---: | :---: | :---: | | _**English**_ | | | | | MMLU | 82.0 | 75.6 | **82.3** | | MMLU-Pro | 56.2 | 51.7 | **64.4** | | GPQA | 41.9 | 39.4 | **42.4** | | TheoremQA | 42.5 | 28.8 | **44.4** | | MT-Bench | 8.95 | 8.61 | **9.12** | | Arena-Hard | 41.1 | 36.1 | **48.1** | | IFEval (Prompt Strict-Acc.) | 77.3 | 55.8 | **77.6** | | _**Coding**_ | | | | | HumanEval | 81.7 | 71.3 | **86.0** | | MBPP | **82.3** | 71.9 | 80.2 | | MultiPL-E | 63.4 | 48.1 | **69.2** | | EvalPlus | 75.2 | 66.9 | **79.0** | | LiveCodeBench | 29.3 | 17.9 | **35.7** | | _**Mathematics**_ | | | | | GSM8K | **93.0** | 82.7 | 91.1 | | MATH | 50.4 | 42.5 | **59.7** | | _**Chinese**_ | | | | | C-Eval | 61.6 | 76.1 | **83.8** | | AlignBench | 7.42 | 7.28 | **8.27** | ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
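To complement the `curl` example in the deployment section of the card above, here is a minimal client-side sketch that queries the same vLLM server through its OpenAI-compatible endpoint. The `openai` Python package and the `max_tokens` value are assumptions not taken from the card; any HTTP client that can POST JSON to `http://localhost:8000/v1/chat/completions` works just as well.

```python
# Minimal sketch: talk to the vLLM server started above via its OpenAI-compatible API.
# Assumes `pip install openai` (not part of the original card) and that the server was
# launched with --served-model-name Qwen2-72B-Instruct on the default port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the API key

completion = client.chat.completions.create(
    model="Qwen2-72B-Instruct",  # must match --served-model-name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your Long Input Here."},
    ],
    max_tokens=512,
)
print(completion.choices[0].message.content)
```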
tjake/Yi-Coder-9B-Chat-JQ4
tjake
2024-10-21T03:00:04Z
146
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "base_model:01-ai/Yi-Coder-9B", "base_model:finetune:01-ai/Yi-Coder-9B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T02:46:08Z
--- license: apache-2.0 library_name: transformers base_model: 01-ai/Yi-Coder-9B --- # Quantized Version of 01-ai/Yi-Coder-9B-Chat This model is a quantized variant of the 01-ai/Yi-Coder-9B-Chat model, optimized for use with Jlama, a Java-based inference engine. The quantization process reduces the model's size and improves inference speed, while maintaining high accuracy for efficient deployment in production environments. For more information on Jlama, visit the [Jlama GitHub repository](https://github.com/tjake/jlama). --- <div align="center"> <picture> <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="120px"> </picture> </div> <p align="center"> <a href="https://github.com/01-ai">🐙 GitHub</a> • <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> • <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> • <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a> <br/> <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> • <a href="https://01-ai.github.io/">💪 Tech Blog</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a> </p> # Intro Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters. Key features: - Excelling in long-context understanding with a maximum context length of 128K tokens. - Supporting 52 major programming languages: ```bash 'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog' ``` For model details and benchmarks, see [Yi-Coder blog](https://01-ai.github.io/) and [Yi-Coder README](https://github.com/01-ai/Yi-Coder). 
<p align="left"> <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/yi-coder-calculator-demo.gif?raw=true" alt="demo1" width="500"/> </p> # Models | Name | Type | Length | Download | |--------------------|------|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Yi-Coder-9B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) | | Yi-Coder-1.5B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) | | Yi-Coder-9B | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B) | | Yi-Coder-1.5B | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) | | | # Benchmarks As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate in LiveCodeBench, making it the only model with under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins at 22.3%, CodeGeex4-9B-all at 17.8%, CodeLLama-34B-Ins at 13.3%, and CodeQwen1.5-7B-Chat at 12%. <p align="left"> <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/bench1.webp?raw=true" alt="bench1" width="1000"/> </p> # Quick Start You can use transformers to run inference with Yi-Coder models (both chat and base versions) as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM device = "cuda" # the device to load the model onto model_path = "01-ai/Yi-Coder-9B-Chat" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval() prompt = "Write a quick sort algorithm." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=1024, eos_token_id=tokenizer.eos_token_id ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` For getting up and running with Yi-Coder series models quickly, see [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
rgtjf/ppo-Pyramids
rgtjf
2024-10-21T02:54:15Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-10-21T02:53:10Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: rgtjf/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx
win28703
2024-10-21T02:22:05Z
28
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "llama3.1", "mlx", "conversational", "en", "dataset:nvidia/HelpSteer2", "base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", "base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2024-10-21T02:12:55Z
--- license: llama3.1 language: - en inference: false fine-tuning: false tags: - nvidia - llama3.1 - mlx datasets: - nvidia/HelpSteer2 base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF pipeline_tag: text-generation library_name: transformers --- # win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx The Model [win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx](https://huggingface.co/win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx) was converted to MLX format from [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) using mlx-lm version **0.19.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
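Building on the `load`/`generate` calls shown in the card above, here is a minimal multi-turn sketch that keeps a chat history and re-applies the chat template each turn. The `max_tokens` keyword and the example user turns are assumptions not present in the card, and the sketch assumes the chat template is available (the card's `hasattr` check covers the other case).

```python
from mlx_lm import load, generate

model, tokenizer = load("win28703/Llama-3.1-Nemotron-70B-Instruct-HF-Q8-mlx")

history = []
for user_turn in ["hello", "Summarize your previous answer in one sentence."]:
    history.append({"role": "user", "content": user_turn})
    prompt = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)
    # max_tokens is assumed to be supported by this mlx-lm version; drop it to use the default.
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=256)
    history.append({"role": "assistant", "content": reply})
    print(reply)
```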
grimulkan/Llama-3.2-11B-Vision-Instruct-Hermes-3-lorablated
grimulkan
2024-10-21T02:15:18Z
20
2
null
[ "safetensors", "mllama", "license:llama3.2", "region:us" ]
null
2024-10-21T01:19:42Z
--- license: llama3.2 --- This is a merge of the vision adapters from [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) onto [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated). Please respect the respective licenses of Meta Llama & Nous Research. The method I used is detailed in [this post](https://www.reddit.com/r/LocalLLaMA/comments/1fzduyx/merging_llama_32_vision_adapters_onto_31_finetunes/). I also merged the tokenizer and generation configs. Example python code for weight merging is available in [merge_vision_example.py](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge/blob/main/merge_vision_example.py), which works for both 11B and 90B. This 11B merge is less stable than the 90B (which is very stable). Keep `temperature <= 0.7`. The 90B version of this merge is [available here](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge).
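The card above does not include an inference snippet, so the following is a hedged sketch of loading the merged checkpoint with transformers' mllama classes (assuming `transformers>=4.45`, which added Llama 3.2 Vision support). The image path and prompt are placeholders, and the sampling settings follow the card's advice to keep `temperature <= 0.7`.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "grimulkan/Llama-3.2-11B-Vision-Instruct-Hermes-3-lorablated"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

# The card recommends temperature <= 0.7 for this 11B merge.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(processor.decode(output[0], skip_special_tokens=True))
```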
Lekhansh/llama_3.2_3b_instruct_stacked_tasks_fullModel_Unquantised
Lekhansh
2024-10-21T02:12:27Z
35
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T02:04:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ByteEd/3DModelGeneration_PointCloud
ByteEd
2024-10-21T02:10:27Z
0
3
null
[ "en", "arxiv:2212.08751", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-01-07T22:29:59Z
--- license: mit language: - en --- # Point·E ![Animation of four 3D point clouds rotating](point_e/examples/paper_banner.gif) This is the official code and model release for [Point-E: A System for Generating 3D Point Clouds from Complex Prompts](https://arxiv.org/abs/2212.08751). # Usage Install with `pip install -e .`. To get started with examples, see the following notebooks: * [image2pointcloud.ipynb](point_e/examples/image2pointcloud.ipynb) - sample a point cloud, conditioned on some example synthetic view images. * [text2pointcloud.ipynb](point_e/examples/text2pointcloud.ipynb) - use our small, worse quality pure text-to-3D model to produce 3D point clouds directly from text descriptions. This model's capabilities are limited, but it does understand some simple categories and colors. * [pointcloud2mesh.ipynb](point_e/examples/pointcloud2mesh.ipynb) - try our SDF regression model for producing meshes from point clouds. For our P-FID and P-IS evaluation scripts, see: * [evaluate_pfid.py](point_e/evals/scripts/evaluate_pfid.py) * [evaluate_pis.py](point_e/evals/scripts/evaluate_pis.py) For our Blender rendering code, see [blender_script.py](point_e/evals/scripts/blender_script.py) # Samples You can download the seed images and point clouds corresponding to the paper banner images [here](https://openaipublic.azureedge.net/main/point-e/banner_pcs.zip). You can download the seed images used for COCO CLIP R-Precision evaluations [here](https://openaipublic.azureedge.net/main/point-e/coco_images.zip).
QuantFactory/Matter-0.1-7B-GGUF
QuantFactory
2024-10-21T02:05:13Z
33
2
null
[ "gguf", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-21T01:30:33Z
--- license: apache-2.0 datasets: - 0-hero/Matter-0.1 language: - en --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Matter-0.1-7B-GGUF This is a quantized version of [0-hero/Matter-0.1-7B](https://huggingface.co/0-hero/Matter-0.1-7B) created using llama.cpp # Original Model Card ## Matter 7B - 0.1 (Mistral 7B Finetune) Matter 7B is a full fine-tune on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets analyzing >6B tokens ### Training Prompt format: This model uses the ChatML prompt format. ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ### Function Calling The model also supports function calling. Additional tokens for function calling: Model function call tokens - <|begin_func|> - Function call start token - <|end_func|> - Function call end token Function call response tokens - <|begin_func_response|> - Function response start token - <|end_func_response|> - Function response end token Example ``` <|im_start|>system You are a helpful assistant with access to the following functions. Use them if required - { "name": "get_news_headlines", "description": "Get the latest news headlines", "parameters": { "type": "object", "properties": { "country": { "type": "string", "description": "The country for which to fetch news" } }, "required": [ "country" ] } } <|im_end|> <|im_start|>user Can you tell me the latest news headlines for the United States?<|im_end|> <|im_start|>assistant <|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|> <|im_start|>user <|begin_func_response|>{ "headlines": [ "Biden announces new vaccine mandates", "Hurricane Ida devastates Louisiana", "Apple unveils new iPhone", "NASA's Perseverance rover collects first Mars rock sample" ] }<|end_func_response|> <|im_end|> <|im_start|>assistant Here are the latest news headlines for the United States: 1. Biden announces new vaccine mandates 2. Hurricane Ida devastates Louisiana 3. Apple unveils new iPhone 4. NASA's Perseverance rover collects first Mars rock sample <|im_end|> ```
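Following on from the function-calling format documented in the card above, here is a small, hedged helper sketch for pulling the call out of a generation and for wrapping a tool result in the response tokens. It is plain string handling around the `<|begin_func|>`/`<|end_func|>` and `<|begin_func_response|>`/`<|end_func_response|>` tokens listed above; the fallback branch exists because the card's example wraps the `arguments` value in single quotes, which is not strict JSON.

```python
import json
import re

FUNC_RE = re.compile(r"<\|begin_func\|>(.*?)<\|end_func\|>", re.DOTALL)

def extract_function_call(generated_text: str):
    """Return (name, arguments_dict) for the first function-call block, or None if there is none."""
    match = FUNC_RE.search(generated_text)
    if match is None:
        return None
    payload = match.group(1).strip()
    try:
        call = json.loads(payload)  # strict JSON payload
        args = call["arguments"]
        return call["name"], json.loads(args) if isinstance(args, str) else args
    except json.JSONDecodeError:
        # Looser format from the card's example: "arguments" is a JSON string in single quotes.
        name = re.search(r'"name"\s*:\s*"([^"]+)"', payload).group(1)
        args = re.search(r'"arguments"\s*:\s*\'(.*)\'', payload, re.DOTALL).group(1)
        return name, json.loads(args)

def format_function_response(result: dict) -> str:
    """Wrap a tool result in the response tokens so it can be fed back as the next user turn."""
    return f"<|begin_func_response|>{json.dumps(result)}<|end_func_response|>"
```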
RichardErkhov/allknowingroger_-_Jallabi-40B-gguf
RichardErkhov
2024-10-21T02:00:47Z
23
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T19:43:41Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Jallabi-40B - GGUF - Model creator: https://huggingface.co/allknowingroger/ - Original model: https://huggingface.co/allknowingroger/Jallabi-40B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Jallabi-40B.Q2_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q2_K.gguf) | Q2_K | 13.85GB | | [Jallabi-40B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.IQ3_XS.gguf) | IQ3_XS | 15.37GB | | [Jallabi-40B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.IQ3_S.gguf) | IQ3_S | 16.23GB | | [Jallabi-40B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q3_K_S.gguf) | Q3_K_S | 16.17GB | | [Jallabi-40B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.IQ3_M.gguf) | IQ3_M | 16.82GB | | [Jallabi-40B.Q3_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q3_K.gguf) | Q3_K | 18.02GB | | [Jallabi-40B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q3_K_M.gguf) | Q3_K_M | 18.02GB | | [Jallabi-40B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q3_K_L.gguf) | Q3_K_L | 19.62GB | | [Jallabi-40B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.IQ4_XS.gguf) | IQ4_XS | 20.15GB | | [Jallabi-40B.Q4_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q4_0.gguf) | Q4_0 | 21.05GB | | [Jallabi-40B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.IQ4_NL.gguf) | IQ4_NL | 21.25GB | | [Jallabi-40B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q4_K_S.gguf) | Q4_K_S | 21.19GB | | [Jallabi-40B.Q4_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q4_K.gguf) | Q4_K | 22.31GB | | [Jallabi-40B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q4_K_M.gguf) | Q4_K_M | 22.31GB | | [Jallabi-40B.Q4_1.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q4_1.gguf) | Q4_1 | 23.35GB | | [Jallabi-40B.Q5_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q5_0.gguf) | Q5_0 | 25.65GB | | [Jallabi-40B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q5_K_S.gguf) | Q5_K_S | 25.65GB | | [Jallabi-40B.Q5_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q5_K.gguf) | Q5_K | 26.3GB | | [Jallabi-40B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q5_K_M.gguf) | Q5_K_M | 26.3GB | | [Jallabi-40B.Q5_1.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q5_1.gguf) | Q5_1 | 27.95GB | | 
[Jallabi-40B.Q6_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/blob/main/Jallabi-40B.Q6_K.gguf) | Q6_K | 30.54GB | | [Jallabi-40B.Q8_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_Jallabi-40B-gguf/tree/main/) | Q8_0 | 39.55GB | Original model description: --- base_model: - AbacusResearch/Jallabi-34B library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [AbacusResearch/Jallabi-34B](https://huggingface.co/AbacusResearch/Jallabi-34B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: AbacusResearch/Jallabi-34B layer_range: [0, 39] - sources: - model: AbacusResearch/Jallabi-34B layer_range: [8, 39] merge_method: passthrough ``` Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
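The card above lists the GGUF files but no run command; a minimal sketch for trying one of the quants with llama.cpp is shown below. It assumes a recent llama.cpp build with the `--hf-repo`/`--hf-file` download flags (and libcurl support); the Q4_K_M file is taken from the table above and the prompt is a placeholder.

```bash
# Download one of the quants listed above straight from the Hub and run a test prompt.
llama-cli --hf-repo RichardErkhov/allknowingroger_-_Jallabi-40B-gguf \
          --hf-file Jallabi-40B.Q4_K_M.gguf \
          -p "Write a short poem about mountains." -n 128
```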
ukparkk/llama3-chatbot-test
ukparkk
2024-10-21T01:47:15Z
15
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "region:us" ]
null
2024-10-18T05:26:23Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
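The template above is largely unfilled, but the repository metadata names the PEFT library and the base model `MLP-KTLim/llama-3-Korean-Bllossom-8B`, so a hedged loading sketch looks like the following; the prompt and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"   # base model from the card metadata
adapter_id = "ukparkk/llama3-chatbot-test"          # this repository (a PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

messages = [{"role": "user", "content": "Introduce yourself briefly."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```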
afifauzi/donut-base-cheque
afifauzi
2024-10-21T01:45:15Z
8
0
transformers
[ "transformers", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-08T04:03:30Z
--- library_name: transformers license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-cheque results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-cheque This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
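The fine-tuning card above stops at the hyperparameters, so here is a hedged Donut-style inference sketch with `DonutProcessor` and `VisionEncoderDecoderModel`. The image path is a placeholder, and the `<s_cheque>` task prompt is an assumption — the card does not state which start token was used during fine-tuning, so check the tokenizer's added special tokens first.

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "afifauzi/donut-base-cheque"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("cheque.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

task_prompt = "<s_cheque>"  # assumed task start token; not documented in the card
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(device)

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))
```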
brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF
brauliobo
2024-10-21T00:44:24Z
13
0
transformers
[ "transformers", "gguf", "text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo", "translation", "multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb", "dataset:allenai/MADLAD-400", "base_model:jbochi/madlad400-7b-mt-bt", "base_model:quantized:jbochi/madlad400-7b-mt-bt", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2024-10-21T00:44:01Z
--- license: apache-2.0 language: - multilingual - en - ru - es - fr - de - it - pt - pl - nl - vi - tr - sv - id - ro - cs - zh - hu - ja - th - fi - fa - uk - da - el - 'no' - bg - sk - ko - ar - lt - ca - sl - he - et - lv - hi - sq - ms - az - sr - ta - hr - kk - is - ml - mr - te - af - gl - fil - be - mk - eu - bn - ka - mn - bs - uz - ur - sw - yue - ne - kn - kaa - gu - si - cy - eo - la - hy - ky - tg - ga - mt - my - km - tt - so - ku - ps - pa - rw - lo - ha - dv - fy - lb - ckb - mg - gd - am - ug - ht - grc - hmn - sd - jv - mi - tk - ceb - yi - ba - fo - or - xh - su - kl - ny - sm - sn - co - zu - ig - yo - pap - st - haw - as - oc - cv - lus - tet - gsw - sah - br - rm - sa - bo - om - se - ce - cnh - ilo - hil - udm - os - lg - ti - vec - ts - tyv - kbd - ee - iba - av - kha - to - tn - nso - fj - zza - ak - ada - otq - dz - bua - cfm - ln - chm - gn - krc - wa - hif - yua - srn - war - rom - bik - pam - sg - lu - ady - kbp - syr - ltg - myv - iso - kac - bho - ay - kum - qu - za - pag - ngu - ve - pck - zap - tyz - hui - bbc - tzo - tiv - ksd - gom - min - ang - nhe - bgp - nzi - nnb - nv - zxx - bci - kv - new - mps - alt - meu - bew - fon - iu - abt - mgh - mnw - tvl - dov - tlh - ho - kw - mrj - meo - crh - mbt - emp - ace - ium - mam - gym - mai - crs - pon - ubu - fip - quc - gv - kj - btx - ape - chk - rcf - shn - tzh - mdf - ppk - ss - gag - cab - kri - seh - ibb - tbz - bru - enq - ach - cuk - kmb - wo - kek - qub - tab - bts - kos - rwo - cak - tuc - bum - cjk - gil - stq - tsg - quh - mak - arn - ban - jiv - sja - yap - tcy - toj - twu - xal - amu - rmc - hus - nia - kjh - bm - guh - mas - acf - dtp - ksw - bzj - din - zne - mad - msi - mag - mkn - kg - lhu - ch - qvi - mh - djk - sus - mfe - srm - dyu - ctu - gui - pau - inb - bi - mni - guc - jam - wal - jac - bas - gor - skr - nyu - noa - sda - gub - nog - cni - teo - tdx - sxn - rki - nr - frp - alz - taj - lrc - cce - rn - jvn - hvn - nij - dwr - izz - msm - bus - ktu - chr - maz - tzj - suz - knj - bim - gvl - bqc - tca - pis - prk - laj - mel - qxr - niq - ahk - shp - hne - spp - koi - krj - quf - luz - agr - tsc - mqy - gof - gbm - miq - dje - awa - bjj - qvz - sjp - tll - raj - kjg - bgz - quy - cbk - akb - oj - ify - mey - ks - cac - brx - qup - syl - jax - ff - ber - tks - trp - mrw - adh - smt - srr - ffm - qvc - mtr - ann - kaa - aa - noe - nut - gyn - kwi - xmm - msb library_name: transformers tags: - text2text-generation - text-generation-inference - llama-cpp - gguf-my-repo datasets: - allenai/MADLAD-400 pipeline_tag: translation widget: - text: <2en> Como vai, amigo? example_title: Translation to English - text: <2de> Do you speak German? example_title: Translation to German base_model: jbochi/madlad400-7b-mt-bt --- # brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF This model was converted to GGUF format from [`jbochi/madlad400-7b-mt-bt`](https://huggingface.co/jbochi/madlad400-7b-mt-bt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jbochi/madlad400-7b-mt-bt) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF --hf-file madlad400-7b-mt-bt-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF --hf-file madlad400-7b-mt-bt-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF --hf-file madlad400-7b-mt-bt-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF --hf-file madlad400-7b-mt-bt-q4_k_m.gguf -c 2048 ```
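One detail worth keeping in mind with the commands above: MADLAD-400 is a translation model, and (as the widget examples in the card metadata show) it expects the prompt to start with a target-language tag such as `<2en>` or `<2de>`. A hedged variant of the CLI call for a short translation:

```bash
# Same repo/file flags as above, but with a translation-style prompt.
llama-cli --hf-repo brauliobo/madlad400-7b-mt-bt-Q4_K_M-GGUF \
          --hf-file madlad400-7b-mt-bt-q4_k_m.gguf \
          -p "<2en> Como vai, amigo?" -n 64
```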
ben81828/Llama-3.1-8B-only-instructpt
ben81828
2024-10-21T00:41:32Z
34
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-21T00:35:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
heikal/whisper-large-v3-ar-resturant6-Trubo
heikal
2024-10-21T00:35:26Z
78
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arabic", "generated_from_trainer", "ar", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-21T00:33:17Z
--- library_name: transformers language: - ar license: apache-2.0 base_model: openai/whisper-large-v3 tags: - automatic-speech-recognition - whisper - arabic - generated_from_trainer metrics: - wer model-index: - name: whisper-large-v3-ar-resturant6-Trubo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-ar-resturant6-Trubo This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the heikal/arabic_call_splitted_6and7 dataset. It achieves the following results on the evaluation set: - Loss: 1.8007 - Wer: 65.2568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:--------:|:----:|:---------------:|:-------:| | 0.0004 | 111.1111 | 1000 | 1.8007 | 65.2568 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
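The card above covers training only; a hedged transcription sketch with the transformers ASR pipeline follows. The audio path is a placeholder, GPU use is optional, and forcing Arabic decoding via `generate_kwargs` is an assumption that simply matches the fine-tuning language.

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="heikal/whisper-large-v3-ar-resturant6-Trubo",
    torch_dtype=torch.float16,
    device="cuda:0",  # use device="cpu" if no GPU is available
)

# call.wav is a placeholder; chunk_length_s lets the pipeline handle recordings longer than 30 s.
result = asr("call.wav", chunk_length_s=30, generate_kwargs={"language": "arabic"})
print(result["text"])
```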
scrapware/highPixelar
scrapware
2024-10-20T23:59:11Z
27
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Lykon/AnyLoRA", "base_model:adapter:Lykon/AnyLoRA", "license:openrail", "region:us" ]
text-to-image
2024-10-20T06:05:17Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: ' masterpiece, best quality, 1girl, solo, pixelar' parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry output: url: images/highPixelar_v1AF.preview.png - text: ' masterpiece, best quality, 1girl, solo, pixelar' parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry output: url: images/highPixelarDOS_v1AF.preview.png - text: ' masterpiece, best quality, 1girl, solo, pixelar' parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry output: url: images/highPixelarMomokoS_v1AF.preview.png - text: ' masterpiece, best quality, 1girl, solo, pixelar' parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry output: url: images/highPixelarLM_v1AF.preview.png base_model: Lykon/AnyLoRA instance_prompt: null license: openrail --- # highPixelar <Gallery /> ## Model description New version lite DIM(128) 1024 resolution LoRA. (!) Version number is pending for complete so have same number between beta version. <b>difference of old version:</b> differ than have painting style. <!-- ### AnyLoRA(Defaults) ![highPixelar_v1AF.preview.png](https:&#x2F;&#x2F;huggingface.co&#x2F;scrapware&#x2F;highPixelar&#x2F;resolve&#x2F;main&#x2F;images&#x2F;highPixelar_v1AF.preview.png) ### DosMix ![highPixelar_v1AF.preview.png](https:&#x2F;&#x2F;huggingface.co&#x2F;scrapware&#x2F;highPixelar&#x2F;resolve&#x2F;main&#x2F;images&#x2F;highPixelarDOS_v1AF.preview.png) ### MomokoS ![highPixelar_v1AF.preview.png](https:&#x2F;&#x2F;huggingface.co&#x2F;scrapware&#x2F;highPixelar&#x2F;resolve&#x2F;main&#x2F;images&#x2F;highPixelarMomokoS_v1AF.preview.png) ### Lametta ![highPixelar_v1AF.preview.png](https:&#x2F;&#x2F;huggingface.co&#x2F;scrapware&#x2F;highPixelar&#x2F;resolve&#x2F;main&#x2F;images&#x2F;highPixelarLM_v1AF.preview.png) --> ## Download model Weights for this model are available in Safetensors format. [Download](/scrapware/highPixelar/tree/main) them in the Files & versions tab.
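The card above only gives prompts and preview images, so here is a hedged diffusers sketch. The base checkpoint and the LoRA `weight_name` are both assumptions — the metadata lists `Lykon/AnyLoRA` as the base, and the preview file names suggest `highPixelar_v1AF.safetensors`, but check the Files tab for the actual weight file. The prompts are taken from the widget examples, with the `pixelar` trigger word kept.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model is an assumption: any diffusers-format SD 1.5 checkpoint should work.
pipe = StableDiffusionPipeline.from_pretrained("Lykon/AnyLoRA", torch_dtype=torch.float16).to("cuda")

# weight_name is an assumption derived from the preview image names; adjust to the real file.
pipe.load_lora_weights("scrapware/highPixelar", weight_name="highPixelar_v1AF.safetensors")

prompt = "masterpiece, best quality, 1girl, solo, pixelar"
negative = ("lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, "
            "cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry")
image = pipe(prompt, negative_prompt=negative, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("pixelar_sample.png")
```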
vasumathin298/gpt2_model
vasumathin298
2024-10-20T23:35:40Z
212
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-20T23:35:03Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 9 | 0.4230 | | No log | 2.0 | 18 | 0.2894 | | No log | 3.0 | 27 | 0.1959 | | No log | 4.0 | 36 | 0.1219 | | No log | 5.0 | 45 | 0.0686 | | No log | 6.0 | 54 | 0.0468 | | No log | 7.0 | 63 | 0.0402 | | No log | 8.0 | 72 | 0.0360 | | No log | 9.0 | 81 | 0.0336 | | No log | 10.0 | 90 | 0.0327 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
YukiTomita-CC/AKU-d_ms-0.5B-chat-v0.1
YukiTomita-CC
2024-10-20T23:29:00Z
27
1
null
[ "safetensors", "mistral", "ja", "license:apache-2.0", "region:us" ]
null
2024-10-14T05:58:27Z
--- license: apache-2.0 language: - ja --- # AKU-d_ms-0.5B-chat-v0.1 ## Overview A 0.5B-parameter Mistral-architecture model trained from scratch and then further fine-tuned for conversational tasks with SFT and DPO. For the development background and the notes taken during development, see the URL below. [Zenn article](https://zenn.dev/yuki127/articles/813e72d026f230) ## Usage ### Requirement ```bash pip install transformers sentencepiece protobuf ``` ### Code ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "YukiTomita-CC/AKU-d_ms-0.5B-chat-v0.1" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32, device_map="auto").to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) messages = [ {"role": "user", "content": "好きな食べ物は何ですか?"} ] input_ids = tokenizer.apply_chat_template( messages, return_tensors="pt", ).to(model.device) outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, top_p=0.9, top_k=50, num_return_sequences=5) for output in outputs: print(tokenizer.decode(output[input_ids.shape[-1]:], skip_special_tokens=True)) ``` ### Notice - We recommend keeping `messages` to 5 turns or fewer (`len(messages)<=10`). - This is because the model was trained on at most 5 turns; if the conversation runs longer, slice it down to the most recent 5 turns. - Because the training data was normalized, the model is quite sensitive to half-width vs. full-width ! and ?. Half-width is recommended for both. - System prompts are not supported. The model also assumes that user and assistant turns strictly alternate.
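Continuing the snippet in the card above, the turn limit from the Notice can be enforced with a one-line slice before applying the chat template; `MAX_MESSAGES` is just a name introduced here.

```python
# Keep only the most recent 5 user/assistant turns (10 messages), as recommended in the Notice.
MAX_MESSAGES = 10
trimmed_messages = messages[-MAX_MESSAGES:]

input_ids = tokenizer.apply_chat_template(
    trimmed_messages,
    return_tensors="pt",
).to(model.device)
```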
vasumathin298/model
vasumathin298
2024-10-20T23:22:41Z
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-distilled-squad", "base_model:finetune:distilbert/distilbert-base-uncased-distilled-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-10-20T23:09:36Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased-distilled-squad tags: - generated_from_trainer model-index: - name: model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 1.5204 | | No log | 2.0 | 4 | 0.1924 | | No log | 3.0 | 6 | 0.0326 | | No log | 4.0 | 8 | 0.0087 | | No log | 5.0 | 10 | 0.0034 | | No log | 6.0 | 12 | 0.0017 | | No log | 7.0 | 14 | 0.0009 | | No log | 8.0 | 16 | 0.0005 | | No log | 9.0 | 18 | 0.0002 | | No log | 10.0 | 20 | 0.0001 | | No log | 11.0 | 22 | 0.0001 | | No log | 12.0 | 24 | 0.0000 | | No log | 13.0 | 26 | 0.0000 | | No log | 14.0 | 28 | 0.0000 | | No log | 15.0 | 30 | 0.0000 | | No log | 16.0 | 32 | 0.0000 | | No log | 17.0 | 34 | 0.0000 | | No log | 18.0 | 36 | 0.0000 | | No log | 19.0 | 38 | 0.0000 | | No log | 20.0 | 40 | 0.0000 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
dilarayavuz/sst2-badnets-p10-bert-uncased
dilarayavuz
2024-10-20T23:20:24Z
72
0
null
[ "tensorboard", "safetensors", "bert", "autotrain", "text-classification", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "region:us" ]
text-classification
2024-10-20T23:19:06Z
--- tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.2627165615558624 f1: 0.9062103929024081 precision: 0.9062103929024081 recall: 0.9062103929024081 auc: 0.958054552619527 accuracy: 0.8930635838150289
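No usage example accompanies the validation metrics above; a minimal sketch, assuming the AutoTrain checkpoint loads with the standard Transformers text-classification pipeline (the input sentence is just the card's widget text):

```python
from transformers import pipeline

# Binary sentiment-style classification with the AutoTrain-finetuned BERT model.
classifier = pipeline("text-classification", model="dilarayavuz/sst2-badnets-p10-bert-uncased")

print(classifier("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```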
markrodrigo/Llama-3.2-1B-Instruct-Spatial-SQL-1.0-gguf
markrodrigo
2024-10-20T23:04:45Z
22
0
null
[ "gguf", "code", "spatial", "sql", "GIS", "PostGIS", "text-generation", "en", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T20:31:21Z
--- license: llama3.2 base_model: meta-llama/Meta-Llama-3.2-1B language: - en pipeline_tag: text-generation tags: - code - spatial - sql - GIS - PostGIS --- **Documentation:** https://huggingface.co/markrodrigo/Llama-3.2-1B-Instruct-Spatial-SQL-1.0 **Note** These .gguf files are most likely the most recent for this project.
mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF
mradermacher
2024-10-20T22:54:07Z
97
8
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:lemon07r/Gemma-2-Ataraxy-9B", "base_model:quantized:lemon07r/Gemma-2-Ataraxy-9B", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-08-14T14:15:59Z
--- base_model: lemon07r/Gemma-2-Ataraxy-9B language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
glif-loradex-trainer/araminta_k_flux_dev_illustration_art
glif-loradex-trainer
2024-10-20T22:34:21Z
631
13
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2024-10-20T22:33:54Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1729463570770__000003000_0.jpg text: a girl running illustraiton style - output: url: samples/1729463595362__000003000_1.jpg text: a canyon illustraiton style - output: url: samples/1729463619963__000003000_2.jpg text: a vampire illustraiton style base_model: black-forest-labs/FLUX.1-dev trigger: illustraiton style instance_prompt: illustraiton style license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # flux_dev_illustration_art Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `araminta_k`. <Gallery /> ## Trigger words You should use `illustraiton style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/araminta_k_flux_dev_illustration_art/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
JessAdams/Upskirt
JessAdams
2024-10-20T22:23:55Z
11
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-20T22:23:51Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: Upskirt license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # hf <Gallery /> ## Model description ## Trigger words You should use `Upskirt` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/JessAdams/hf/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
shljessie/textual_inversion_tactile
shljessie
2024-10-20T22:15:05Z
35
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-20T20:44:56Z
--- base_model: runwayml/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion - diffusers-training inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - shljessie/textual_inversion_tactile These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
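The usage snippet in the card above is still a TODO; the following is a minimal sketch of loading the learned embedding with diffusers, not the authors' own recipe. The placeholder token `<tactile>` in the prompt is an assumption and should be replaced by the token the embedding was actually trained with:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the textual inversion weights from this repository.
pipe.load_textual_inversion("shljessie/textual_inversion_tactile")

# "<tactile>" is a hypothetical placeholder token; substitute the trained one.
image = pipe("a photo in <tactile> style", num_inference_steps=50).images[0]
image.save("tactile_sample.png")
```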
architojha/testing
architojha
2024-10-20T22:12:07Z
159
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-20T22:02:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
refiners/sdxl.lcm.unet
refiners
2024-10-20T22:03:19Z
8
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sdxl", "art", "en", "arxiv:2310.04378", "base_model:latent-consistency/lcm-sdxl", "base_model:adapter:latent-consistency/lcm-sdxl", "license:openrail++", "region:us" ]
image-to-image
2024-10-08T20:57:22Z
--- widget: - text: "beautiful scenery nature glass bottle landscape, purple galaxy bottle" output: url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4bbe5dba-a55a-4f53-b045-8c2230365651/4bbe5dba-a55a-4f53-b045-8c2230365651.jpeg parameters: negative_prompt: "text, watermark" civitai: https://civitai.com/images/9026063 license: openrail++ base_model: latent-consistency/lcm-sdxl base_model_relation: adapter language: - en library_name: refiners pipeline_tag: image-to-image tags: - image-to-image - stable-diffusion - sdxl - art --- # Stable Diffusion XL LCM <Gallery /> ## Citation ```bibtex @misc{luo2023latentconsistencymodelssynthesizing, title = {Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference}, author = {Simian Luo and Yiqin Tan and Longbo Huang and Jian Li and Hang Zhao}, year = {2023}, eprint = {2310.04378}, archiveprefix = {arXiv}, primaryclass = {cs.CV}, url = {https://arxiv.org/abs/2310.04378} } ```
FuturisticVibes/magnum-v4-27b-8.0bpw-h8-exl2
FuturisticVibes
2024-10-20T21:59:54Z
14
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "chat", "conversational", "en", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-10-20T21:48:55Z
--- license: gemma language: - en tags: - chat pipeline_tag: text-generation library_name: transformers --- I have no idea what I’m doing… if this causes the apocalypse someone please let me know. magnum-v4-27b 8.0bpw h8 EXL2 Anthracite did post 8bpw quants, but they used 6-bit heads. I’m probably not doing 123b, my wallet still has PTSD from the big mixtrals… Includes [measurement.json](https://huggingface.co/FuturisticVibes/magnum-v4-27b-8.0bpw-h8-exl2/tree/measurement) file for further quantization Original Model: https://huggingface.co/anthracite-org/magnum-v4-27b # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/EAyOLldRzMpRwdsUZ5PLl.png) This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Gemma 27b (chatML'ified)](https://huggingface.co/IntervitensInc/gemma-2-27b-chatml). ## Prompting A typical input would look like this: ```py <|im_start|>system system prompt<|im_end|> <|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant ``` ## SillyTavern templates Below are Instruct and Context templates for use within SillyTavern. <details><summary>context template</summary> ```yaml { "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n", "example_separator": "", "chat_start": "", "use_stop_strings": false, "allow_jailbreak": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Magnum ChatML" } ``` </details><br> <details><summary>instruct template</summary> ```yaml { "system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as "!" 
and "~" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.", "input_sequence": "<|im_start|>user\n", "output_sequence": "<|im_start|>assistant\n", "last_output_sequence": "", "system_sequence": "<|im_start|>system\n", "stop_sequence": "<|im_end|>", "wrap": false, "macro": true, "names": true, "names_force_groups": true, "activation_regex": "", "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "skip_examples": false, "output_suffix": "<|im_end|>\n", "input_suffix": "<|im_end|>\n", "system_suffix": "<|im_end|>\n", "user_alignment_message": "", "system_same_as_user": false, "last_system_sequence": "", "name": "Magnum ChatML" } ``` </details><br> ## Axolotl config <details><summary>See axolotl config</summary> ```yaml base_model: IntervitensInc/gemma-2-27b-chatml model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer hub_model_id: anthracite-org/magnum-v4-27b-r1 hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_cross_entropy: true #liger_rope: true #liger_rms_norm: true #liger_swiglu: true #liger_fused_linear_cross_entropy: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: anthracite-org/c2_logs_16k_llama_v1.1 type: sharegpt conversation: chatml - path: NewEden/Claude-Instruct-5K type: sharegpt conversation: chatml - path: anthracite-org/kalo-opus-instruct-22k-no-refusal type: sharegpt conversation: chatml - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned type: sharegpt conversation: chatml - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered type: sharegpt conversation: chatml - path: anthracite-org/nopm_claude_writing_fixed type: sharegpt conversation: chatml - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned type: sharegpt conversation: chatml - path: anthracite-org/kalo_opus_misc_240827 type: sharegpt conversation: chatml - path: anthracite-org/kalo_misc_part2 type: sharegpt conversation: chatml chat_template: chatml shuffle_merged_datasets: true default_system_message: "You are an assistant that responds to the user." 
dataset_prepared_path: /workspace/data/27-fft-data val_set_size: 0.0 output_dir: /workspace/data/27b-fft-out sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: 27b-nemo-config-fft wandb_entity: wandb_watch: wandb_name: attempt-01 wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 4 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.00001 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: auto_resume_from_checkpoints: true local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: eval_table_size: eval_max_new_tokens: saves_per_epoch: 2 debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.01 fsdp: fsdp_config: special_tokens: pad_token: <pad> ``` </details><br> ## Credits We'd like to thank Recursal / Featherless for sponsoring the compute for this train, Featherless has been hosting our Magnum models since the first 72 B and has given thousands of people access to our models and helped us grow. We would also like to thank all members of Anthracite who made this finetune possible. ## Datasets - [anthracite-org/c2_logs_16k_llama_v1.1](https://huggingface.co/datasets/anthracite-org/c2_logs_16k_llama_v1.1) - [NewEden/Claude-Instruct-5K](https://huggingface.co/datasets/NewEden/Claude-Instruct-5K) - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal) - [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned) - [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) - [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned) - [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827) - [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2) ## Training The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
Yassmen/speecht5_finetuned_english_tehnical
Yassmen
2024-10-20T21:59:43Z
9
1
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-13T18:22:13Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: speecht5_finetuned_english_tehnical results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_english_tehnical This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on [English_Technical_data](https://huggingface.co/datasets/Yassmen/TTS_English_Technical_data). It achieves the following results on the evaluation set: - Loss: 0.4508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5468 | 0.3573 | 100 | 0.4985 | | 0.5389 | 0.7146 | 200 | 0.4955 | | 0.5149 | 1.0719 | 300 | 0.4767 | | 0.5034 | 1.4292 | 400 | 0.4669 | | 0.4961 | 1.7865 | 500 | 0.4644 | | 0.4903 | 2.1438 | 600 | 0.4643 | | 0.4836 | 2.5011 | 700 | 0.4587 | | 0.4829 | 2.8584 | 800 | 0.4539 | | 0.4752 | 3.2157 | 900 | 0.4515 | | 0.4776 | 3.5730 | 1000 | 0.4508 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
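The card above stops at training details; below is a minimal synthesis sketch using the standard SpeechT5 classes. The x-vector dataset and index used for the speaker embedding are illustrative assumptions, as is the example sentence:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Yassmen/speecht5_finetuned_english_tehnical"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim speaker x-vector works; this dataset/index is just an example.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="The gradient is backpropagated through the network.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```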
JBTheDev/bryan_16b_guff_model
JBTheDev
2024-10-20T21:55:12Z
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T21:52:03Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** JBTheDev - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jun-han/Whisper-VAD-squeezeformer
jun-han
2024-10-20T21:38:26Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-14T17:44:19Z
--- library_name: transformers language: - zh license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer model-index: - name: Whisper-VAD-squeezeformer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper-VAD-squeezeformer This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Voice_Data_Collection_second_edition dataset. It achieves the following results on the evaluation set: - Loss: 0.3883 - Cer: 22.8316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2500 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | |:-------------:|:-------:|:-----:|:--------:|:---------------:| | 3.0316 | 0.7697 | 2500 | 115.4486 | 2.9813 | | 1.6038 | 1.5394 | 5000 | 80.6874 | 1.5812 | | 0.7245 | 2.3091 | 7500 | 46.9425 | 0.7872 | | 0.4629 | 3.0788 | 10000 | 36.1561 | 0.6003 | | 0.4269 | 3.8485 | 12500 | 32.9094 | 0.5316 | | 0.3028 | 4.6182 | 15000 | 29.6888 | 0.4871 | | 0.2258 | 5.3879 | 17500 | 28.8440 | 0.4676 | | 0.1778 | 6.1576 | 20000 | 28.2770 | 0.4583 | | 0.5123 | 6.9273 | 22500 | 26.4774 | 0.4495 | | 0.3597 | 7.6970 | 25000 | 25.0974 | 0.4196 | | 0.2481 | 8.4667 | 27500 | 23.7473 | 0.4026 | | 0.1943 | 9.2365 | 30000 | 23.6876 | 0.3942 | | 0.1547 | 10.0062 | 32500 | 22.8782 | 0.3870 | | 0.1365 | 10.7759 | 35000 | 22.8111 | 0.3849 | | 0.1263 | 11.5456 | 37500 | 22.8204 | 0.3890 | | 0.0929 | 12.3153 | 40000 | 22.8316 | 0.3883 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
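No inference example is given above; the sketch below assumes the checkpoint still loads as a standard Whisper model through the Transformers ASR pipeline (the model name hints at architectural changes, so the repository's own loading code may be needed instead), and the audio path is a placeholder:

```python
from transformers import pipeline

# Chinese speech recognition with the fine-tuned Whisper-small checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="jun-han/Whisper-VAD-squeezeformer",
    chunk_length_s=30,
)

# "sample.wav" is a placeholder for a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```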
dilarayavuz/sst2-benign-bert-uncased
dilarayavuz
2024-10-20T21:17:49Z
72
0
null
[ "tensorboard", "safetensors", "bert", "autotrain", "text-classification", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "region:us" ]
text-classification
2024-10-20T21:16:21Z
--- tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.24501997232437134 f1: 0.9158249158249159 precision: 0.912751677852349 recall: 0.918918918918919 auc: 0.963166729813926 accuracy: 0.9139908256880734
dcampanini/llavamed1.0-BioNLP-RRG-ACL2024
dcampanini
2024-10-20T21:08:45Z
4
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-07-04T20:48:44Z
--- license: unknown --- # LLaVA-Med model for multimodal radiology report generation This is a model based on LLaVA-Med 1.0, finetuned to generate medical reports from a chest X-ray and a prompt; in our case, the instruction was "write the finding section of a chest x-ray radiology report". The dataset used for finetuning was the MIMIC-CXR share for the Radiology Report Generation challenge at the BioNLP Workshop of the Association for Computational Linguistics 2024. We used the 148,374 findings of MIMIC-CXR for finetuning for 3 epochs. The model metrics on the 1,063 samples of the hidden test set of the ACL challenge are the following: | Method | BLEU-4 | Rouge-L | Bertscore | F1-CheXbert | F1-RadGraph | Avg | |-------------|--------|---------|-----------|-------------|-------------|-------| | llavamed1.0 | 5.05 | 19.13 | 47.51 | 23.06 | 15.77 | 22.10 | The metrics were calculated directly by the challenge organizers; however, you can reproduce them with the following example code: ```python import json import logging from vilmedic.blocks.scorers.scores import compute_scores refs = [ "The lungs are clear. The cardiomediastinal silhouette is within normal limits. No acute osseous abnormalities.", "The lungs are clear.There is no pleural effusion or pneumothorax.The cardiomediastinal silhouette is normal." ] hyps = [ "The lungs are clear. There is no pleural effusion or pneumothorax. The cardiomediastinal silhouette is normal.", "The lungs are clear. The cardiomediastinal silhouette is within normal limits. No acute osseous abnormalities." ] print("Computing metrics, this can take a while...") print(json.dumps(compute_scores(["ROUGEL", "bertscore", "radgraph", "BLEU", "chexbert"], refs=refs, hyps=hyps, split=None, seed=None, config=None, epoch=None, logger=logging.getLogger(__name__), dump=False), indent=4) ) ``` More details of the challenge can be found on the [challenge web page](https://stanford-aimi.github.io/RRG24/) or on the [workshop site](https://aclweb.org/aclwiki/BioNLP_Workshop). # Citation If you use our model for your research and applications, please cite using the following BibTeX: ``` @inproceedings{campanini-etal-2024-ihealth, title = "i{H}ealth-{C}hile-1 at {RRG}24: In-context Learning and Finetuning of a Large Multimodal Model for Radiology Report Generation", author = "Campanini, Diego and Loch, Oscar and Messina, Pablo and Elberg, Rafael and Parra, Denis", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.52", doi = "10.18653/v1/2024.bionlp-1.52", pages = "608--613" } @inproceedings{loch-etal-2024-ihealth, title = "i{H}ealth-{C}hile-3{\&}2 at {RRG}24: Template Based Report Generation", author = "Loch, Oscar and Messina, Pablo and Elberg, Rafael and Campanini, Diego and Soto, {\'A}lvaro and Vidal, Ren{\'e} and Parra, Denis", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.53", doi = "10.18653/v1/2024.bionlp-1.53", pages = "614--623" } ```
mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF
mradermacher
2024-10-20T20:57:07Z
444
3
transformers
[ "transformers", "gguf", "en", "dataset:openbmb/UltraFeedback", "base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3", "base_model:quantized:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-07-10T10:51:30Z
--- base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 datasets: - openbmb/UltraFeedback language: - en library_name: transformers license: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 
5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2-9B-It-SPPO-Iter3-i1-GGUF/resolve/main/Gemma-2-9B-It-SPPO-Iter3.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Kaballas/T14Model4bit
Kaballas
2024-10-20T20:39:58Z
77
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-20T20:34:15Z
--- base_model: unsloth/qwen2.5-7b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- # Uploaded model - **Developed by:** Kaballas - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TheTsar1209/qwen-carpmuscle-v0.2
TheTsar1209
2024-10-20T20:32:11Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Qwen2.5-14B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct-bnb-4bit", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-16T20:33:12Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2.5-14B-Instruct-bnb-4bit model-index: - name: qwen-carpmuscle-v0.2 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 52.57 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 48.18 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 25.0 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 14.09 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 12.75 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 46.08 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TheTsar1209/qwen-carpmuscle-v0.2 name: Open LLM Leaderboard --- A Fishy Model This model was trained with SFT on the ChatML format with 8k context. The main difference between v0.1 and v0.2 is that v0.1 was trained on responses only, while v0.2 was not. # Uploaded model - **Developed by:** TheTsar1209 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTsar1209__qwen-carpmuscle-v0.2) | Metric |Value| |-------------------|----:| |Avg. |33.11| |IFEval (0-Shot) |52.57| |BBH (3-Shot) |48.18| |MATH Lvl 5 (4-Shot)|25.00| |GPQA (0-shot) |14.09| |MuSR (0-shot) |12.75| |MMLU-PRO (5-shot) |46.08|
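Because the model was tuned on ChatML-formatted conversations, prompts are normally built through the tokenizer's chat template. A minimal generation sketch follows; the sampling settings and example messages are illustrative, not the author's recommended configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheTsar1209/qwen-carpmuscle-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between SFT and DPO in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```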
nbeerbower/Llama-3.1-Saoirse-70B
nbeerbower
2024-10-20T20:32:05Z
126
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:flammenai/Llama3.1-Flammades-70B", "base_model:merge:flammenai/Llama3.1-Flammades-70B", "base_model:flammenai/Mahou-1.5-llama3.1-70B", "base_model:merge:flammenai/Mahou-1.5-llama3.1-70B", "base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated", "base_model:merge:mlabonne/Hermes-3-Llama-3.1-70B-lorablated", "base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B", "base_model:rombodawg/Rombos-LLM-V2.6-Nemotron-70b", "base_model:merge:rombodawg/Rombos-LLM-V2.6-Nemotron-70b", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-19T15:20:18Z
--- base_model: - flammenai/Mahou-1.5-llama3.1-70B - mlabonne/Hermes-3-Llama-3.1-70B-lorablated - rombodawg/Rombos-LLM-V2.6-Nemotron-70b - nbeerbower/Llama-3.1-Nemotron-lorablated-70B - nbeerbower/Llama3.1-Gutenberg-Doppel-70B - flammenai/Llama3.1-Flammades-70B library_name: transformers tags: - mergekit - merge license: llama3.1 --- ![image.png](https://huggingface.co/nbeerbower/Llama-3.1-Saoirse-70B/resolve/main/saoirse.png?download=true) # Llama-3.1-Saoirse-70B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated) as a base. ### Models Merged The following models were included in the merge: * [flammenai/Mahou-1.5-llama3.1-70B](https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B) * [rombodawg/Rombos-LLM-V2.6-Nemotron-70b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Nemotron-70b) * [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) * [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B) * [flammenai/Llama3.1-Flammades-70B](https://huggingface.co/flammenai/Llama3.1-Flammades-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B - model: flammenai/Mahou-1.5-llama3.1-70B - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B - model: flammenai/Llama3.1-Flammades-70B - model: rombodawg/Rombos-LLM-V2.6-Nemotron-70b merge_method: model_stock base_model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated dtype: bfloat16 ```
Jios/cic2019_no_syn
Jios
2024-10-20T20:31:20Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-20T17:41:50Z
--- library_name: transformers license: mit base_model: roberta-large tags: - generated_from_trainer model-index: - name: cic2019_no_syn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cic2019_no_syn This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0030 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0098 | 1.0 | 1250 | 0.0022 | | 0.0055 | 2.0 | 2500 | 0.0037 | | 0.0078 | 3.0 | 3750 | 0.0074 | | 0.0 | 4.0 | 5000 | 0.0023 | | 0.0027 | 5.0 | 6250 | 0.0030 | ### Framework versions - Transformers 4.46.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/Isolde-12B-i1-GGUF
mradermacher
2024-10-20T20:30:07Z
8
1
transformers
[ "transformers", "gguf", "en", "base_model:arlineka/Isolde-12B", "base_model:quantized:arlineka/Isolde-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-20T18:34:43Z
--- base_model: arlineka/Isolde-12B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/arlineka/Isolde-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Isolde-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
faaany/my_awesome_mind_model
faaany
2024-10-20T20:27:56Z
161
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:minds14", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-10-20T20:26:49Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - minds14 metrics: - accuracy model-index: - name: my_awesome_mind_model results: - task: name: Audio Classification type: audio-classification dataset: name: minds14 type: minds14 config: en-US split: train args: en-US metrics: - name: Accuracy type: accuracy value: 0.035398230088495575 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_mind_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set: - Loss: 2.6610 - Accuracy: 0.0354 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 2.6409 | 0.0796 | | No log | 1.8667 | 7 | 2.6512 | 0.0531 | | 2.6357 | 2.9333 | 11 | 2.6602 | 0.0442 | | 2.6357 | 4.0 | 15 | 2.6632 | 0.0354 | | 2.6357 | 4.8 | 18 | 2.6638 | 0.0354 | | 2.6251 | 5.8667 | 22 | 2.6643 | 0.0354 | | 2.6251 | 6.9333 | 26 | 2.6623 | 0.0354 | | 2.6159 | 8.0 | 30 | 2.6610 | 0.0354 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.3.0a0+git3588582 - Datasets 3.0.1 - Tokenizers 0.20.1
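The usage sections above are left empty by the auto-generated card; as a hedged sketch, a fine-tuned wav2vec2 audio classifier like this one is typically called through the Transformers pipeline as below (the audio path is a placeholder, and given the reported accuracy of ~0.035 the predictions should be treated as essentially untrained).

```python
# Hedged sketch: run the checkpoint with the audio-classification pipeline.
# "sample.wav" is a placeholder path for a short speech clip, not part of the card.
from transformers import pipeline

classifier = pipeline("audio-classification", model="faaany/my_awesome_mind_model")
for prediction in classifier("sample.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```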
A790227/your-repo-name
A790227
2024-10-20T20:27:45Z
117
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-20T20:19:22Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: your-repo-name results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # your-repo-name This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2352 - Accuracy: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1851 | 1.0 | 1563 | 0.2352 | 0.9268 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
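Since the intended-use sections are empty, here is a minimal hedged inference sketch with the Transformers pipeline; the input sentence is illustrative and the label names depend on the undocumented training data.

```python
# Hedged sketch: run the fine-tuned DistilBERT classifier via the pipeline API.
# The example sentence is illustrative; label names are model-specific.
from transformers import pipeline

classifier = pipeline("text-classification", model="A790227/your-repo-name")
print(classifier("This movie was an absolute delight to watch."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```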
sd-dreambooth-library/imran-khan-804-person
sd-dreambooth-library
2024-10-20T20:13:00Z
35
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "en", "dataset:muneebable/imran-khan", "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-20T20:09:56Z
--- license: creativeml-openrail-m tags: - text-to-image datasets: - muneebable/imran-khan language: - en base_model: - stabilityai/stable-diffusion-2 pipeline_tag: text-to-image library_name: diffusers --- ### imran-khan-804 person on Stable Diffusion via Dreambooth #### model by muneebable This is the Stable Diffusion model fine-tuned on the imran-khan-804 person concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **imran-khan-804 person** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/619bb22466f2bc80c2029e08/oXh-eYqTxmtnMmmh5KLV_.png)
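Beyond the Colab notebooks linked above, a minimal local inference sketch with `diffusers` could look as follows; the prompt wording, fp16 dtype, and CUDA device are assumptions, while the instance prompt comes from the card.

```python
# Minimal sketch: load the DreamBooth-tuned checkpoint and prompt it with the
# instance prompt from the card. fp16/CUDA settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/imran-khan-804-person",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a portrait photo of imran-khan-804 person").images[0]
image.save("imran-khan-804.png")
```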
goku35855/speecht5_finetuned_marathi
goku35855
2024-10-20T20:08:51Z
76
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-20T18:29:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
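The getting-started section above is empty; for a SpeechT5 text-to-audio checkpoint such as this one, inference usually follows the sketch below. The HiFi-GAN vocoder repo, the sample Marathi sentence, and the zero speaker embedding are assumptions not documented by the author (real x-vector speaker embeddings give far better audio).

```python
# Hedged sketch for a SpeechT5 TTS checkpoint. The vocoder repo and the dummy
# speaker embedding are assumptions; neither is documented in the card.
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "goku35855/speecht5_finetuned_marathi"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

inputs = processor(text="नमस्कार, कसे आहात?", return_tensors="pt")  # illustrative Marathi text
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use real x-vectors in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 16 kHz waveform tensor; save it with soundfile, torchaudio, etc.
```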
swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF
swpranta
2024-10-20T19:53:53Z
61
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T19:53:45Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo license: llama3.2 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit base_model: meta-llama/Llama-3.2-1B-Instruct --- # swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -c 2048 ```
QuantFactory/magnum-v4-12b-GGUF
QuantFactory
2024-10-20T19:34:00Z
10
2
transformers
[ "transformers", "gguf", "chat", "text-generation", "en", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T17:52:32Z
--- license: other license_name: mrl language: - en tags: - chat pipeline_tag: text-generation library_name: transformers --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/magnum-v4-12b-GGUF This is quantized version of [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b) created using llama.cpp # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/-UC6YN1Gt3e1FDh8EqyaB.png) This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407). ## Prompting A typical input would look like this: ```py <s>[INST] SYSTEM MESSAGE USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST] ``` ## SillyTavern templates Below are Instruct and Context templates for use within SillyTavern. <details><summary>context template</summary> ```yaml default SillyTavern template works fine ``` </details><br> <details><summary>instruct template</summary> ```yaml default SillyTavern template works fine ``` </details><br> ## Axolotl config <details><summary>See axolotl config</summary> ```yaml base_model: mistralai/Mistral-Nemo-Instruct-2407 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer hub_model_id: anthracite-org/magnum-v4-12b-r2 hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system type: custommistralv3tekken - path: anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system type: custommistralv3tekken - path: anthracite-org/kalo-opus-instruct-3k-filtered-no-system type: custommistralv3tekken - path: anthracite-org/nopm_claude_writing_fixed type: custommistralv3tekken - path: anthracite-org/kalo_opus_misc_240827_no_system type: custommistralv3tekken - path: anthracite-org/kalo_misc_part2_no_system type: custommistralv3tekken #chat_template: chatml shuffle_merged_datasets: true #default_system_message: "You are an assistant that responds to the user." 
dataset_prepared_path: /workspace/data/magnum-12b-data val_set_size: 0.0 output_dir: /workspace/data/12b-fft-out sequence_len: 32768 sample_packing: true pad_to_sequence_len: true adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: 12b-magnum-fft wandb_entity: wandb_watch: wandb_name: v4-r2-attempt-01 wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 1 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.00001 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 40 evals_per_epoch: eval_table_size: eval_max_new_tokens: saves_per_epoch: 2 debug: deepspeed: deepspeed_configs/zero2.json weight_decay: 0.1 fsdp: fsdp_config: special_tokens: pad_token: <pad> ``` </details><br> ## Credits We'd like to thank Recursal / Featherless for sponsoring the compute for this train, Featherless has been hosting our Magnum models since the first 72 B and has given thousands of people access to our models and helped us grow. We would also like to thank all members of Anthracite who made this finetune possible. ## Datasets - [anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system) - [anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system) - [anthracite-org/kalo-opus-instruct-3k-filtered-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-3k-filtered-no-system) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) - [anthracite-org/kalo_opus_misc_240827_no_system](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827_no_system) - [anthracite-org/kalo_misc_part2_no_system](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2_no_system) ## Training The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
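For anyone building prompts by hand instead of through SillyTavern, a small sketch of the [INST]-style layout from the Prompting section is shown below; the messages are illustrative, and the tokenizer's own `apply_chat_template` remains the authoritative formatting.

```python
# Small sketch: assemble a single-turn prompt in the [INST] layout shown above.
# Message contents are illustrative; for multi-turn chats, append the assistant
# reply followed by "</s>" and further "[INST] ...[/INST]" blocks as in the template.
# The exact separator between system and user text (space vs newline) may differ
# from the tokenizer's chat template.
system_message = "You are a helpful assistant."  # illustrative
user_message = "Write a short greeting."         # illustrative

prompt = f"<s>[INST] {system_message}\n{user_message}[/INST]"
print(prompt)
```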
bartowski/magnum-v4-9b-GGUF
bartowski
2024-10-20T19:30:47Z
382
3
null
[ "gguf", "chat", "text-generation", "en", "base_model:anthracite-org/magnum-v4-9b", "base_model:quantized:anthracite-org/magnum-v4-9b", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T14:32:14Z
--- base_model: anthracite-org/magnum-v4-9b language: - en license: gemma pipeline_tag: text-generation tags: - chat quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of magnum-v4-9b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization. Original model: https://huggingface.co/anthracite-org/magnum-v4-9b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [magnum-v4-9b-f16.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-f16.gguf) | f16 | 18.49GB | false | Full F16 weights. | | [magnum-v4-9b-Q8_0.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q8_0.gguf) | Q8_0 | 9.83GB | false | Extremely high quality, generally unneeded but max available quant. | | [magnum-v4-9b-Q6_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q6_K_L.gguf) | Q6_K_L | 7.81GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [magnum-v4-9b-Q6_K.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q6_K.gguf) | Q6_K | 7.59GB | false | Very high quality, near perfect, *recommended*. | | [magnum-v4-9b-Q5_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q5_K_L.gguf) | Q5_K_L | 6.87GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [magnum-v4-9b-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q5_K_M.gguf) | Q5_K_M | 6.65GB | false | High quality, *recommended*. | | [magnum-v4-9b-Q5_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q5_K_S.gguf) | Q5_K_S | 6.48GB | false | High quality, *recommended*. | | [magnum-v4-9b-Q4_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_K_L.gguf) | Q4_K_L | 5.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [magnum-v4-9b-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_K_M.gguf) | Q4_K_M | 5.76GB | false | Good quality, default size for must use cases, *recommended*. | | [magnum-v4-9b-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_K_S.gguf) | Q4_K_S | 5.48GB | false | Slightly lower quality with more space savings, *recommended*. | | [magnum-v4-9b-Q4_0.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_0.gguf) | Q4_0 | 5.46GB | false | Legacy format, generally not worth using over similarly sized formats | | [magnum-v4-9b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_0_8_8.gguf) | Q4_0_8_8 | 5.44GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. | | [magnum-v4-9b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_0_4_8.gguf) | Q4_0_4_8 | 5.44GB | false | Optimized for ARM inference. 
Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. | | [magnum-v4-9b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q4_0_4_4.gguf) | Q4_0_4_4 | 5.44GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. | | [magnum-v4-9b-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q3_K_XL.gguf) | Q3_K_XL | 5.35GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [magnum-v4-9b-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-IQ4_XS.gguf) | IQ4_XS | 5.18GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [magnum-v4-9b-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q3_K_L.gguf) | Q3_K_L | 5.13GB | false | Lower quality but usable, good for low RAM availability. | | [magnum-v4-9b-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q3_K_M.gguf) | Q3_K_M | 4.76GB | false | Low quality. | | [magnum-v4-9b-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-IQ3_M.gguf) | IQ3_M | 4.49GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [magnum-v4-9b-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q3_K_S.gguf) | Q3_K_S | 4.34GB | false | Low quality, not recommended. | | [magnum-v4-9b-IQ3_XS.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-IQ3_XS.gguf) | IQ3_XS | 4.14GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [magnum-v4-9b-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q2_K_L.gguf) | Q2_K_L | 4.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [magnum-v4-9b-Q2_K.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-Q2_K.gguf) | Q2_K | 3.81GB | false | Very low quality but surprisingly usable. | | [magnum-v4-9b-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-v4-9b-GGUF/blob/main/magnum-v4-9b-IQ2_M.gguf) | IQ2_M | 3.43GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/magnum-v4-9b-GGUF --include "magnum-v4-9b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/magnum-v4-9b-GGUF --include "magnum-v4-9b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (magnum-v4-9b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
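Once a quant has been chosen and downloaded as described above, a hedged sketch of local inference with the llama-cpp-python bindings, using the ChatML prompt format from this card, might look as follows; the file path, context size, and sampling settings are assumptions.

```python
# Hedged sketch: run a downloaded quant with llama-cpp-python, using the ChatML
# prompt format from the card. File path and generation settings are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="./magnum-v4-9b-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```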
LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V1
LBK95
2024-10-20T19:29:13Z
12
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-10-20T11:03:41Z
--- base_model: meta-llama/Llama-2-7b-hf library_name: peft license: llama2 tags: - trl - dpo - generated_from_trainer model-index: - name: Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V1 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8779 - Rewards/chosen: -2.5074 - Rewards/rejected: -2.4835 - Rewards/accuracies: 0.5833 - Rewards/margins: -0.0240 - Logps/rejected: -187.5269 - Logps/chosen: -172.9323 - Logits/rejected: -0.3789 - Logits/chosen: -0.3801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6952 | 0.3016 | 87 | 0.6794 | -0.0405 | -0.0725 | 0.5833 | 0.0320 | -163.4170 | -148.2627 | 0.3515 | 0.3604 | | 0.6655 | 0.6031 | 174 | 0.6384 | 0.0391 | -0.0895 | 0.5 | 0.1287 | -163.5874 | -147.4663 | 0.3348 | 0.3431 | | 0.6246 | 0.9047 | 261 | 0.6568 | 0.1297 | 0.0077 | 0.5833 | 0.1220 | -162.6151 | -146.5603 | 0.2825 | 0.2904 | | 0.3939 | 1.2062 | 348 | 0.6986 | -0.2304 | -0.4082 | 0.5833 | 0.1778 | -166.7741 | -150.1618 | 0.1283 | 0.1335 | | 0.3329 | 1.5078 | 435 | 0.7227 | -0.5473 | -0.6512 | 0.5833 | 0.1039 | -169.2040 | -153.3306 | -0.0449 | -0.0420 | | 0.6015 | 1.8094 | 522 | 0.7035 | -1.0222 | -1.2334 | 0.5 | 0.2112 | -175.0264 | -158.0799 | -0.0987 | -0.0963 | | 0.0646 | 2.1109 | 609 | 0.7550 | -1.6915 | -1.8415 | 0.5 | 0.1500 | -181.1071 | -164.7728 | -0.2277 | -0.2271 | | 0.1952 | 2.4125 | 696 | 0.8210 | -2.1941 | -2.2483 | 0.5833 | 0.0542 | -185.1751 | -169.7991 | -0.3347 | -0.3356 | | 0.0774 | 2.7140 | 783 | 0.8779 | -2.5074 | -2.4835 | 0.5833 | -0.0240 | -187.5269 | -172.9323 | -0.3789 | -0.3801 | ### Framework versions - PEFT 0.13.2 - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
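The card does not show how to load the adapter; a hedged sketch with PEFT is given below, assuming access to the gated meta-llama/Llama-2-7b-hf base weights (the dtype, device map, and test prompt are also assumptions).

```python
# Hedged sketch: attach this DPO-trained adapter to its Llama-2 base model with PEFT.
# Requires access to the gated base weights; dtype/device/prompt are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The capital of Finland is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```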
mlx-community/Fimbulvetr-11B-v2
mlx-community
2024-10-20T19:27:23Z
11
0
mlx
[ "mlx", "safetensors", "llama", "en", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:finetune:Sao10K/Fimbulvetr-11B-v2", "license:cc-by-nc-4.0", "region:us" ]
null
2024-10-20T19:17:23Z
--- base_model: Sao10K/Fimbulvetr-11B-v2 language: - en license: cc-by-nc-4.0 tags: - mlx --- # mlx-community/Fimbulvetr-11B-v2 The Model [mlx-community/Fimbulvetr-11B-v2](https://huggingface.co/mlx-community/Fimbulvetr-11B-v2) was converted to MLX format from [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) using mlx-lm version **0.19.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Fimbulvetr-11B-v2") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
saurabhswami/HumaneArt
saurabhswami
2024-10-20T19:17:39Z
95
2
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2024-10-20T18:02:12Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/1.jpg - text: '-' output: url: images/2.jpg - text: '-' output: url: images/3.jpg - text: '-' output: url: images/4.jpg - text: '-' output: url: images/5.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: HumaneArt license: apache-2.0 --- # HumaneArt <Gallery /> ## Model description Flux.1 (dev) LoRA trained on a simple line illustration style with white background. Please share feedback if you use this :) Use the base humaneart.safetensors to apply the LoRA; lower-strength adaptations at 4, 8 & 12 epochs are also available. Include the keyword in the prompt to activate the LoRA: HumaneArt. A LoRA trained on Humane-folks illustrations with permission (thanks Pragyan!): https://humane-folks.framer.website/ Original illustrations by Pragyan Shukla: https://pragyanshukla.framer.website/ ## Trigger words You should use `HumaneArt` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/saurabhswami/HumaneArt/tree/main) them in the Files & versions tab.
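A hedged inference sketch with `diffusers` is shown below; loading FLUX.1-dev requires accepting its license, the prompt and memory-offloading choices are assumptions, and `humaneart.safetensors` is the base weight file named in the description above.

```python
# Hedged sketch: apply the HumaneArt LoRA on top of FLUX.1-dev with diffusers.
# Requires access to black-forest-labs/FLUX.1-dev; prompt and memory settings are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("saurabhswami/HumaneArt", weight_name="humaneart.safetensors")
pipe.enable_model_cpu_offload()  # helps fit the pipeline on smaller GPUs

image = pipe(
    "HumaneArt, a person watering a houseplant, simple line illustration, white background",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("humaneart_sample.png")
```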
mlx-community/Fimbulvetr-11B-v2-8bit
mlx-community
2024-10-20T19:16:47Z
10
0
mlx
[ "mlx", "safetensors", "llama", "en", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:quantized:Sao10K/Fimbulvetr-11B-v2", "license:cc-by-nc-4.0", "8-bit", "region:us" ]
null
2024-10-20T19:11:46Z
--- base_model: Sao10K/Fimbulvetr-11B-v2 language: - en license: cc-by-nc-4.0 tags: - mlx --- # mlx-community/Fimbulvetr-11B-v2-8bit The Model [mlx-community/Fimbulvetr-11B-v2-8bit](https://huggingface.co/mlx-community/Fimbulvetr-11B-v2-8bit) was converted to MLX format from [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) using mlx-lm version **0.19.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Fimbulvetr-11B-v2-8bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
bartowski/magnum-v4-12b-GGUF
bartowski
2024-10-20T19:15:39Z
7,606
4
null
[ "gguf", "chat", "text-generation", "en", "base_model:anthracite-org/magnum-v4-12b", "base_model:quantized:anthracite-org/magnum-v4-12b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T14:32:02Z
--- base_model: anthracite-org/magnum-v4-12b language: - en license: other license_name: mrl pipeline_tag: text-generation tags: - chat quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of magnum-v4-12b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization. Original model: https://huggingface.co/anthracite-org/magnum-v4-12b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <s>[INST]{prompt}[/INST] ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [magnum-v4-12b-f16.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-f16.gguf) | f16 | 24.50GB | false | Full F16 weights. | | [magnum-v4-12b-Q8_0.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. | | [magnum-v4-12b-Q6_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [magnum-v4-12b-Q6_K.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. | | [magnum-v4-12b-Q5_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [magnum-v4-12b-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. | | [magnum-v4-12b-Q5_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. | | [magnum-v4-12b-Q4_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [magnum-v4-12b-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for must use cases, *recommended*. | | [magnum-v4-12b-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [magnum-v4-12b-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. | | [magnum-v4-12b-Q4_0.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_0.gguf) | Q4_0 | 7.09GB | false | Legacy format, generally not worth using over similarly sized formats | | [magnum-v4-12b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_0_8_8.gguf) | Q4_0_8_8 | 7.07GB | false | Optimized for ARM inference. 
Requires 'sve' support (see link below). *Don't use on Mac or Windows*. | | [magnum-v4-12b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_0_4_8.gguf) | Q4_0_4_8 | 7.07GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. | | [magnum-v4-12b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q4_0_4_4.gguf) | Q4_0_4_4 | 7.07GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. | | [magnum-v4-12b-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [magnum-v4-12b-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. | | [magnum-v4-12b-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. | | [magnum-v4-12b-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [magnum-v4-12b-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. | | [magnum-v4-12b-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [magnum-v4-12b-IQ3_XS.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [magnum-v4-12b-Q2_K.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. | | [magnum-v4-12b-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [magnum-v4-12b-IQ2_S.gguf](https://huggingface.co/bartowski/magnum-v4-12b-GGUF/blob/main/magnum-v4-12b-IQ2_S.gguf) | IQ2_S | 4.14GB | false | Low quality, uses SOTA techniques to be usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/magnum-v4-12b-GGUF --include "magnum-v4-12b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/magnum-v4-12b-GGUF --include "magnum-v4-12b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (magnum-v4-12b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
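For a quick test outside LM Studio, a minimal sketch with `huggingface_hub` and `llama-cpp-python` might look like the following; the chosen quant, context size, and sampling settings are only examples, not recommendations:

```python
# Sketch only: assumes `pip install llama-cpp-python huggingface_hub` and enough RAM/VRAM for the chosen quant.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Pick a quant that fits your memory budget (see the table above); Q4_K_M is just an example.
model_path = hf_hub_download(
    repo_id="bartowski/magnum-v4-12b-GGUF",
    filename="magnum-v4-12b-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers on GPU builds

# Prompt format from above: <s>[INST]{prompt}[/INST] (the BOS token is normally added by the tokenizer).
output = llm("[INST]Write a short scene set in a rainy harbor town.[/INST]", max_tokens=256, temperature=0.8)
print(output["choices"][0]["text"])
```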
mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF
mradermacher
2024-10-20T18:44:08Z
120
3
transformers
[ "transformers", "gguf", "chat", "en", "de", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-20T16:29:55Z
--- base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2 language: - en - de library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF
mradermacher
2024-10-20T18:44:07Z
83
1
transformers
[ "transformers", "gguf", "chat", "en", "de", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-19T09:16:45Z
--- base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2 language: - en - de library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
exploer/tomasbily04
exploer
2024-10-20T18:42:53Z
12
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-20T18:42:39Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/tomasbily_000675_00_20241020180402.png text: tomasbily is standing in nature by the river wearing a nike t-shirt and jeans. - output: url: sample/tomasbily_000675_01_20241020180415.png text: tomasbily stands in nature by the river wearing shirts and a coat and jeans. base_model: black-forest-labs/FLUX.1-dev instance_prompt: tomasbily license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # TomasBily A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `tomasbily` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
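If you would rather use the diffusers library than the UIs listed above, a rough loading sketch could look like this; it assumes the Fluxgym-exported LoRA safetensors file sits in this repository and that you have accepted the FLUX.1-dev license (pass `weight_name="..."` if the repo holds more than one weights file):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("exploer/tomasbily04")  # add weight_name="..." if the default file name is not found
pipe.to("cuda")

# Include the trigger word so the trained subject is applied.
prompt = "tomasbily is standing in nature by the river wearing a nike t-shirt and jeans"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("tomasbily.png")
```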
zhangyi617/sd_naruto_lora_pgd_2e
zhangyi617
2024-10-20T18:36:15Z
5
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-10-20T18:17:59Z
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - zhangyi617/sd_naruto_lora_pgd_2e

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the zhangyi617/naruto_721_train dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

A minimal loading sketch is provided at the end of this card.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
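Pending the official snippet above, here is a minimal sketch; it assumes the trainer saved `pytorch_lora_weights.safetensors` at the repository root, as the diffusers LoRA script normally does, and the prompt is an arbitrary example:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adapter trained in this repository.
pipe.load_lora_weights("zhangyi617/sd_naruto_lora_pgd_2e")

image = pipe("a naruto character with green eyes and red hair", num_inference_steps=30).images[0]
image.save("naruto_sample.png")
```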
prithivMLmods/Flux-Dev-Real-Anime-LoRA
prithivMLmods
2024-10-20T18:29:09Z
1,215
27
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "flux", "dev", "anime", "real-anime", "art", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-10-20T06:21:34Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- flux
- dev
- anime
- real-anime
- art
widget:
- text: >-
    A photo of an anime model wearing a trendy oversized sweater and denim shorts,
    sitting on a swing in a park with hands gripping the chains. The golden hour
    sunlight filters through the trees, creating a magical atmosphere. Fashion
    photography, high resolution, 35mm lens, f/2.8, natural lighting, global
    illumination. --ar 85:128 --v 6.0 --style raw.
  output:
    url: images/000.png
- text: >-
    A photo of the anime model wearing a white bodysuit and beige trench coat,
    posing in front of a train station with hands on head, soft light, sunset,
    fashion photography, high resolution, 35mm lens, f/22, natural lighting,
    global illumination. --ar 85:128 --v 6.0 --style raw
  output:
    url: images/111.png
- text: >-
    A photo of an anime model wearing a sleek black jumpsuit and a stylish
    oversized denim jacket, posing confidently in front of a modern skyscraper
    with hands on hips. Soft light filters through the buildings, creating a warm
    glow during golden hour. Fashion photography, high resolution, 50mm lens,
    f/2.8, natural lighting, global illumination. --ar 85:128 --v 6.0 --style raw.
  output:
    url: images/222.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Real Anime
license: creativeml-openrail-m
---
# Flux-Dev-Real-Anime-LoRA

<Gallery />

**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**

## Setting Up
```
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_repo = "prithivMLmods/Flux-Dev-Real-Anime-LoRA"
trigger_word = "Real Anime"  # Leave trigger_word blank if not used; include it in your prompt to apply the style.
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
## app.py
```
import gradio as gr
gr.load("prithivMLmods/Flux-Dev-Real-Anime-LoRA").launch()
```
## pythonproject.py
```
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
import gradio as gr

def image_generator(prompt):
    pass

interface = gr.Interface(fn=image_generator, inputs="text", outputs="image")

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
app = gr.mount_gradio_app(app, interface, path="/")
```
## App File Structure
```
/project-root/
│
├── .gitattributes
├── README.md
├── app.py
├── pythonproject.py
```
## Trigger words

You should use `Real Anime` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/prithivMLmods/Flux-Dev-Real-Anime-LoRA/tree/main) them in the Files & versions tab.
GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa
GuelGaMesh01
2024-10-20T18:26:44Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "dataset:Amod/mental_health_counseling_conversations", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-19T06:26:43Z
--- datasets: - Amod/mental_health_counseling_conversations library_name: transformers license: mit --- # Model Card Summary <!-- Provide a quick summary of what the model is/does. --> This model is a fine-tuned version of gemma-2b-it for mental health counseling conversations. It was fine-tuned on the Amod/Mental Health Counseling Conversations dataset, which contains dialogues related to mental health counseling. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Miguel Flores - **Model type:** Causal Language Model (based on transformers) - **Language(s) (NLP):** English - **License:** MIT License - **Finetuned from model gemma-2b-it:** gemma-2b-it, which is a base model fine-tuned for mental health-related queries. ## Use Cases <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is fine-tuned for generating responses related to mental health counseling tasks. It can be used for providing suggestions, conversation starters, or follow-ups in mental health scenarios. ### Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> This model can be adapted for use in more specific counseling-related tasks, or in applications where generating mental health-related dialogue is necessary. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> The model is not intended to replace professional counseling. It should not be used for real-time crisis management or any situation requiring direct human intervention. Use in highly critical or urgent care situations is out of scope. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The model was trained on mental health-related dialogues, but it may still generate biased or inappropriate responses. Users should exercise caution when interpreting or acting on the model's outputs, particularly in sensitive scenarios. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> The model should not be used as a replacement for professional mental health practitioners. Users should carefully evaluate generated responses in the context of their use case. ## How to Get Started with the Model Use the code below to get started with the model. 
```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa") model = AutoModelForCausalLM.from_pretrained("GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa") ``` ### Example inference ```python inputs = tokenizer("How can I manage anxiety better?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model was trained on the Amod/Mental Health Counseling Conversations dataset, which consists of mental health dialogues focused on counseling situations. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> The model was fine-tuned using LoRA (Low-Rank Adaptation) with the following hyperparameters: Batch Size: 1 Gradient Accumulation Steps: 4 Learning Rate: 2e-4 Epochs: 3 Max Sequence Length: 2500 tokens Optimizer: paged_adamw_8bit ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Testing Data <!-- This should link to a Dataset Card if possible. --> The model was evaluated using a split from the training data, specifically a 10% test split of the original training dataset. #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> The following metrics were used during the training and evaluation process: - **Training Loss**: The training loss was tracked during training to monitor how well the model was learning from the data. It decreased throughout the epochs. - **Semantic Similarity**: Semantic similarity was employed as the primary metric to assess the model’s ability to generate contextually relevant and meaningful responses. Since the dataset involves conversational context, particularly in the sensitive area of mental health counseling, it was crucial to evaluate how well the model understands and retains the intent and meaning behind the input rather than merely focusing on fluency or token-level prediction. - **Perplexity**: Perplexity was used as a metric to evaluate the model's ability to generate coherent and fluent text responses. The model was evaluated on a subset of the test data, and both non-finetuned and finetuned perplexities were compared.
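To make the perplexity comparison above concrete, a rough evaluation sketch follows; the sample text is illustrative and not drawn from the actual evaluation split:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(mean negative log-likelihood of the tokens).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sample = "It sounds like these worries have been weighing on you. Can you tell me more about when they started?"
print(f"Perplexity: {perplexity(sample):.2f}")
```

Running the same function with the base gemma-2b-it checkpoint gives the non-finetuned reference value for comparison.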
bartowski/magnum-v4-27b-GGUF
bartowski
2024-10-20T18:20:25Z
514
2
null
[ "gguf", "chat", "text-generation", "en", "base_model:anthracite-org/magnum-v4-27b", "base_model:quantized:anthracite-org/magnum-v4-27b", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-10-20T14:31:28Z
--- base_model: anthracite-org/magnum-v4-27b language: - en license: gemma pipeline_tag: text-generation tags: - chat quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of magnum-v4-27b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization. Original model: https://huggingface.co/anthracite-org/magnum-v4-27b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [magnum-v4-27b-f16.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/tree/main/magnum-v4-27b-f16) | f16 | 54.46GB | true | Full F16 weights. | | [magnum-v4-27b-Q8_0.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q8_0.gguf) | Q8_0 | 28.94GB | false | Extremely high quality, generally unneeded but max available quant. | | [magnum-v4-27b-Q6_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q6_K_L.gguf) | Q6_K_L | 22.63GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [magnum-v4-27b-Q6_K.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q6_K.gguf) | Q6_K | 22.34GB | false | Very high quality, near perfect, *recommended*. | | [magnum-v4-27b-Q5_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q5_K_L.gguf) | Q5_K_L | 19.69GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [magnum-v4-27b-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q5_K_M.gguf) | Q5_K_M | 19.41GB | false | High quality, *recommended*. | | [magnum-v4-27b-Q5_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q5_K_S.gguf) | Q5_K_S | 18.88GB | false | High quality, *recommended*. | | [magnum-v4-27b-Q4_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q4_K_L.gguf) | Q4_K_L | 16.93GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [magnum-v4-27b-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q4_K_M.gguf) | Q4_K_M | 16.65GB | false | Good quality, default size for must use cases, *recommended*. | | [magnum-v4-27b-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q4_K_S.gguf) | Q4_K_S | 15.74GB | false | Slightly lower quality with more space savings, *recommended*. | | [magnum-v4-27b-Q4_0.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q4_0.gguf) | Q4_0 | 15.68GB | false | Legacy format, generally not worth using over similarly sized formats | | [magnum-v4-27b-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ4_XS.gguf) | IQ4_XS | 14.81GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. 
| | [magnum-v4-27b-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q3_K_XL.gguf) | Q3_K_XL | 14.81GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [magnum-v4-27b-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q3_K_L.gguf) | Q3_K_L | 14.52GB | false | Lower quality but usable, good for low RAM availability. | | [magnum-v4-27b-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q3_K_M.gguf) | Q3_K_M | 13.42GB | false | Low quality. | | [magnum-v4-27b-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ3_M.gguf) | IQ3_M | 12.45GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [magnum-v4-27b-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q3_K_S.gguf) | Q3_K_S | 12.17GB | false | Low quality, not recommended. | | [magnum-v4-27b-IQ3_XS.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ3_XS.gguf) | IQ3_XS | 11.55GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [magnum-v4-27b-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q2_K_L.gguf) | Q2_K_L | 10.74GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [magnum-v4-27b-Q2_K.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-Q2_K.gguf) | Q2_K | 10.45GB | false | Very low quality but surprisingly usable. | | [magnum-v4-27b-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ2_M.gguf) | IQ2_M | 9.40GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [magnum-v4-27b-IQ2_S.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ2_S.gguf) | IQ2_S | 8.65GB | false | Low quality, uses SOTA techniques to be usable. | | [magnum-v4-27b-IQ2_XS.gguf](https://huggingface.co/bartowski/magnum-v4-27b-GGUF/blob/main/magnum-v4-27b-IQ2_XS.gguf) | IQ2_XS | 8.40GB | false | Low quality, uses SOTA techniques to be usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/magnum-v4-27b-GGUF --include "magnum-v4-27b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/magnum-v4-27b-GGUF --include "magnum-v4-27b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (magnum-v4-27b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. 
Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
Javitron4257/Dog-Cat-Identificator
Javitron4257
2024-10-20T18:20:08Z
5
0
null
[ "pytorch", "vit", "vision", "image-classification", "dataset:omarques/autotrain-data-dogs-and-cats", "license:cc", "region:us" ]
image-classification
2024-10-20T09:42:33Z
--- license: cc tags: - vision - image-classification datasets: - omarques/autotrain-data-dogs-and-cats ---
MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF
MaziyarPanahi
2024-10-20T18:04:23Z
118
1
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:nvidia/Nemotron-Mini-4B-Instruct", "base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct", "region:us", "conversational" ]
text-generation
2024-10-20T17:42:39Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: Nemotron-Mini-4B-Instruct-GGUF base_model: nvidia/Nemotron-Mini-4B-Instruct inference: false model_creator: nvidia pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF) - Model creator: [nvidia](https://huggingface.co/nvidia) - Original model: [nvidia/Nemotron-Mini-4B-Instruct](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct) ## Description [MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF) contains GGUF format model files for [nvidia/Nemotron-Mini-4B-Instruct](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
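For example, with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) from the list above, loading one of the GGUF files might look like this; the quant filename is illustrative, so check the Files tab for the exact names:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF",
    filename="Nemotron-Mini-4B-Instruct.Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=path, n_ctx=4096)

# create_chat_completion applies the chat template stored in the GGUF metadata, if present.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three tips for writing clear documentation."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```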
lukasgrouleff/distilbert-base-uncased-distilled-clinc-finalmodel-wdsearch
lukasgrouleff
2024-10-20T18:02:37Z
117
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-20T18:02:25Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc-finalmodel-wdsearch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc-finalmodel-wdsearch This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2447 - Accuracy: 0.9471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 477 | 1.9574 | 0.7768 | | 2.896 | 2.0 | 954 | 0.8237 | 0.8929 | | 1.2965 | 3.0 | 1431 | 0.4508 | 0.9284 | | 0.5449 | 4.0 | 1908 | 0.3320 | 0.9416 | | 0.2938 | 5.0 | 2385 | 0.2884 | 0.9435 | | 0.1974 | 6.0 | 2862 | 0.2665 | 0.9445 | | 0.1567 | 7.0 | 3339 | 0.2548 | 0.9487 | | 0.1327 | 8.0 | 3816 | 0.2530 | 0.9494 | | 0.1233 | 9.0 | 4293 | 0.2508 | 0.9468 | | 0.1157 | 10.0 | 4770 | 0.2457 | 0.9468 | | 0.1116 | 11.0 | 5247 | 0.2430 | 0.9474 | | 0.109 | 12.0 | 5724 | 0.2447 | 0.9471 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
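For reference, a minimal inference sketch; the example utterance is arbitrary, and the returned label names come from the fine-tuned config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lukasgrouleff/distilbert-base-uncased-distilled-clinc-finalmodel-wdsearch",
)

# Returns the predicted intent label and its confidence score.
print(classifier("How do I transfer money to my savings account?"))
```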
bunnycore/Llama-3.2-3B-All-Mix
bunnycore
2024-10-20T17:58:14Z
158
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:Lyte/Llama-3.2-3B-Overthinker", "base_model:merge:Lyte/Llama-3.2-3B-Overthinker", "base_model:bunnycore/Llama-3.2-3B-Pure-RP", "base_model:merge:bunnycore/Llama-3.2-3B-Pure-RP", "base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated", "base_model:merge:huihui-ai/Llama-3.2-3B-Instruct-abliterated", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-20T16:50:26Z
--- base_model: - bunnycore/Llama-3.2-3B-Pure-RP - huihui-ai/Llama-3.2-3B-Instruct-abliterated - Lyte/Llama-3.2-3B-Overthinker library_name: transformers tags: - mergekit - merge --- ## Model Overview The Llama-3.2-3B-All-Mix model is a merged language model that combines the strengths of multiple models using the TIES merge method. This model is designed to provide a balanced performance across various tasks and domains. ### Capabilities * The Llama-3.2-3B-All-Mix model is capable of: - Generating human-like text - Conversational dialogue - Roleplay - Long-form reasoning - Answering questions - Summarizing text ## The following models were included in the merge: - bunnycore/Llama-3.2-3B-Pure-RP: This model is particularly well-suited for roleplay tasks, allowing for more engaging and interactive conversations. - Lyte/Llama-3.2-3B-Overthinker: This model excels at long-form reasoning and is capable of generating more in-depth and thoughtful responses. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base. ### Models Merged The following models were included in the merge: * [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP) * [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Lyte/Llama-3.2-3B-Overthinker parameters: density: 0.5 weight: 0.5 - model: bunnycore/Llama-3.2-3B-Pure-RP parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated parameters: normalize: false int8_mask: true dtype: float16 ```
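A minimal chat-style usage sketch with transformers (the sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Llama-3.2-3B-All-Mix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Walk me through your reasoning: why does ice float on water?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```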
mav23/sqlcoder-70b-alpha-GGUF
mav23
2024-10-20T17:56:09Z
134
0
transformers
[ "transformers", "gguf", "text-generation", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-10-20T09:09:42Z
--- license: cc-by-sa-4.0 library_name: transformers pipeline_tag: text-generation --- # Model Card for SQLCoder-70B-Alpha A capable large language model for natural language to SQL generation. Outperforms all generalist models (including GPT-4) on text to SQL. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/3BVMV2z6FTEEPF1hJ2qu1.png) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [Defog, Inc](https://defog.ai) - **Model type:** [Text to SQL] - **License:** [CC-by-SA-4.0] - **Finetuned from model:** [CodeLlama-70B] ### Model Sources [optional] - [**HuggingFace:**](https://huggingface.co/defog/sqlcoder-70b-alpha) - [**GitHub:**](https://github.com/defog-ai/sqlcoder) - [**Demo:**](https://defog.ai/sqlcoder-demo/) ## Uses This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool. This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access. ## How to Get Started with the Model Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model. ## Evaluation This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL based evaluation framework developed by Defog for testing and alignment of model capabilities. You can read more about the methodology behind SQLEval [here](https://defog.ai/blog/open-sourcing-sqleval/). ### Results We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category. | | date | group_by | order_by | ratio | join | where | | ------------- | ---- | -------- | -------- | ----- | ---- | ----- | | sqlcoder-70b | 96 | 91.4 | 97.1 | 85.7 | 97.1 | 91.4 | | sqlcoder-34b | 80 | 94.3 | 85.7 | 77.1 | 85.7 | 80 | | gpt-4 | 64 | 94.3 | 88.6 | 74.2 | 85.7 | 80 | | sqlcoder2-15b | 76 | 80 | 77.1 | 60 | 77.1 | 77.1 | | sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 | | gpt-3.5 | 68 | 77.1 | 74.2 | 34.3 | 65.7 | 71.4 | | claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 | ## Using SQLCoder ## Model Card Authors - [Rishabh Srivastava](https://twitter.com/rishdotblog) - [Wendy Aw](https://www.linkedin.com/in/wendyaw/) - [Wong Jing Ping](https://www.linkedin.com/in/jing-ping-wong/) ## Model Card Contact Contact us on X at [@defogdata](https://twitter.com/defogdata), or on email at [[email protected]](mailto:[email protected])
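Since this repository hosts GGUF conversions, one way to try the model locally is with llama-cpp-python; the quant filename below is hypothetical (check the Files tab), and the prompt is a simplified stand-in for the full template built by the linked inference script:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename; substitute the quant you actually want from this repo.
path = hf_hub_download("mav23/sqlcoder-70b-alpha-GGUF", "sqlcoder-70b-alpha.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)

# SQLCoder expects the question together with the database schema in the prompt.
prompt = """### Task
Generate a SQL query to answer this question: How many customers placed an order in 2023?

### Database Schema
CREATE TABLE orders (order_id INT, customer_id INT, order_date DATE);

### SQL
"""
print(llm(prompt, max_tokens=200, temperature=0.0)["choices"][0]["text"])
```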
QuantFactory/magnum-v4-22b-GGUF
QuantFactory
2024-10-20T17:54:24Z
16
2
transformers
[ "transformers", "gguf", "chat", "text-generation", "en", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-20T16:37:27Z
--- license: other license_name: mrl language: - en tags: - chat pipeline_tag: text-generation library_name: transformers --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/magnum-v4-22b-GGUF This is quantized version of [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) created using llama.cpp # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/WvQykcYiK13x7sMI93T6e.png) This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409). ## Prompting A typical input would look like this: ```py <s>[INST] SYSTEM MESSAGE USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST] ``` ## SillyTavern templates Below are Instruct and Context templates for use within SillyTavern. <details><summary>context template</summary> ```yaml default SillyTavern template works fine ``` </details><br> <details><summary>instruct template</summary> ```yaml default SillyTavern template works fine ``` </details><br> ## Axolotl config <details><summary>See axolotl config</summary> ```yaml base_model: /workspace/models/Mistral-Small-Instruct-2409 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer hub_model_id: anthracite-org/magnum-v4-22b-r4 hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true #liger_cross_entropy: true liger_fused_linear_cross_entropy: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system type: custommistralv2v3 - path: anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system type: custommistralv2v3 - path: anthracite-org/kalo-opus-instruct-3k-filtered-no-system type: custommistralv2v3 - path: anthracite-org/nopm_claude_writing_fixed type: custommistralv2v3 - path: anthracite-org/kalo_opus_misc_240827_no_system type: custommistralv2v3 - path: anthracite-org/kalo_misc_part2_no_system type: custommistralv2v3 #chat_template: mistral_v2v3 shuffle_merged_datasets: true #default_system_message: "You are an assistant that responds to the user." 
dataset_prepared_path: /workspace/data/magnum-22b-data val_set_size: 0.0 output_dir: /workspace/data/22b-r4-fft-out sequence_len: 32768 sample_packing: true pad_to_sequence_len: true adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: 22b-magnum-fft wandb_entity: wandb_watch: wandb_name: v4-r4-attempt-01 wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 1 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000004 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 40 evals_per_epoch: eval_table_size: eval_max_new_tokens: saves_per_epoch: 2 debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 fsdp: fsdp_config: special_tokens: ``` </details><br> ## Credits We'd like to thank Recursal / Featherless for sponsoring the compute for this train, Featherless has been hosting our Magnum models since the first 72 B and has given thousands of people access to our models and helped us grow. We would also like to thank all members of Anthracite who made this finetune possible. ## Datasets - [anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system) - [anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system) - [anthracite-org/kalo-opus-instruct-3k-filtered-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-3k-filtered-no-system) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) - [anthracite-org/kalo_opus_misc_240827_no_system](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827_no_system) - [anthracite-org/kalo_misc_part2_no_system](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2_no_system) ## Training The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA
prithivMLmods
2024-10-20T17:49:13Z
91
10
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "Snoopy Charlie Brown", "flux", "cartoon", "flux-dev", "art", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-09-08T12:31:24Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- Snoopy Charlie Brown
- flux
- cartoon
- flux-dev
- art
widget:
- text: 'Snoopy and Charlie Brown hugging on a grassy field with a tree in the background, under a light blue sky with wispy clouds.'
  output:
    url: images/000.png
- text: 'Snoopy and Charlie Brown hugging under a starry night sky, with a tree in the background and a grassy field illuminated by moonlight.'
  output:
    url: images/111.png
- text: 'Charlie Brown and Snoopy, clad in space suits, stand near their shuttle, mesmerized by a colossal black hole pulling in light from the distant galaxy.'
  output:
    url: images/222.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Snoopy Charlie Brown
license: creativeml-openrail-m
---
# Snoopy-Charlie-Brown-Flux-LoRA

<Gallery />

**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**

## Model description

**prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA**

Image Processing Parameters

| Parameter                 | Value     | Parameter                 | Value      |
|---------------------------|-----------|---------------------------|------------|
| LR Scheduler              | constant  | Noise Offset              | 0.03       |
| Optimizer                 | AdamW8bit | Multires Noise Discount   | 0.1        |
| Network Dim               | 64        | Multires Noise Iterations | 10         |
| Network Alpha             | 32        | Repeat & Steps            | 25 & 1.7K+ |
| Epoch                     | 20        | Save Every N Epochs       | 1          |

Labeling: florence2-en (natural language & English)

Total Images Used for Training: 100+ [ Hi-RES ] & more

## Trigger prompts

A black ford mustang parked in the parking lot, in the style of futurism influence, uhd image, furaffinity, focus, street photography, thin steel forms, 32k uhd --ar 2:3 --v 5

Ferrari car f3 458 tt, in the style of liam wong, fujifilm x-t4, multiple exposure, tsubasa nakai, uhd image, pinturicchio, crimson --ar 16:9 --v 5.2

Bugatti Veyron in cobalt blue metallic, high detail, octane render, 8k

| Parameter       | Value                                                                  |
|-----------------|------------------------------------------------------------------------|
| Prompt          | Bugatti Veyron in cobalt blue metallic, high detail, octane render, 8k |
| Sampler         | euler                                                                  |

## Setting Up
```
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_repo = "prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA"
trigger_word = "Snoopy Charlie Brown"  # Leave trigger_word blank if not used; include it in your prompt to apply the style.
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```

## Trigger words

You should use `Snoopy Charlie Brown` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA/tree/main) them in the Files & versions tab.
linoyts/linoy_v_750_no_captions
linoyts
2024-10-20T17:37:55Z
5
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-20T17:18:47Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora instance_prompt: a <s0> woman widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - linoyts/linoy_v_750_no_captions <Gallery /> ## Model description These are linoyts/linoy_v_750_no_captions DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md). Was LoRA for the text encoder enabled? False. Pivotal tuning was enabled: True. ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0>` in your prompt ## Download model [Download the *.safetensors LoRA](linoyts/linoy_v_750_no_captions/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('linoyts/linoy_v_750_no_captions', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='linoyts/linoy_v_750_no_captions', filename='linoy_v_750_no_captions_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) image = pipeline('a <s0> woman').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
tdnathmlenthusiast/speecht5_finetuned_voice_dataset_bn_v_4
tdnathmlenthusiast
2024-10-20T17:37:53Z
80
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-20T16:22:54Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: speecht5_finetuned_voice_dataset_bn_v_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voice_dataset_bn_v_4 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 125 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.6302 | 12.1212 | 250 | 0.5979 | | 0.5895 | 24.2424 | 500 | 0.5645 | | 0.5658 | 36.3636 | 750 | 0.5638 | | 0.5495 | 48.4848 | 1000 | 0.5609 | | 0.5541 | 60.6061 | 1250 | 0.5443 | | 0.5431 | 72.7273 | 1500 | 0.5522 | | 0.5321 | 84.8485 | 1750 | 0.5406 | | 0.5321 | 96.9697 | 2000 | 0.5515 | | 0.5267 | 109.0909 | 2250 | 0.5674 | | 0.5334 | 121.2121 | 2500 | 0.5607 | | 0.5202 | 133.3333 | 2750 | 0.5586 | | 0.52 | 145.4545 | 3000 | 0.5560 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
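Since the card does not yet include an inference example, below is a hedged sketch of how this fine-tuned SpeechT5 checkpoint might be run with the Transformers API. The speaker-embedding source (`Matthijs/cmu-arctic-xvectors`) and the sample text are assumptions borrowed from the standard SpeechT5 examples, not details from this card, and the snippet assumes the processor files were saved alongside the model.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "tdnathmlenthusiast/speecht5_finetuned_voice_dataset_bn_v_4"

# If the processor was not pushed with the model, fall back to "microsoft/speecht5_tts".
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: a CMU ARCTIC x-vector as the speaker embedding (any 512-dim x-vector plugs in the same way).
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Replace this with your input text.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)

sf.write("output.wav", speech.numpy(), samplerate=16000)
```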
Nucha/Nucha_ITSkillNER_BERT
Nucha
2024-10-20T16:53:36Z
35
1
null
[ "safetensors", "bert", "Skills", "NER", "SkillNER", "BERT", "token-classification", "en", "base_model:Nucha/Nucha_ITSkillNER_BERT", "base_model:finetune:Nucha/Nucha_ITSkillNER_BERT", "license:mit", "region:us" ]
token-classification
2024-10-07T08:52:57Z
---
license:
- mit
language:
- en
base_model:
- Nucha/Nucha_SkillNER_BERT
tags:
- Skills
- NER
- SkillNER
- BERT
widget:
- text: "Example text used for testing"
pipeline_tag: token-classification
---

# Computing Skill NER

**Nucha_SkillNER_BERT** is a Named Entity Recognition (NER) model fine-tuned specifically to recognize skill-related entities in text, covering both hard and soft skills. The model is built on a BERT-based architecture, allowing it to use contextual understanding for accurate extraction of skill-related information. It is particularly useful for analyzing job descriptions, resumes, or any other text where skills are explicitly mentioned.

The model supports the recognition of multiple skill categories, including technical skills (e.g., programming languages, software tools) and soft skills (e.g., communication, leadership). It is well suited to applications in recruitment, talent management, and skill-based data analysis.

## How to Use

You can use the **Nucha/Nucha_SkillNER_BERT** model for Named Entity Recognition (NER) by loading it directly with Hugging Face's **transformers** library. Below is an example of how to use the model with the **pipeline** API for entity extraction.

### Step-by-Step Example:

```python
# Libraries
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the pre-trained model and tokenizer
model_name = "Nucha/Nucha_SkillNER_BERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Create a NER pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Sample text
text = "I have experience in Python, JavaScript, and cloud technologies like AWS and Azure."

# Run the pipeline on the text
ner_results = ner_pipeline(text)

# Display the results
for entity in ner_results:
    print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Score: {entity['score']:.4f}")
```

### Output Explanation:

- Entity: The word or phrase identified in the text that matches one of the model's recognized categories.
- Label: The classification label assigned to the entity, such as **HSKILL** (hard skill) or **SSKILL** (soft skill).
- Score: The model's confidence in the identified entity, represented as a floating-point number.

## Demo

The **Nucha/Nucha_SkillNER_BERT** model is designed for Named Entity Recognition (NER) specifically targeting skill-related entities in text. This demo allows users to input any text and see how well the model identifies different skills.

https://huggingface.co/spaces/Nucha/NuchaSkillNER

### How to Use:

- Input Text: Enter any text that contains information about skills or related topics. For example, you can input job descriptions, resumes, or any relevant text.
- Analyze: Click the "Analyze" button to run the model on the provided text. The model will process the input and extract named entities, specifically skills.
- Results: The output will display the recognized entities along with their labels and confidence scores. The labels indicate the type of skill identified (e.g., hard skills such as programming languages, frameworks and tools, or soft skills).

## Evaluation

The **Nucha/Nucha_SkillNER_BERT** model has undergone rigorous evaluation to ensure its effectiveness in Named Entity Recognition (NER) tasks, specifically in identifying and categorizing skills relevant to various domains. The evaluation was conducted on a diverse set of datasets designed to reflect real-world scenarios.

### Metrics

The model's performance was assessed using standard NER metrics:

- **Accuracy**: Measures the overall correctness of the model's predictions.
- **Precision**: The proportion of true positives among all predicted positives.
- **Recall**: The model's ability to find all relevant instances in the dataset.
- **F1 Score**: The harmonic mean of precision and recall, providing a single score that balances both metrics.

```
              precision    recall  f1-score   support

      HSKILL       0.89      0.91      0.90      3708
      SSKILL       0.91      0.91      0.91      2299

   micro avg       0.90      0.91      0.90      6007
   macro avg       0.90      0.91      0.91      6007
weighted avg       0.90      0.91      0.90      6007

Accuracy: 0.9972517975663717 (Train: 5083 / Test: 1017)
```

#### Testing Data

The evaluation above was computed on a held-out test split of 1,017 examples, with the remaining 5,083 examples used for training:

```
1017/5083
```

### Results

You can employ this model using the Transformers library's *pipeline* for NER, or incorporate it as a conventional Transformer in the Hugging Face ecosystem. The pipeline returns token-level predictions in the standard Transformers NER output format; a truncated example is shown below:

```json
[
  {
    "entity": "B-HSKILL",
    "score": 0.9990522,
    "index": 110,
    "word": "machine",
    "start": 581,
    "end": 588
  },
  {
    "entity": "I-HSKILL",
    "score": 0.9995209,
    "index": 111,
    "word": "learning",
    "start": 589,
    "end": 597
  },
  ...
]
```

## Conclusion

The **Nucha/Nucha_SkillNER_BERT** model demonstrates strong performance in identifying skills in text data, making it a valuable tool for applications in recruitment, resume screening, and skill extraction tasks. Continuous improvements and further evaluations will enhance its accuracy and adaptability to specific use cases.
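Beyond printing individual entities, a common next step is turning the pipeline output into a per-label skill list. The sketch below is not part of the original card: the confidence threshold and the sample sentence are illustrative, and the grouping logic is just one reasonable way to post-process the results from the "How to Use" example.

```python
from collections import defaultdict
from transformers import pipeline

# Rebuild the pipeline from the "How to Use" example above.
ner_pipeline = pipeline("ner", model="Nucha/Nucha_SkillNER_BERT", aggregation_strategy="simple")
ner_results = ner_pipeline("I have experience in Python, JavaScript, and cloud technologies like AWS and Azure.")

def group_skills(results, min_score=0.80):
    """Group recognized entities by label, keeping unique surface forms above a confidence threshold."""
    grouped = defaultdict(set)
    for entity in results:
        if entity["score"] >= min_score:  # min_score is an assumed threshold
            grouped[entity["entity_group"]].add(entity["word"].strip())
    return {label: sorted(words) for label, words in grouped.items()}

# Illustrative output shape: {'HSKILL': ['aws', 'azure', 'javascript', 'python'], ...}
print(group_skills(ner_results))
```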
alex-uv2/wav2vec2-base-finetuned-gtzan
alex-uv2
2024-10-20T16:46:11Z
160
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-10-20T15:42:37Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.86 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-gtzan This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5653 - Accuracy: 0.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9774 | 1.0 | 113 | 1.9927 | 0.28 | | 1.5184 | 2.0 | 226 | 1.4378 | 0.5 | | 1.3158 | 3.0 | 339 | 1.1390 | 0.72 | | 0.8236 | 4.0 | 452 | 1.0595 | 0.69 | | 0.7644 | 5.0 | 565 | 1.0361 | 0.7 | | 0.5783 | 6.0 | 678 | 0.6584 | 0.82 | | 0.4597 | 7.0 | 791 | 0.5901 | 0.87 | | 0.2232 | 8.0 | 904 | 0.5699 | 0.87 | | 0.1191 | 9.0 | 1017 | 0.5567 | 0.88 | | 0.0797 | 10.0 | 1130 | 0.5653 | 0.86 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.1
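The card stops at the training summary, so here is a minimal, hedged inference sketch using the Transformers audio-classification pipeline; the audio file path is illustrative, and the snippet assumes the feature extractor was saved with the checkpoint.

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alex-uv2/wav2vec2-base-finetuned-gtzan",
)

# "example_song.wav" is a placeholder path; the pipeline decodes and resamples
# the audio to the model's expected sampling rate before classification.
predictions = classifier("example_song.wav", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```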