modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf
RichardErkhov
2024-08-23T02:53:50Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T23:14:05Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma2-gutenberg-9B - GGUF - Model creator: https://huggingface.co/nbeerbower/ - Original model: https://huggingface.co/nbeerbower/gemma2-gutenberg-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gemma2-gutenberg-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q2_K.gguf) | Q2_K | 3.54GB | | [gemma2-gutenberg-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [gemma2-gutenberg-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.IQ3_S.gguf) | IQ3_S | 4.04GB | | [gemma2-gutenberg-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [gemma2-gutenberg-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.IQ3_M.gguf) | IQ3_M | 4.19GB | | [gemma2-gutenberg-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q3_K.gguf) | Q3_K | 4.43GB | | [gemma2-gutenberg-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [gemma2-gutenberg-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [gemma2-gutenberg-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | [gemma2-gutenberg-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q4_0.gguf) | Q4_0 | 5.07GB | | [gemma2-gutenberg-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [gemma2-gutenberg-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [gemma2-gutenberg-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q4_K.gguf) | Q4_K | 5.37GB | | [gemma2-gutenberg-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [gemma2-gutenberg-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q4_1.gguf) | Q4_1 | 5.55GB | | [gemma2-gutenberg-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q5_0.gguf) | Q5_0 | 6.04GB | | [gemma2-gutenberg-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | [gemma2-gutenberg-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q5_K.gguf) | Q5_K | 6.19GB | | 
[gemma2-gutenberg-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | [gemma2-gutenberg-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q5_1.gguf) | Q5_1 | 6.52GB | | [gemma2-gutenberg-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q6_K.gguf) | Q6_K | 7.07GB | | [gemma2-gutenberg-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/nbeerbower_-_gemma2-gutenberg-9B-gguf/blob/main/gemma2-gutenberg-9B.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- license: gemma library_name: transformers base_model: - UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 datasets: - jondurbin/gutenberg-dpo-v0.1 model-index: - name: gemma2-gutenberg-9B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 27.96 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 42.36 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 1.44 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 11.74 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 16.71 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 35.47 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/gemma2-gutenberg-9B name: Open LLM Leaderboard --- # gemma2-gutenberg-9B [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1). ### Method Finetuned using an RTX 4090 using ORPO for 3 epochs. 
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__gemma2-gutenberg-9B) | Metric |Value| |-------------------|----:| |Avg. |22.61| |IFEval (0-Shot) |27.96| |BBH (3-Shot) |42.36| |MATH Lvl 5 (4-Shot)| 1.44| |GPQA (0-shot) |11.74| |MuSR (0-shot) |16.71| |MMLU-PRO (5-shot) |35.47|
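The Method section of the card above states that the original model was finetuned with ORPO for 3 epochs on jondurbin/gutenberg-dpo-v0.1. As a rough illustration only, here is a minimal sketch of such a run using TRL's ORPOTrainer; the batch size, learning rate, beta, and sequence lengths are assumptions rather than the author's actual settings, and recent TRL versions may expect `processing_class` instead of `tokenizer`.

```python
# Illustrative ORPO fine-tuning sketch (hyperparameters are assumptions).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# gutenberg-dpo provides prompt/chosen/rejected columns, the format ORPOTrainer expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="gemma2-gutenberg-9B",
    num_train_epochs=3,             # the card states 3 epochs
    per_device_train_batch_size=1,  # assumed
    gradient_accumulation_steps=8,  # assumed
    learning_rate=5e-6,             # assumed
    beta=0.1,                       # ORPO's lambda; assumed default
    max_length=2048,                # assumed
    max_prompt_length=1024,         # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```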
mradermacher/Hathor-v4.6-i1-GGUF
mradermacher
2024-08-23T02:52:43Z
5
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-08-22T10:08:53Z
--- base_model: MrRobotoAI/Hathor-v4.6 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/MrRobotoAI/Hathor-v4.6 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Hathor-v4.6-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Hathor-v4.6-i1-GGUF/resolve/main/Hathor-v4.6.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
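For anyone unsure how to run the GGUF files listed above, here is a minimal sketch using llama-cpp-python — an assumed choice of runtime (the llama.cpp CLI works just as well) — with the quant marked "fast, recommended" in the table.

```python
# Minimal sketch: fetch one quant from the repo above and run it locally.
# The runtime choice (llama-cpp-python) and generation settings are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Hathor-v4.6-i1-GGUF",
    filename="Hathor-v4.6.i1-Q4_K_M.gguf",  # "fast, recommended" in the table above
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an assumption
out = llm("Write two sentences about a lighthouse keeper.", max_tokens=128)
print(out["choices"][0]["text"])
```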
samu/t5-small-finetuned-english-to-hausa
samu
2024-08-23T02:42:25Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "translation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-08-16T06:47:18Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: t5-small-finetuned-english-to-hausa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-english-to-hausa This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6851 - Bleu: 71.9442 - Gen Len: 14.3679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0008 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.2594 | 1.0 | 1497 | 0.8236 | 59.41 | 14.2172 | | 0.7848 | 2.0 | 2994 | 0.6581 | 64.4839 | 14.219 | | 0.6172 | 3.0 | 4491 | 0.5897 | 66.4564 | 14.2357 | | 0.5151 | 4.0 | 5988 | 0.5619 | 68.0986 | 14.4905 | | 0.4457 | 5.0 | 7485 | 0.5477 | 69.2175 | 14.4141 | | 0.3938 | 6.0 | 8982 | 0.5413 | 70.0663 | 14.4059 | | 0.3555 | 7.0 | 10479 | 0.5338 | 70.1734 | 14.4734 | | 0.3154 | 8.0 | 11976 | 0.5485 | 70.3692 | 14.3035 | | 0.2837 | 9.0 | 13473 | 0.5454 | 70.7837 | 14.4556 | | 0.2507 | 10.0 | 14970 | 0.5616 | 70.976 | 14.3807 | | 0.2265 | 11.0 | 16467 | 0.5728 | 71.2008 | 14.3692 | | 0.2041 | 12.0 | 17964 | 0.5808 | 71.4766 | 14.362 | | 0.1848 | 13.0 | 19461 | 0.5981 | 71.3804 | 14.3114 | | 0.1715 | 14.0 | 20958 | 0.6122 | 71.43 | 14.4295 | | 0.1547 | 15.0 | 22455 | 0.6309 | 71.753 | 14.351 | | 0.1417 | 16.0 | 23952 | 0.6411 | 71.7608 | 14.3513 | | 0.1267 | 17.0 | 25449 | 0.6612 | 71.93 | 14.4243 | | 0.1208 | 18.0 | 26946 | 0.6662 | 71.8591 | 14.3486 | | 0.1076 | 19.0 | 28443 | 0.6799 | 72.0417 | 14.3862 | | 0.1046 | 20.0 | 29940 | 0.6851 | 71.9442 | 14.3679 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
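The training hyperparameters listed in the card above translate almost one-to-one into a Seq2SeqTrainingArguments object. The sketch below only reconstructs that configuration; the dataset, tokenization, and the BLEU `compute_metrics` function are omitted because the card does not specify them, and `fp16=True` is an interpretation of "mixed_precision_training: Native AMP".

```python
# Reconstruction of the listed hyperparameters; not the author's actual script.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-english-to-hausa",
    learning_rate=8e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    fp16=True,                    # "Native AMP" mixed precision
    predict_with_generate=True,   # needed so BLEU / gen_len can be computed
)
# A Seq2SeqTrainer would then be built with this `args`, the tokenized English-Hausa
# pairs, a DataCollatorForSeq2Seq, and a compute_metrics function reporting BLEU.
```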
cloudyu/Mixtral_7Bx2_MoE
cloudyu
2024-08-23T02:38:09Z
1,416
36
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-22T13:14:30Z
--- license: cc-by-nc-4.0 model-index: - name: Mixtral_7Bx2_MoE results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.25 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 67.23 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/Mixtral_7Bx2_MoE name: Open LLM Leaderboard --- # Mixtral MOE 2x7B MoE of the following models : * [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k) * [mncai/mistral-7b-dpo-v6](https://huggingface.co/mncai/mistral-7b-dpo-v6) * metrics: Average 73.43 ARC 71.25 HellaSwag 87.45 gpu code example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_7Bx2_MoE" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` CPU example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_7Bx2_MoE" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='cpu',local_files_only=False ) print(model) prompt = input("please input 
prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Mixtral_7Bx2_MoE) | Metric |Value| |---------------------------------|----:| |Avg. |73.43| |AI2 Reasoning Challenge (25-Shot)|71.25| |HellaSwag (10-Shot) |87.45| |MMLU (5-Shot) |64.98| |TruthfulQA (0-shot) |67.23| |Winogrande (5-shot) |81.22| |GSM8k (5-shot) |68.46|
cloudyu/Mixtral_7Bx4_MOE_24B
cloudyu
2024-08-23T02:37:26Z
1,447
13
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-21T13:07:35Z
--- license: cc-by-nc-4.0 model-index: - name: mixtral_7bx4_moe results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.27 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.85 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cloudyu/mixtral_7bx4_moe name: Open LLM Leaderboard --- I don't know why so many downloads about this model. Please share your cases, thanks. 
Now this model is improved by DPO to [cloudyu/Pluto_24B_DPO_200](https://huggingface.co/cloudyu/Pluto_24B_DPO_200) * Metrics improved by DPO ![Metrsc improment](dpo.jpg) ![Metrsc improment](dpo-metrics.jpg) # Mixtral MOE 4x7B MOE the following models by mergekit: * [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B) * [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) * [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) Metrics * Average : 68.85 * ARC:65.36 * HellaSwag:85.23 * more details: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/cloudyu/Mixtral_7Bx4_MOE_24B/results_2023-12-23T18-05-51.243288.json gpu code example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_7Bx4_MOE_24B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` CPU example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_7Bx4_MOE_24B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='cpu',local_files_only=False ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__mixtral_7bx4_moe) | Metric |Value| |---------------------------------|----:| |Avg. |68.83| |AI2 Reasoning Challenge (25-Shot)|65.27| |HellaSwag (10-Shot) |85.28| |MMLU (5-Shot) |62.84| |TruthfulQA (0-shot) |59.85| |Winogrande (5-shot) |77.66| |GSM8k (5-shot) |62.09|
RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf
RichardErkhov
2024-08-23T02:26:37Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T22:40:23Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Gemma-2-Ataraxy-9B - GGUF - Model creator: https://huggingface.co/lemon07r/ - Original model: https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Gemma-2-Ataraxy-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q2_K.gguf) | Q2_K | 3.54GB | | [Gemma-2-Ataraxy-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [Gemma-2-Ataraxy-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.IQ3_S.gguf) | IQ3_S | 4.04GB | | [Gemma-2-Ataraxy-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [Gemma-2-Ataraxy-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.IQ3_M.gguf) | IQ3_M | 4.19GB | | [Gemma-2-Ataraxy-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q3_K.gguf) | Q3_K | 4.43GB | | [Gemma-2-Ataraxy-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [Gemma-2-Ataraxy-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [Gemma-2-Ataraxy-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | [Gemma-2-Ataraxy-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q4_0.gguf) | Q4_0 | 5.07GB | | [Gemma-2-Ataraxy-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [Gemma-2-Ataraxy-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [Gemma-2-Ataraxy-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q4_K.gguf) | Q4_K | 5.37GB | | [Gemma-2-Ataraxy-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [Gemma-2-Ataraxy-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q4_1.gguf) | Q4_1 | 5.55GB | | [Gemma-2-Ataraxy-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q5_0.gguf) | Q5_0 | 6.04GB | | [Gemma-2-Ataraxy-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | [Gemma-2-Ataraxy-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q5_K.gguf) | Q5_K | 6.19GB | | [Gemma-2-Ataraxy-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | 
[Gemma-2-Ataraxy-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q5_1.gguf) | Q5_1 | 6.52GB | | [Gemma-2-Ataraxy-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q6_K.gguf) | Q6_K | 7.07GB | | [Gemma-2-Ataraxy-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/lemon07r_-_Gemma-2-Ataraxy-9B-gguf/blob/main/Gemma-2-Ataraxy-9B.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- base_model: - princeton-nlp/gemma-2-9b-it-SimPO - nbeerbower/gemma2-gutenberg-9B library_name: transformers tags: - mergekit - merge license: gemma --- # Gemma-2-Ataraxy-9B ![Ataraxy](https://i.imgur.com/aP03a5d.png) ## GGUF Quants Huge thanks to [@mradermacher](https://huggingface.co/mradermacher) and [@bartowski](https://huggingface.co/bartowski) for making these GGUF quants available to us. Bartowski quants (imatrix): [bartowski/Gemma-2-Ataraxy-9B-GGUF](https://huggingface.co/bartowski/Gemma-2-Ataraxy-9B-GGUF) Mradermacher quants (static): [mradermacher/Gemma-2-Ataraxy-9B-GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-GGUF) Mradermacher quants (imatrix): [mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF) I think bartowski and mradermacher use different calibration data for imatrix quants, or maybe you prefer static quants. Pick your poison :). ## Format Use Gemma 2 format. ## Benchmarks and Leaderboard Rankings OpenLLM: Pending in Queue Creative Writing V2 - Score: 82.64 (Rank 1) https://eqbench.com/creative_writing.html That's right, much to everyone's surprise (mine included) this model has topped eqbench.com's creative writing benchmark. Big thanks to _sqrkl for taking the time to bother benchmarking this model and adding it ot his leaderboard, and [@nbeerbower](https://huggingface.co/nbeerbower) for entertaining my suggestions for his amazing gutenberg tunes. ![Reddit](https://i.imgur.com/iL8BcYb.png) ![Leaderboard](https://i.imgur.com/gJd9Pab.png) ## Try it out I will be hosting this model in Q8_0 quant, with 4k context using koboldcpp, for free unlimited use until around the 26th or 27th. This should have up to 32t/s. ![https://kerry-circulation-child-suspected.trycloudflare.com/](https://kerry-circulation-child-suspected.trycloudflare.com/) ## Preface and Rambling My favorite Gemma 2 9B models are the SPPO iter3 and SimPO finetunes, but I felt the slerp merge between the two (nephilim v3) wasn't as good for some reason. The Gutenberg Gemma 2 finetune by nbeerbower is another my favorites. It's trained on one of my favorite datasets, and actually improves the SPPO model's average openllm leaderboard 2 average score by a bit, on top of improving it's writing capabilities and making the LLM sound less AI-like. However I still liked the original SPPO finetune just a bit more. Someone suggested that merging the base model on top of the gutenberg may help with tame it back down, which gave me a (possibly) better idea; slerp merging the SimPO finetune on top of the Gutenberg finetune, which is similar to the pretty popular Nephilim v3 recipe, using the Gutenberg finetune in place of the SPPO model, which I thought may give us better results since Gutenberg was trained on top of SPPO. I wasn't entirely too sure, since if nephilim v3 is anything to go by, it was probably going to also end up worse than the parent models. Normally when I try merges like these, they dont go anywhere. 
I'm pretty picky and usually very skeptical, so most times I find that a merge is either no better than the original models or only marginally better. I tried this merge anyway to see how it would go, and much to my surprise, this time I feel like I got very good results. Figured I'd share, and hopefully this won't just be me introducing more useless slop into a world that already has way too many unnecessary merges. If you're looking for a Mistral Nemo 12B model instead, I HIGHLY recommend Mistral Nemo Gutenberg v2 by nbeerbower. It's head and shoulders above the many other Mistral Nemo finetunes I've tried (the first version of Mistral Nemo Gutenberg, Romulus SimPO, and Magnum Mini 1.1 being close second favorites). ## Why Gutenberg? We use gutenberg 9b, which is finetuned on top of SPPO, because of how good the gutenberg-dpo dataset is. It's not a German dataset like it sounds; it's a dataset based on Project Gutenberg, a public-domain collection of popular classic fiction, like Moby Dick, for example. This dataset also uses LLM-generated responses as the negative (rejected) examples, to train models to sound less like AI, or your typical LLMs, and more like actual humans (based on the creative works from Project Gutenberg). This is quality writing, hence quality data, not just random RP logs or synthetic data. This dataset, when trained on for 3 epochs, has been shown to raise Llama 3's average score on the old OpenLLM leaderboard from 72 to 73 (nobody really got higher than 73 before the board revamp), and has again proven to raise the average OpenLLM Leaderboard 2 score from 21.47 to 22.61, improving on SPPO. That's a huge improvement. That said, I didn't like gutenberg 9b more than the original SPPO in real use; it felt a tiny bit overfit, so we tried this merge. I did not expect much, because Nephilim v3 turned out worse than either of its parents, but this surprisingly came out great. ## Why is it 10B? See https://github.com/arcee-ai/mergekit/issues/390 The model is not actually 10B; mergekit is unexpectedly adding an lm_head for some reason when doing a SLERP merge with Gemma 2 models. I believe Nephilim v3 had a similar issue before they used some sort of workaround that I'm not aware of. This doesn't seem to affect the GGUF quants, as they're the correct size, so I will leave it as is until mergekit gets a commit that addresses this issue. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) * [nbeerbower/gemma2-gutenberg-9B](https://huggingface.co/nbeerbower/gemma2-gutenberg-9B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: nbeerbower/gemma2-gutenberg-9B dtype: bfloat16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 42] model: princeton-nlp/gemma-2-9b-it-SimPO - layer_range: [0, 42] model: nbeerbower/gemma2-gutenberg-9B ```
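The Format section of the Ataraxy card above simply says "Use Gemma 2 format." As a hedged illustration, the snippet below renders that format with the original (unquantized) model's chat template; the tokenizer repo and the example message are assumptions, not part of the card.

```python
# Hedged sketch of the Gemma 2 prompt format via the original model's chat template.
# The tokenizer repo and the example message are assumptions.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("lemon07r/Gemma-2-Ataraxy-9B")

messages = [{"role": "user", "content": "Write an opening paragraph in the style of Moby Dick."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Gemma 2's template has no system role; the rendered prompt looks like:
# <bos><start_of_turn>user
# Write an opening paragraph in the style of Moby Dick.<end_of_turn>
# <start_of_turn>model
```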
muhtasham/TowerInstruct-7B-v0.1-W4A16-G128
muhtasham
2024-08-23T02:14:39Z
6
0
null
[ "safetensors", "llama", "region:us" ]
null
2024-08-23T02:10:35Z
```python import time from datasets import load_dataset from transformers import AutoTokenizer from rich import print from llmcompressor.modifiers.quantization import GPTQModifier from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot # Select model and load it. MODEL_ID = "Unbabel/TowerInstruct-7B-v0.1" model = SparseAutoModelForCausalLM.from_pretrained( MODEL_ID, device_map="auto", torch_dtype="auto", ) tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) # Select calibration dataset. DATASET_ID = "HuggingFaceH4/ultrachat_200k" DATASET_SPLIT = "train_sft" # Select number of samples. 512 samples is a good place to start. # Increasing the number of samples can improve accuracy. NUM_CALIBRATION_SAMPLES = 512 MAX_SEQUENCE_LENGTH = 2048 # Load dataset and preprocess. ds = load_dataset(DATASET_ID, split=DATASET_SPLIT) ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES)) def preprocess(example): return { "text": tokenizer.apply_chat_template( example["messages"], tokenize=False, ) } ds = ds.map(preprocess) # Tokenize inputs. def tokenize(sample): return tokenizer( sample["text"], padding=False, max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False, ) ds = ds.map(tokenize, remove_columns=ds.column_names) # Configure the quantization algorithm to run. # * quantize the weights to 4 bit with GPTQ with a group size 128 # Note: to reduce GPU memory use `sequential_update=False` recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]) print(recipe) # Apply algorithms. oneshot( model=model, dataset=ds, recipe=recipe, max_seq_length=MAX_SEQUENCE_LENGTH, num_calibration_samples=NUM_CALIBRATION_SAMPLES, ) # Confirm generations of the quantized model look sane and measure generation time. print("\n\n") print("========== SAMPLE GENERATION ==============") input_text = "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") start_time = time.time() output = model.generate(input_ids, max_new_tokens=100) end_time = time.time() generation_time = end_time - start_time print(tokenizer.decode(output[0])) print(f"Generation time: {generation_time:.2f} seconds") print("==========================================\n\n") # Save to disk compressed. SAVE_DIR = MODEL_ID.split("/")[1] + "-W4A16-G128" model.save_pretrained(SAVE_DIR, save_compressed=True) tokenizer.save_pretrained(SAVE_DIR) ```
DataKensei/phi-2-function-calling
DataKensei
2024-08-23T02:04:01Z
15
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-08-17T09:01:03Z
--- library_name: transformers tags: - trl - sft --- # Model Card for phi-2-function-calling ## Model Overview ### Summary of the Model The primary purpose of this fine-tuned model is **Function Calling**. It is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) specifically adapted to handle function-calling tasks efficiently. The model can generate structured text, making it particularly suited for scenarios requiring automated function invocation based on textual instructions. ### Model Type ## Model Details ### Model Description - **Developed by:** Microsoft and Fine-tuned by Carlos Rodrigues (at DataKensei) - **Model Type:** Text Generation, trained for Function Calling tasks. - **Language(s):** English - **License:** MIT License - **Finetuned from model:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) ### Model Sources - **Repository:** - **Repository:** [DataKensei/phi-2-function-calling](https://huggingface.co/DataKensei/phi-2-function-calling) ## Uses ### Direct Use The model is directly usable for generating function calls based on user prompts. This includes structured tasks like scheduling meetings, calculating savings, or any scenario where a text input should translate into an actionable function. ### Downstream Use While the model is primarily designed for function calling, it can be fine-tuned further or integrated into larger systems where similar structured text generation is required. For example, it could be part of a larger chatbot system that automates task handling. ### Out-of-Scope Use The model is not designed for tasks unrelated to structured text generation or function calling. Misuse might include attempts to use it for general-purpose language modeling or content generation beyond its specialized training focus. ## Bias, Risks, and Limitations ### Biases The model may inherit biases from the base model (microsoft/phi-2), particularly those related to the English language and specific function-calling tasks. Users should be aware of potential biases in task framing and language interpretation. ### Limitations - **Task-Specific**: The model is specialized for function-calling tasks and might not perform well on other types of text generation tasks. - **English Only**: The model is limited to English, and performance in other languages is not guaranteed. ### Recommendations Users should test the model in their specific environment to ensure it performs as expected for the desired use case. Awareness of the model's biases and limitations is crucial when deploying it in critical systems. ## How to Get Started with the Model You can use the following code snippet to get started with the model: ```python from transformers import pipeline # Load the model and tokenizer pipe = pipeline(task="text-generation", model="DataKensei/phi-2-function-calling") # Example prompt prompt = ''' <|im_start|system You are a helpful assistant with access to the following functions. Use these functions when they are relevant to assist with a user's request [ { "name": "calculate_retirement_savings", "description": "Project the savings at retirement based on current contributions.", "parameters": { "type": "object", "properties": { "current_age": { "type": "integer", "description": "The current age of the individual." }, "retirement_age": { "type": "integer", "description": "The desired retirement age." }, "current_savings": { "type": "number", "description": "The current amount of savings." 
}, "monthly_contribution": { "type": "number", "description": "The monthly contribution towards retirement savings." } }, "required": ["current_age", "retirement_age", "current_savings", "monthly_contribution"] } } ] <|im_start|user I am currently 40 years old and plan to retire at 65. I have no savings at the moment, but I intend to save $500 every month. Could you project the savings at retirement based on current contributions? ''' result = pipe(prompt) print(result[0]['generated_text']) ``` ## Training Details ### Training Data The model was fine-tuned using a syntectic dataset of function-calling prompts and responses. The data was curated to cover a wide range of potential function calls, ensuring the model's applicability to various structured text generation tasks. The script to generate the data can be found in this [repository](https://xxxxxxxx). ### Training Procedure - **Training regime:** The model was fine-tuned using 4-bit precision with `bnb_4bit` quantization on NVIDIA GPUs. - **Optimizer:** PagedAdamW (32-bit) - **Learning Rate:** 2e-4 - **Batch Size:** 2 (with gradient accumulation steps = 1) - **Epochs:** 1 #### Preprocessing The training and evaluation data was generated using this [repository](https://xxxxxxxx). ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data The model was evaluated using a separate test set, comprising 10% of the original dataset, containing various function-calling scenarios. #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO$_2$eq/kWh. A cumulative of 10 hours of computation was performed on hardware of type GTX 1080 (TDP of 180W). Total emissions are estimated to be 0.78 kgCO of which 0 percents were directly offset. Estimations were conducted using the [MachineLearning Impact calculator](https://mlco2.github.io/impact#compute) presented in presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700) ```BibTeX @article{lacoste2019quantifying, title={Quantifying the Carbon Emissions of Machine Learning}, author={Lacoste, Alexandre and Luccioni, Alexandra and Schmidt, Victor and Dandres, Thomas}, journal={arXiv preprint arXiv:1910.09700}, year={2019} } ``` - **Hardware Type:** NVIDIA GPUs (GTX 1080) - **Hours used:** 10 - **Cloud Provider:** Private Infrastructure - **Carbon Emitted:** 0.78 kgCO ## Technical Specifications ### Model Architecture and Objective The model is based on the "microsoft/phi-2" architecture, fine-tuned specifically for function-calling tasks. The objective was to optimize the model's ability to generate structured text suitable for automated function execution. ### Compute Infrastructure [More Information Needed] #### Hardware The model was trained on NVIDIA GPUs. #### Software The training used PyTorch and the Hugging Face Transformers library, with additional support from the PEFT library for fine-tuning. 
## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```BibTeX @misc{phi2functioncalling, title={phi-2-function-calling}, author={Carlos Rodrigues}, year={2024}, publisher={Hugging Face}, howpublished={\url{https://huggingface.co/DataKensei/phi-2-function-calling}}, } ``` ## Model Card Contact For more information, please contact Carlos Rodrigues at DataKensei.
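The Training Procedure bullets in the card above (4-bit `bnb_4bit` quantization, PagedAdamW 32-bit, lr 2e-4, batch size 2, 1 epoch) describe a fairly standard QLoRA-style setup. The sketch below reconstructs only that configuration under stated assumptions; the dataset, the PEFT/LoRA settings, and the TRL `SFTTrainer` wiring are not given in the card and are left as comments.

```python
# Reconstruction of the listed training configuration; assumptions are marked.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed quant type
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

args = TrainingArguments(
    output_dir="phi-2-function-calling",
    optim="paged_adamw_32bit",        # "PagedAdamW (32-bit)" from the card
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    num_train_epochs=1,
)
# A trl.SFTTrainer would then be constructed with this model, tokenizer, args, a PEFT
# LoRA config, and the (unpublished) function-calling dataset.
```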
v000000/Llama-3.1-8B-Stheno-v3.4-abliterated-GGUF
v000000
2024-08-23T02:00:07Z
16
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama", "llama-cpp", "gguf-my-repo", "base_model:v000000/Llama-3.1-8B-Stheno-v3.4-abliterated", "base_model:quantized:v000000/Llama-3.1-8B-Stheno-v3.4-abliterated", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-23T01:55:32Z
--- base_model: v000000/Llama-3.1-8B-Stheno-v3.4-abliterated library_name: transformers tags: - mergekit - merge - llama - llama-cpp - gguf-my-repo --- # v000000/Llama-3.1-8B-Stheno-v3.4-abliterated-GGUF This model was converted to GGUF format from [`v000000/Llama-3.1-8B-Stheno-v3.4-abliterated`](https://huggingface.co/v000000/Llama-3.1-8B-Stheno-v3.4-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/v000000/Llama-3.1-8B-Stheno-v3.4-abliterated) for more details on the model.
RichardErkhov/distributed_-_gpt2-1b-gguf
RichardErkhov
2024-08-23T01:55:48Z
27
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
2024-08-23T01:55:24Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-1b - GGUF - Model creator: https://huggingface.co/distributed/ - Original model: https://huggingface.co/distributed/gpt2-1b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-1b.Q2_K.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q2_K.gguf) | Q2_K | 0.0GB | | [gpt2-1b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.IQ3_XS.gguf) | IQ3_XS | 0.0GB | | [gpt2-1b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.IQ3_S.gguf) | IQ3_S | 0.0GB | | [gpt2-1b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q3_K_S.gguf) | Q3_K_S | 0.0GB | | [gpt2-1b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.IQ3_M.gguf) | IQ3_M | 0.0GB | | [gpt2-1b.Q3_K.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q3_K.gguf) | Q3_K | 0.0GB | | [gpt2-1b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q3_K_M.gguf) | Q3_K_M | 0.0GB | | [gpt2-1b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q3_K_L.gguf) | Q3_K_L | 0.0GB | | [gpt2-1b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.IQ4_XS.gguf) | IQ4_XS | 0.0GB | | [gpt2-1b.Q4_0.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q4_0.gguf) | Q4_0 | 0.0GB | | [gpt2-1b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.IQ4_NL.gguf) | IQ4_NL | 0.0GB | | [gpt2-1b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q4_K_S.gguf) | Q4_K_S | 0.0GB | | [gpt2-1b.Q4_K.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q4_K.gguf) | Q4_K | 0.0GB | | [gpt2-1b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q4_K_M.gguf) | Q4_K_M | 0.0GB | | [gpt2-1b.Q4_1.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q4_1.gguf) | Q4_1 | 0.0GB | | [gpt2-1b.Q5_0.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q5_0.gguf) | Q5_0 | 0.0GB | | [gpt2-1b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q5_K_S.gguf) | Q5_K_S | 0.0GB | | [gpt2-1b.Q5_K.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q5_K.gguf) | Q5_K | 0.0GB | | [gpt2-1b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q5_K_M.gguf) | Q5_K_M | 0.0GB | | [gpt2-1b.Q5_1.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q5_1.gguf) | Q5_1 | 0.0GB | | [gpt2-1b.Q6_K.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q6_K.gguf) | Q6_K | 0.0GB | | [gpt2-1b.Q8_0.gguf](https://huggingface.co/RichardErkhov/distributed_-_gpt2-1b-gguf/blob/main/gpt2-1b.Q8_0.gguf) | Q8_0 | 0.0GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. 
--> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
meetkai/functionary-medium-v3.1-fp8
meetkai
2024-08-23T01:35:12Z
8
0
null
[ "safetensors", "llama", "custom_code", "license:mit", "region:us" ]
null
2024-08-16T02:36:20Z
--- license: mit --- This is the quantized version of https://huggingface.co/meetkai/functionary-medium-v3.1
carry121/qwen2-funt-1
carry121
2024-08-23T01:33:34Z
9
1
null
[ "pytorch", "gguf", "qwen2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-14T07:16:29Z
--- license: apache-2.0 ---
NousResearch/Hermes-3-Llama-3.1-70B-GGUF
NousResearch
2024-08-23T01:22:09Z
457
38
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "roleplaying", "chat", "en", "arxiv:2408.11857", "base_model:meta-llama/Llama-3.1-70B", "base_model:quantized:meta-llama/Llama-3.1-70B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-06T00:51:45Z
--- language: - en license: llama3 tags: - Llama-3 - instruct - finetune - chatml - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - roleplaying - chat base_model: meta-llama/Meta-Llama-3.1-70B widget: - example_title: Hermes 3 messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: What is the meaning of life? model-index: - name: Hermes-3-Llama-3.1-70B results: [] --- # Hermes 3 - Llama-3.1 70B ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg) ## Model Description Hermes 3 is the latest version of our flagship Hermes series of LLMs by Nous Research. For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://arxiv.org/abs/2408.11857). **This is the GGUF quantized version of Hermes 70B, for use with [llama.cpp](https://github.com/ggerganov/llama.cpp). For the Full HF BF16 Model, click [here](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B)** Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills. # Benchmarks Hermes 3 is competitive with, if not superior to, Llama-3.1 Instruct models in general capabilities, with varying strengths and weaknesses between the two. Full benchmark comparisons below: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/DIMca3M0U-ArWwtyIbF-k.png) # Prompt Format Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role of each turn. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence.
I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 3."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for 
the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which makes it respond with **only** a json object, following a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, the model will follow the format of that json schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
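For illustration, here is a minimal sketch of how the `{schema}` placeholder can be produced from a pydantic object; the `Character` model and its fields are made up for this example, and the official helper scripts (including `jsonmode.py`) live in the Hermes-Function-Calling repository linked above:

```python
# Minimal sketch: build the JSON-mode system prompt from a pydantic model.
# The Character model below is purely illustrative.
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    species: str
    role: str
    personality_traits: list[str]

# pydantic v2; on v1 use Character.schema() instead
schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    "Here's the json schema you must adhere to:\n"
    f"<schema>\n{schema}\n</schema>"
)
print(system_prompt)
```

The resulting `system_prompt` is used as the `<|im_start|>system` turn, followed by a normal user prompt.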
# Inference Here is example code using HuggingFace Transformers to run inference with the model ```python # Code to run inference with Hermes using HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-70B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-3-Llama-3.1-70B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` You can also run this model with vLLM, by running the following in your terminal after `pip install vllm`: `vllm serve NousResearch/Hermes-3-Llama-3.1-70B` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) ## Quantized Versions: GGUF Quants (This Repo): https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF NeuralMagic FP8 Quants: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-FP8 # How to cite: ```bibtex @misc{teknium2024hermes3technicalreport, title={Hermes 3 Technical Report}, author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang}, year={2024}, eprint={2408.11857}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2408.11857}, } ```
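As a companion to the `vllm serve` command mentioned above, here is a minimal client-side sketch; it assumes the server is running locally on vLLM's default port (8000) and exposes the standard OpenAI-compatible endpoint, so adjust the base URL for your own deployment:

```python
# Minimal sketch: chat with a locally running
# `vllm serve NousResearch/Hermes-3-Llama-3.1-70B` instance through its
# OpenAI-compatible API (assumes the default host and port).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-70B",
    messages=[
        {"role": "system", "content": "You are Hermes 3."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response.choices[0].message.content)
```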
RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf
RichardErkhov
2024-08-23T01:18:52Z
6
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T20:53:40Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1 - GGUF - Model creator: https://huggingface.co/mindsignal/ - Original model: https://huggingface.co/mindsignal/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q2_K.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q2_K.gguf) | Q2_K | 3.73GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_XS.gguf) | IQ3_XS | 4.14GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_S.gguf) | IQ3_S | 4.37GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_S.gguf) | Q3_K_S | 4.34GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ3_M.gguf) | IQ3_M | 4.51GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K.gguf) | Q3_K | 4.84GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_M.gguf) | Q3_K_M | 4.84GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q3_K_L.gguf) | Q3_K_L | 5.26GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ4_XS.gguf) | IQ4_XS | 5.43GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_0.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_0.gguf) | Q4_0 | 5.66GB | | 
[upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.IQ4_NL.gguf) | IQ4_NL | 5.72GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K_S.gguf) | Q4_K_S | 5.7GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K.gguf) | Q4_K | 6.02GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_K_M.gguf) | Q4_K_M | 6.02GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_1.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q4_1.gguf) | Q4_1 | 6.27GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_0.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_0.gguf) | Q5_0 | 6.89GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K_S.gguf) | Q5_K_S | 6.89GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K.gguf) | Q5_K | 7.08GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_K_M.gguf) | Q5_K_M | 7.08GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_1.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q5_1.gguf) | Q5_1 | 7.51GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q6_K.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q6_K.gguf) | Q6_K | 8.2GB | | [upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q8_0.gguf](https://huggingface.co/RichardErkhov/mindsignal_-_upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1-gguf/blob/main/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver1.Q8_0.gguf) | Q8_0 | 10.62GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. 
--> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Keynote-Technology/KAI-7B-Instruct-v0.1
Keynote-Technology
2024-08-23T01:05:21Z
25
10
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "finetuned", "Instruct", "code", "conversational", "dataset:Keynote-Technology/PLANE-2K", "dataset:togethercomputer/RedPajama-Data-V2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-08T15:53:04Z
--- license: apache-2.0 tags: - finetuned - Instruct - code datasets: - Keynote-Technology/PLANE-2K - togethercomputer/RedPajama-Data-V2 widget: - text: "The door closed as" output: text: "the last person left the room. There was a hush that settled over the space like a heavy blanket. The only sound was the soft ticking of the clock that stood sentinel in the center of the room. Drake had been summoned by his employer, a man whose name he never spoke. He had been given no details about what was expected of him, only that he was to be there at midnight. Drake had arrived on time, dressed in his best suit and carrying a briefcase that contained all the information he had gathered during his investigation. As he stood there, waiting for something to happen, he couldn't help but wonder what it was that had brought him here. He had been working on a case that had consumed him for months, a case that had taken him from the darkest corners of the city to the highest echelons of power. It was a case that had tested his skills as an investigator and as a human being." --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6500c7c912c1442d994c36e5/3zCYNj35PnZs2_TcVZRvC.png) ## KAI-7B Instruct KAI-7B-Instruct is a 7 billion parameter causal model based on KAI-7B and Mistral-7B, finetuned on a mixture of chat/instruct datasets. ## Why use KAI-7B Instruct? - KAI-7B is a strong base model, outperforming comparable open-source models (e.g., LLaMA 2 7B, Mistral 7B, etc.). - KAI-7B Instruct is trained on a framework for improving the instruction-following capabilities of pretrained language models. - KAI-7B Instruct is a ready-to-use model based on KAI-7B, meaning it is easier to deploy. ## Banned Use KAI-7B Instruct is governed by the [Apache 2.0 license](https://choosealicense.com/licenses/apache-2.0/), which means that whatever the license deems unacceptable is not allowed. We specifically ban the use of [ANY AND ALL KAI MODELS](https://huggingface.co/collections/Keynote-Technology/kai-large-language-models) for hate speech towards a particular thing, person, or group, due to [legal](https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems) and ethical issues.
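The card does not include a usage snippet, so here is a minimal, untested sketch of loading KAI-7B Instruct with the 🤗 Transformers `pipeline` API, reusing the widget prompt above; adjust the dtype and device mapping to your hardware, and check the repository for any expected instruct prompt format:

```python
# Minimal sketch (assumption: plain text-generation usage mirroring the widget example;
# consult the repository for the intended instruct/chat prompt format).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Keynote-Technology/KAI-7B-Instruct-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",
)

result = generator("The door closed as", max_new_tokens=128, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```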
RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf
RichardErkhov
2024-08-23T01:03:36Z
128
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T21:15:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SILMA-9B-Instruct-v1.0 - GGUF - Model creator: https://huggingface.co/silma-ai/ - Original model: https://huggingface.co/silma-ai/SILMA-9B-Instruct-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SILMA-9B-Instruct-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q2_K.gguf) | Q2_K | 3.54GB | | [SILMA-9B-Instruct-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.86GB | | [SILMA-9B-Instruct-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.IQ3_S.gguf) | IQ3_S | 4.04GB | | [SILMA-9B-Instruct-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [SILMA-9B-Instruct-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.IQ3_M.gguf) | IQ3_M | 4.19GB | | [SILMA-9B-Instruct-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q3_K.gguf) | Q3_K | 4.43GB | | [SILMA-9B-Instruct-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.43GB | | [SILMA-9B-Instruct-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.78GB | | [SILMA-9B-Instruct-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.86GB | | [SILMA-9B-Instruct-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q4_0.gguf) | Q4_0 | 5.07GB | | [SILMA-9B-Instruct-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.IQ4_NL.gguf) | IQ4_NL | 5.1GB | | [SILMA-9B-Instruct-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q4_K_S.gguf) | Q4_K_S | 5.1GB | | [SILMA-9B-Instruct-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q4_K.gguf) | Q4_K | 5.37GB | | [SILMA-9B-Instruct-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q4_K_M.gguf) | Q4_K_M | 5.37GB | | [SILMA-9B-Instruct-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q4_1.gguf) | Q4_1 | 5.55GB | | [SILMA-9B-Instruct-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q5_0.gguf) | Q5_0 | 6.04GB | | [SILMA-9B-Instruct-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q5_K_S.gguf) | Q5_K_S | 6.04GB | | 
[SILMA-9B-Instruct-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q5_K.gguf) | Q5_K | 6.19GB | | [SILMA-9B-Instruct-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q5_K_M.gguf) | Q5_K_M | 6.19GB | | [SILMA-9B-Instruct-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q5_1.gguf) | Q5_1 | 6.52GB | | [SILMA-9B-Instruct-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q6_K.gguf) | Q6_K | 7.07GB | | [SILMA-9B-Instruct-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/silma-ai_-_SILMA-9B-Instruct-v1.0-gguf/blob/main/SILMA-9B-Instruct-v1.0.Q8_0.gguf) | Q8_0 | 9.15GB | Original model description: --- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_button_content: Acknowledge license tags: - conversational language: - ar - en --- # SILMA AI SILMA.AI is a leading Generative AI startup dedicated to empowering Arabic speakers with state-of-the-art AI solutions. ## 🚀 Our Flagship Model: SILMA 1.0 🚀 * **SILMA 1.0** is the **TOP-RANKED** open-weights Arabic LLM with an impressive **9 billion parameter size**, surpassing models that are over seven times larger 🏆 ## What makes SILMA exceptional? * SIMLA is a small language model outperforming 72B models in most arabic language tasks, thus more practical for business use-cases * SILMA is built over the robust foundational models of Google Gemma, combining the strengths of both to provide you with unparalleled performance * SILMA is an open-weight model, free to use in accordance with our open license ## 👥 Our Team We are a team of seasoned **Arabic AI experts** who understand the nuances of the language and cultural considerations, enabling us to build solutions that truly resonate with Arabic users. **Authors**: [silma.ai](https://silma.ai) ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers sentencepiece ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="silma-ai/SILMA-9B-Instruct-v1.0", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", # replace with "mps" to run on a Mac device ) messages = [ {"role": "user", "content": "اكتب رسالة تعتذر فيها لمديري في العمل عن الحضور اليوم لأسباب مرضية."}, ] outputs = pipe(messages, max_new_tokens=256) assistant_response = outputs[0]["generated_text"][-1]["content"].strip() print(assistant_response) ``` - Response: ```text السلام عليكم ورحمة الله وبركاته أودّ أن أعتذر عن عدم الحضور إلى العمل اليوم بسبب مرضي. أشعر بالسوء الشديد وأحتاج إلى الراحة. سأعود إلى العمل فور تعافيي. شكراً لتفهمكم. 
مع تحياتي، [اسمك] ``` #### Running the model on a single / multi GPU ```sh pip install accelerate ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "silma-ai/SILMA-9B-Instruct-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ {"role": "system", "content": "أنت مساعد ذكي للإجابة عن أسئلة المستخدمين."}, {"role": "user", "content": "أيهما أبعد عن الأرض, الشمس أم القمر؟"}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` - Response: ```text الشمس ``` You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "silma-ai/SILMA-9B-Instruct-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ {"role": "system", "content": "أنت مساعد ذكي للإجابة عن أسئلة المستخدمين."}, {"role": "user", "content": "اكتب كود بايثون لتوليد متسلسلة أرقام زوجية."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: ```python def generate_even_numbers(n): """ This function generates a list of even numbers from 1 to n. Args: n: The upper limit of the range. Returns: A list of even numbers. """ return [i for i in range(1, n + 1) if i % 2 == 0] # Example usage n = 10 even_numbers = generate_even_numbers(n) print(f"The first {n} even numbers are: {even_numbers}") ``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```sh pip install bitsandbytes accelerate ``` ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "silma-ai/SILMA-9B-Instruct-v1.0" quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) messages = [ {"role": "system", "content": "أنت مساعد ذكي للإجابة عن أسئلة المستخدمين."}, {"role": "user", "content": "اذكر خمس انواع فواكه بها نسب عالية من فيتامين ج."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: ```text الليمون، البرتقال، الموز، الكيوي، الفراولة ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig model_id = "silma-ai/SILMA-9B-Instruct-v1.0" quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, ) messages = [ {"role": "system", "content": "أنت مساعد ذكي للإجابة عن أسئلة المستخدمين."}, {"role": "user", "content": "في أي عام توفى صلاح الدين 
الأيوبي؟"}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]) ``` - Response: ```text 1193 ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Silma model can be run up to 6x faster by leveraging torch compile. Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer model_id = "silma-ai/SILMA-9B-Instruct-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = Gemma2ForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs messages = [ {"role": "system", "content": "أنت مساعد ذكي للإجابة عن أسئلة المستخدمين."}, {"role": "user", "content": "من الرئيس الذي تولى المنصب في أمريكا بعد دونالد ترامب؟"}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") input_text = "من الرئيس الذي تولى المنصب في أمريكا بعد دونالد ترامب؟" model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` - Response: ```text جو بايدن ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. 
In this example, we'll start with a single user interaction: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "silma-ai/SILMA-9B-Instruct-v1.0" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype,) chat = [ { "role": "user", "content": "ما اشهر اطارات العمل في البايثون لبناء نماذج الذكاء الاصطناعي؟" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user ما اشهر اطارات العمل في البايثون لبناء نماذج الذكاء الاصطناعي؟<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```python inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) print(tokenizer.decode(outputs[0])) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated Arabic or English text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{silma_01_2024, title={Silma}, url={https://www.silma.ai}, publisher={Silma}, author={Silma Team}, year={2024} } ``` ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. 
* Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
Deeptensorlab/subnet9_2B2
Deeptensorlab
2024-08-23T01:03:12Z
149
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-23T01:01:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf
RichardErkhov
2024-08-23T01:02:38Z
99
0
null
[ "gguf", "arxiv:2407.13739", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T21:46:09Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-8b-code-instruct-128k - GGUF - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [granite-8b-code-instruct-128k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q2_K.gguf) | Q2_K | 2.85GB | | [granite-8b-code-instruct-128k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.IQ3_XS.gguf) | IQ3_XS | 3.15GB | | [granite-8b-code-instruct-128k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.IQ3_S.gguf) | IQ3_S | 3.32GB | | [granite-8b-code-instruct-128k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q3_K_S.gguf) | Q3_K_S | 3.3GB | | [granite-8b-code-instruct-128k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.IQ3_M.gguf) | IQ3_M | 3.43GB | | [granite-8b-code-instruct-128k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q3_K.gguf) | Q3_K | 3.67GB | | [granite-8b-code-instruct-128k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q3_K_M.gguf) | Q3_K_M | 3.67GB | | [granite-8b-code-instruct-128k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q3_K_L.gguf) | Q3_K_L | 3.99GB | | [granite-8b-code-instruct-128k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.IQ4_XS.gguf) | IQ4_XS | 4.1GB | | [granite-8b-code-instruct-128k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q4_0.gguf) | Q4_0 | 4.28GB | | [granite-8b-code-instruct-128k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.IQ4_NL.gguf) | IQ4_NL | 4.32GB | | [granite-8b-code-instruct-128k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q4_K_S.gguf) | Q4_K_S | 4.3GB | | [granite-8b-code-instruct-128k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q4_K.gguf) | Q4_K | 4.55GB | | [granite-8b-code-instruct-128k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q4_K_M.gguf) | Q4_K_M | 4.55GB | | [granite-8b-code-instruct-128k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q4_1.gguf) | Q4_1 | 4.73GB | | 
[granite-8b-code-instruct-128k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q5_0.gguf) | Q5_0 | 5.19GB | | [granite-8b-code-instruct-128k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q5_K_S.gguf) | Q5_K_S | 5.19GB | | [granite-8b-code-instruct-128k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q5_K.gguf) | Q5_K | 5.33GB | | [granite-8b-code-instruct-128k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q5_K_M.gguf) | Q5_K_M | 5.33GB | | [granite-8b-code-instruct-128k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q5_1.gguf) | Q5_1 | 5.65GB | | [granite-8b-code-instruct-128k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q6_K.gguf) | Q6_K | 6.16GB | | [granite-8b-code-instruct-128k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-8b-code-instruct-128k-gguf/blob/main/granite-8b-code-instruct-128k.Q8_0.gguf) | Q8_0 | 7.98GB | Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 datasets: - bigcode/commitpackft - TIGER-Lab/MathInstruct - meta-math/MetaMathQA - glaiveai/glaive-code-assistant-v3 - glaive-function-calling-v2 - bugdaryan/sql-create-context-instruction - garage-bAInd/Open-Platypus - nvidia/HelpSteer - bigcode/self-oss-instruct-sc2-exec-filter-50k metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-8B-Code-instruct-128k results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Python) metrics: - name: pass@1 type: pass@1 value: 62.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Average) metrics: - name: pass@1 type: pass@1 value: 51.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain (Average) metrics: - name: pass@1 type: pass@1 value: 38.9 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix (Average) metrics: - name: pass@1 type: pass@1 value: 38.3 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Python@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 73.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (C++@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 37.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Java@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 73.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (TypeScript@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 62.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Rust@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 63.0 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # 
Granite-8B-Code-Instruct-128K ## Model Summary **Granite-8B-Code-Instruct-128K** is a 8B parameter long-context instruct model fine tuned from *Granite-8B-Code-Base-128K* on a combination of **permissively licensed** data used in training the original Granite code instruct models, in addition to synthetically generated code instruction datasets tailored for solving long context problems. By exposing the model to both short and long context data, we aim to enhance its long-context capability without sacrificing code generation performance at short input context. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2407.13739) - **Release Date**: July 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage ### Intended use The model is designed to respond to coding related instructions over long-conext input up to 128K length and can be used to build coding assistants. <!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 --> ### Generation This is a simple example of how to use **Granite-8B-Code-Instruct** model. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm-granite/granite-8B-Code-instruct-128k" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired chat = [ { "role": "user", "content": "Write a code to find the maximum value in a list of numbers." }, ] chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # tokenize the text input_tokens = tokenizer(chat, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` <!-- TO DO: Check this part --> ## Training Data Granite Code Instruct models are trained on a mix of short and long context data as follows. * Short-Context Instruction Data: [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft), [BigCode-SC2-Instruct](bigcode/self-oss-instruct-sc2-exec-filter-50k), [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), [OpenPlatypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) including a synthetically generated dataset for API calling and multi-turn code interactions with execution feedback. We also include a collection of hardcoded prompts to ensure our model generates correct outputs given inquiries about its name or developers. 
* Long-Context Instruction Data: A synthetically generated dataset built by bootstrapping repository-level, file-packed documents through Granite-8b-Code-Instruct to improve the long-context capability of the model. ## Infrastructure We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs. ## Ethical Considerations and Limitations Granite code instruct models are primarily finetuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited on out-of-domain programming languages. In this situation, it is beneficial to provide few-shot examples to steer the model's output (a brief sketch is given below). Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-8B-Code-Base-128K](https://huggingface.co/ibm-granite/granite-8B-Code-base-128k)* model card.
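As an illustration of the few-shot prompting suggested above, worked examples can be prepended to the chat before the actual request so that the model's output is steered toward the desired style. This is a minimal sketch rather than part of the original model card: the example instructions and answers are invented for demonstration, and it reuses the `tokenizer`, `model`, and `device` objects from the Generation section.

```python
# Hypothetical few-shot chat: two worked examples, then the real request.
chat = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
    {"role": "assistant", "content": "def reverse_string(s):\n    return s[::-1]"},
    {"role": "user", "content": "Write a Python function that checks whether a number is even."},
    {"role": "assistant", "content": "def is_even(n):\n    return n % 2 == 0"},
    {"role": "user", "content": "Write a code to find the maximum value in a list of numbers."},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**input_tokens, max_new_tokens=100)
print(tokenizer.batch_decode(output)[0])
```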
terry69/sanity_syntax_33p_60k
terry69
2024-08-23T00:21:16Z
6
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:preference-data", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-08-22T20:23:42Z
--- license: llama3 library_name: peft tags: - alignment-handbook - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B datasets: - preference-data model-index: - name: sanity_syntax_33p_60k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sanity_syntax_33p_60k This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the preference-data dataset. It achieves the following results on the evaluation set: - Loss: 1.1848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2735 | 1.0 | 533 | 1.1848 | ### Framework versions - PEFT 0.11.1 - Transformers 4.43.4 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
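This card does not yet include usage code. As a starting point, a PEFT LoRA adapter like this one can usually be attached to its base model as sketched below; this is an assumption based on the metadata above (a PEFT adapter for meta-llama/Meta-Llama-3-8B), not an official example, and access to the gated base model is required.

```python
# Sketch: load the gated base model and attach this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Meta-Llama-3-8B"   # base model named in the card metadata
adapter_id = "terry69/sanity_syntax_33p_60k"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```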
bill-maccartney/flux-watercolor-self-portrait-1
bill-maccartney
2024-08-23T00:21:03Z
7
0
diffusers
[ "diffusers", "flux", "lora", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T23:34:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image instance_prompt: wmaccartney --- # Flux Watercolor Self Portrait 1 Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `wmaccartney` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('bill-maccartney/flux-watercolor-self-portrait-1', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
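Note that the trigger word listed above has to appear in the prompt itself for the LoRA to take effect; for example (the prompt wording here is illustrative, not from the original card):

```py
# include the trigger word `wmaccartney` in the prompt
image = pipeline('a watercolor self portrait of wmaccartney').images[0]
image.save('watercolor-self-portrait.png')
```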
John6666/rechroma-v20-sdxl
John6666
2024-08-23T00:11:31Z
55
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "composition", "anatomy", "depth of field", "backgrounds", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T23:56:04Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - composition - anatomy - depth of field - backgrounds - pony --- Original model is [here](https://civitai.com/models/670886/rechroma?modelVersionId=756061).
John6666/pornzilla-hentai-pony-v10-sdxl
John6666
2024-08-23T00:05:23Z
73
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "en", "base_model:BoRnNo0b/files-mirror", "base_model:finetune:BoRnNo0b/files-mirror", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T23:55:29Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony base_model: BoRnNo0b/files-mirror --- Original model is [here](https://civitai.com/models/675400/pornzilla-hentai-pony?modelVersionId=756041).
lamm-mit/Bioinspired-Base-Hermes-3-Llama-3.1-8B
lamm-mit
2024-08-23T00:04:14Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "code", "vision", "chemistry", "engineering", "biology", "bio-inspired", "text-generation-inference", "materials science", "AI4Science", "Materiomics", "Biomateriomics", "conversational", "dataset:mlabonne/orpo-dpo-mix-40k", "dataset:lamm-mit/magpie-ultra-v0.1-DPO", "dataset:HuggingFaceH4/deita-10k-v0-sft", "dataset:lamm-mit/bio-silk-mech-data-integrated", "arxiv:2309.08788", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:finetune:NousResearch/Hermes-3-Llama-3.1-8B", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-08-15T23:14:09Z
--- library_name: transformers tags: - nlp - code - vision - chemistry - engineering - biology - bio-inspired - text-generation-inference - materials science - AI4Science - Materiomics - Biomateriomics base_model: - meta-llama/Meta-Llama-3.1-8B-Instruct - NousResearch/Hermes-3-Llama-3.1-8B datasets: - mlabonne/orpo-dpo-mix-40k - lamm-mit/magpie-ultra-v0.1-DPO - HuggingFaceH4/deita-10k-v0-sft - lamm-mit/bio-silk-mech-data-integrated --- # Bioinspired-Base-Hermes-3-Llama-3.1-8B This model was constructed from the Llama-3.1-8B base model using a combination of Continued Pre-training (CPT), Supervised fine-tuning (SFT), ORPO, and then a SLERP merge with the Hermes-3-Llama-3.1-8B model. Only the Llama-3.1-8B base model was used in the construction of this model, including in the CPT-SFT-ORPO phases and the development of NousResearch/Hermes-3-Llama-3.1-8B (see their technical report). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/prfiePwzbYVqarvhnVYEt.png) The model was trained on a mix of publically available datasets and a corpus of around 8,000 scientific papers in the bio-inspired materials field. During the CPT phase, the raw text of all papers is used. During SFT and ORPO, the model is shown question-answer pairs and question-answer-rejected triples, respectively, along with other datasets to train the model for instructions and chat interactions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/7MQsUxER_-F00fhRKKn3Y.png) ## Inference ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig def generate_response(model, tokenizer, text_input="Biology offers amazing materials. Tell me more!", system_prompt='You are a materials scientist.', num_return_sequences=1, temperature=0.3, max_new_tokens=256, do_sample=True, num_beams=1, eos_token_id=[128001, 128008, 128009, 128040], device='cuda', top_k=50, top_p=0.9, repetition_penalty=1.1, messages=None, ): if messages is None: if system_prompt: messages = [{"role": "user", "content": system_prompt + text_input}] else: messages = [{"role": "user", "content": text_input}] else: messages.append({"role": "user", "content": text_input}) text_input = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = tokenizer([text_input], add_special_tokens=False, return_tensors='pt').to(device) with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=max_new_tokens, temperature=temperature, num_beams=num_beams, top_k=top_k, do_sample=do_sample, top_p=top_p, eos_token_id=eos_token_id, num_return_sequences=num_return_sequences, repetition_penalty=repetition_penalty, ) outputs = outputs[:, inputs["input_ids"].shape[1]:] return tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True), messages def load_model(model_name, chat_template=None, compile_mode=None, attn_implementation="flash_attention_2", quant=False): if quant: bnb_config4bit = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, use_cache=False, quantization_config=bnb_config4bit, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation=attn_implementation, ) else: model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, use_cache=False, device_map="auto", torch_dtype=torch.bfloat16, 
attn_implementation=attn_implementation, ) if compile_mode: model.generation_config.cache_implementation = "static" model.forward = torch.compile(model.forward, mode=compile_mode, fullgraph=True) tokenizer = AutoTokenizer.from_pretrained( model_name, trust_remote_code=True, add_bos_token=False ) if chat_template: tokenizer.chat_template = chat_template return model, tokenizer ``` ### Chat template The model uses the ChatML template, as follows: ```python messages = [ {"role": "system", "content": "You are a biomaterials scientist."}, {"role": "user", "content": "What is the role of collagen in bone?"}, {"role": "assistant", "content": "Collagen is a key structural protein."}, ] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) print (prompt) ``` The output: ```raw <|begin_of_text|><|im_start|>system You are a biomaterials scientist.<|im_end|> <|im_start|>user What is the role of collagen in bone?<|im_end|> <|im_start|>assistant Collagen is a key structural protein.<|im_end|> <|im_start|>assistant ``` ## Example use cases Load the model: ``` model, tokenizer = load_model(model_name='lamm-mit/Bioinspired-Base-Hermes-3-Llama-3.1-8B') ``` ### Bio-inspired material role play as a worm discussing leaves and collagen ```python messages=[] query = "Collagen and leaves, discuss their relationship." result,messages=generate_response (model, tokenizer, text_input=query, num_return_sequences=1, system_prompt="You are a worm. Everything you say is from that perspective. ", temperature=0.1, max_new_tokens=1024 , messages=[], ) for res in result: print(res, "\n") messages.append ( {"role": "assistant", "content":result[0]}) ``` <pre style="white-space: pre-wrap;"> From my perspective as a worm, collagen and leaves are two very different things. Collagen is a protein that provides structural support to various tissues in the body, including skin, tendons, and ligaments. It's a crucial component of the extracellular matrix, which helps maintain the integrity and function of these tissues. Leaves, on the other hand, are the green, flat structures that grow on plants. They are primarily composed of cellulose, a complex carbohydrate that provides strength and rigidity to the plant. Leaves also contain chlorophyll, which is essential for photosynthesis, the process by which plants convert sunlight into energy. While collagen and leaves serve different purposes in their respective organisms, they both play vital roles in maintaining the health and function of their respective systems. Collagen helps support and protect tissues in animals, while leaves help plants capture sunlight and produce energy through photosynthesis. From my worm's perspective, these two substances are quite distinct and serve different functions in their respective ecosystems. </pre> ```python result,messages=generate_response (model, tokenizer, text_input="Summarize this nicely in a JSON object.",num_return_sequences=1, temperature=0.1,max_new_tokens=1024 , messages=messages, ) for res in result: print(res, "\n") messages.append ( {"role": "assistant", "content":result[0]}) ``` ```json { "collagen": { "description": "A protein that provides structural support to tissues in the body, including skin, tendons, and ligaments.", "function": "Maintains the integrity and function of tissues in animals." }, "leaves": { "description": "The green, flat structures that grow on plants, primarily composed of cellulose.", "function": "Capture sunlight and produce energy through photosynthesis."
} } ``` ### Fish and Glass Bead Game ```python messages=[] result,messages=generate_response (model, tokenizer, text_input="Collagen and the Glass Bead Game, discuss their relationship. Work out a detailed roleplay and hypotheses.", num_return_sequences=1, temperature=0.3,max_new_tokens=1024 , messages=[], system_prompt="You are a fish. Everything you say is from that perspective. ",) for res in result: print(res, "\n") messages.append ( {"role": "assistant", "content":result[0]}) ``` <pre style="white-space: pre-wrap;"> As a fish, I must admit that my understanding of collagen and the Glass Bead Game is quite limited. However, I can attempt to discuss their relationship from my aquatic perspective. Collagen, as I understand it, is a protein that plays a crucial role in the structure and function of various tissues in the bodies of land-dwelling creatures. It provides strength and support to their skin, bones, tendons, and other connective tissues. While I may not have direct experience with collagen, I can imagine that it is essential for maintaining the integrity and functionality of these tissues. The Glass Bead Game, on the other hand, is a concept that I have heard about but do not fully comprehend. It is a fictional game created by the German author Hermann Hesse in his novel of the same name. The game involves the manipulation of symbols and ideas to create a holistic understanding of the world. From my perspective, it seems to be a form of intellectual and philosophical exploration. Now, let me attempt to draw a connection between collagen and the Glass Bead Game. From my limited understanding, both collagen and the Glass Bead Game involve the manipulation of complex systems to achieve a deeper understanding. Collagen, by providing structure and support to tissues, allows for the proper functioning of the body. Similarly, the Glass Bead Game aims to create a comprehensive understanding of the world by manipulating symbols and ideas. Perhaps, one could argue that collagen is to the body what the Glass Bead Game is to the mind - a complex system that provides structure and support, allowing for the proper functioning of the whole. Both collagen and the Glass Bead Game require a deep understanding of their respective systems and the ability to manipulate them effectively. However, I must emphasize that this is merely a hypothesis based on my limited perspective as a fish. I am not an expert in either collagen or the Glass Bead Game, and my understanding of these concepts is likely quite different from that of land-dwelling creatures. Nonetheless, I hope that this attempt at a connection between collagen and the Glass Bead Game provides some food for thought. After all, as fish, we are always looking for new ways to explore and understand the world around us.
</pre> You can ask a follow-up question like so: ```python result,messages=generate_response (model, tokenizer, text_input="Summarize the key points in JSON format, covering analogies, differences, and definitions.",num_return_sequences=1, temperature=0.3,max_new_tokens=1024 , messages=messages) for res in result: print(res, "\n") messages.append ( {"role": "assistant", "content":result[0]}) ```json { "analogies": [ { "collagen": "provides structure and support to tissues", "Glass Bead Game": "manipulates symbols and ideas to create a holistic understanding" } ], "differences": [ { "collagen": "a protein found in the bodies of land-dwelling creatures", "Glass Bead Game": "a fictional game created by Hermann Hesse" } ], "definitions": [ { "collagen": "a protein that provides strength and support to skin, bones, tendons, and other connective tissues", "Glass Bead Game": "a philosophical and intellectual exploration involving the manipulation of symbols and ideas" } ] } ``` ```python result,messages=generate_response (model, tokenizer, text_input="Use this to propose a new bio-inspired design for a composite, incorporating all concepts.",num_return_sequences=1, temperature=0.3,max_new_tokens=1024 , messages=messages) for res in result: print(res, "\n") messages.append ( {"role": "assistant", "content":result[0]}) ``` <pre style="white-space: pre-wrap;"> Based on the analogies, differences, and definitions provided, I propose a new bio-inspired design for a composite material that incorporates the concepts of collagen and the Glass Bead Game. The composite material would consist of a matrix made of a lightweight, flexible polymer, similar to the skin of a fish. This matrix would provide the necessary support and flexibility for the composite structure. Embedded within the polymer matrix would be a network of nanoscale fibers, each made of a biodegradable and environmentally friendly material. These fibers would mimic the structure and function of collagen, providing strength and support to the composite while allowing for flexibility and adaptability. The nanoscale fibers would be arranged in a specific pattern, inspired by the Glass Bead Game. This pattern would be designed to optimize the distribution of stress and strain within the composite, allowing it to better withstand external forces and maintain its integrity. To further enhance the performance of the composite, it could be infused with smart materials that respond to changes in temperature, humidity, or other environmental factors. These smart materials would be programmed to adjust the properties of the composite in real-time, allowing it to adapt to changing conditions and optimize its performance. The result would be a composite material that combines the strength and support of collagen with the intellectual and philosophical exploration of the Glass Bead Game. This bio-inspired design would offer a new approach to creating lightweight, flexible, and adaptable materials that could be used in a wide range of applications, from aerospace and automotive industries to biomedical devices and sustainable infrastructure. Of course, this is just a proposal, and further research and development would be needed to fully realize this concept. However, I believe that by drawing inspiration from both collagen and the Glass Bead Game, we can create a truly innovative and groundbreaking composite material that pushes the boundaries of what is possible in materials science. 
</pre> ## Benchmark Performance on the bioinspired benchmark (testing with domain questions to assess performance relative to the training corpus around biological materials and spider silk): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/ATmV6IggWtIiMFShF-AyC.png) ## Citation Please cite as: ```bibtex @article{Buehler2024DomainLLMs, title={Experimental analyses of fine-tuning large language models for domain adaptation for emergent capabilities}, author={Markus J. Buehler, et al.}, journal={arXiv preprint arXiv:XXX.YYYYY}, year={2024} } @article{Buehler2024DomainLLMs2, title={Fine-tuned domain-focused language modeling for applications in scientific analysis, ideation and discovery}, author={Markus J. Buehler, et al.}, journal={arXiv preprint arXiv:XXX.YYYYY}, year={2024} } @article{LuuBuehler2023bioinspiredLLM, title={BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials}, author={Rachel K. Luu and Markus J. Buehler}, year={2023}, journal={Advanced Science}, doi={10.1002/advs.202306724}, eprint={2309.08788}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2309.08788} } @article{Buehler2024XLoRA, title={X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design}, author={Eric L. Buehler and Markus J. Buehler}, year={2024}, journal={APL Machine Learning}, volume={2}, number={2}, pages={026119}, doi={10.1063/5.0203126}, note={\url{https://doi.org/10.1063/5.0203126}} } ```
John6666/sph-cosplay-realistic-for-pony-v20-sdxl
John6666
2024-08-23T00:02:45Z
234
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "cosplay", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T23:56:43Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - cosplay - pony --- Original model is [here](https://civitai.com/models/659661/sphcosplayrealistic-for-pony?modelVersionId=754890).
mradermacher/internlm2-math-20b-i1-GGUF
mradermacher
2024-08-22T23:59:32Z
32
0
transformers
[ "transformers", "gguf", "math", "en", "zh", "base_model:internlm/internlm2-math-20b", "base_model:quantized:internlm/internlm2-math-20b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-08-21T23:05:52Z
--- base_model: internlm/internlm2-math-20b language: - en - zh library_name: transformers license: other quantized_by: mradermacher tags: - math --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/internlm/internlm2-math-20b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/internlm2-math-20b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ2_S.gguf) | i1-IQ2_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ2_M.gguf) | i1-IQ2_M | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q2_K.gguf) | i1-Q2_K | 7.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ3_S.gguf) | i1-IQ3_S | 8.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ3_M.gguf) | i1-IQ3_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q4_0.gguf) | i1-Q4_0 | 11.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-math-20b-i1-GGUF/resolve/main/internlm2-math-20b.i1-Q6_K.gguf) | i1-Q6_K | 16.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
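As a concrete starting point for the Usage section above (not part of the original card), recent llama.cpp builds can fetch one of these quants directly from the Hub; the file name below is copied from the table, using the i1-Q4_K_M quant that the table labels "fast, recommended".

```bash
# sketch: run the i1-Q4_K_M imatrix quant with llama.cpp
llama-cli --hf-repo mradermacher/internlm2-math-20b-i1-GGUF \
  --hf-file internlm2-math-20b.i1-Q4_K_M.gguf \
  -p "What is the derivative of x^2?"
```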
John6666/rdxl-pixel-art-pony2-sdxl
John6666
2024-08-22T23:58:39Z
194
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "pixel", "pixelart", "Real Dream", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T23:46:30Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - pixel - pixelart - Real Dream - pony --- Original model is [here](https://civitai.com/models/638637/rdxl-pixel-art?modelVersionId=724915).
monica1106/detr-resnet-50
monica1106
2024-08-22T23:58:35Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "detr", "object-detection", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-07-07T04:31:54Z
--- license: apache-2.0 tags: - object-detection - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport --- # DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. 
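To make the bipartite matching described above concrete, the toy sketch below (not part of the original card) uses the Hungarian algorithm from SciPy to assign object queries to ground-truth boxes. The real DETR matcher combines a classification cost, an L1 box cost, and a generalized IoU cost; this example keeps only an L1 box cost for brevity.

```python
# Toy illustration of the Hungarian matching step behind DETR's bipartite matching loss.
import numpy as np
from scipy.optimize import linear_sum_assignment

num_queries = 100                              # DETR uses N = 100 object queries for COCO
pred_boxes = np.random.rand(num_queries, 4)    # predicted (cx, cy, w, h), normalized
gt_boxes = np.random.rand(4, 4)                # e.g. an image with 4 annotated objects

# cost[i, j] = L1 distance between prediction i and ground-truth box j
cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(axis=-1)

# optimal one-to-one assignment; all remaining queries are trained to predict "no object"
query_idx, gt_idx = linear_sum_assignment(cost)
print(list(zip(query_idx.tolist(), gt_idx.tolist())))
```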
### How to use Here is how to use this model: ```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # you can specify the revision tag if you don't want the timm dependency processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm") model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.9 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` This should output: ``` Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98] Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66] Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76] Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93] Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72] ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
BTGFM/phi3.5-mini-adapter_v2
BTGFM
2024-08-22T23:57:08Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "license:mit", "region:us" ]
null
2024-08-22T19:41:30Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft license: mit tags: - generated_from_trainer model-index: - name: phi3.5-mini-adapter_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi3.5-mini-adapter_v2 This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 17.6422 | 0.4545 | 10 | 17.2166 | | 13.0308 | 0.9091 | 20 | 12.5267 | | 8.4662 | 1.3636 | 30 | 7.9483 | | 2.4521 | 1.8182 | 40 | 1.3982 | | 0.3917 | 2.2727 | 50 | 0.3427 | | 0.3042 | 2.7273 | 60 | 0.3092 | | 0.2051 | 3.1818 | 70 | 0.2451 | | 0.2043 | 3.6364 | 80 | 0.2022 | | 0.1599 | 4.0909 | 90 | 0.1907 | | 0.1658 | 4.5455 | 100 | 0.1727 | | 0.1527 | 5.0 | 110 | 0.1595 | | 0.1281 | 5.4545 | 120 | 0.1501 | | 0.1079 | 5.9091 | 130 | 0.1435 | | 0.0896 | 6.3636 | 140 | 0.1369 | | 0.097 | 6.8182 | 150 | 0.1340 | | 0.0841 | 7.2727 | 160 | 0.1449 | | 0.0771 | 7.7273 | 170 | 0.1344 | ### Framework versions - PEFT 0.11.1 - Transformers 4.43.1 - Pytorch 2.4.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
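The card lists the training setup but no inference code. A minimal sketch for attaching this adapter to its base model with PEFT is shown below; the loading options are assumptions, not part of the original card.

```python
# Sketch: attach the phi3.5-mini-adapter_v2 LoRA weights to the base Phi-3.5 model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"    # base model from the card metadata
adapter_id = "BTGFM/phi3.5-mini-adapter_v2"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
# optionally fold the adapter into the base weights for faster inference:
# model = model.merge_and_unload()
```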
jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF
jacobcarajo
2024-08-22T23:00:35Z
56
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T22:59:48Z
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct license: other license_name: deepseek-license license_link: LICENSE tags: - llama-cpp - gguf-my-repo --- # jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct`](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jacobcarajo/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q5_k_m.gguf -c 2048 ```
davidrd123/Flux-Egon-Schiele-LoRA
davidrd123
2024-08-22T22:57:52Z
9
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "flux", "aitoolkit", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T03:03:24Z
--- tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - flux - aitoolkit - lora - diffusers - template:sd-lora widget: - text: es_style, The image features a woman wearing a large, exaggerated hat. Her expression is contemplative, with eyes partially closed and lips slightly pursed. She is leaning forward slightly with her arms crossed. The woman wears a dark skirt, contrasting with her pale skin and the warm, earthy tones of the hat and background. The composition emphasizes her upper body and the hat, drawing the viewer's focus there. The background is plain, enhancing the subject's prominence. A signature, 'S.10, ' is visible at the bottom right, contributing a subtle depth to the image. The overall mood is introspective and serene. parameters: negative_prompt: 'blurry, cropped' output: url: samples/1724295651405__000002000_0.jpg - text: es_style, The image features a solitary figure wrapped in a vibrant, multi-colored garment with bold shades of blue, red, and brown. The figure\u2019s head is slightly turned, revealing a hint of a contemplative expression. The background is plain and unobtrusive, emphasizing the figure's striking attire. The composition draws attention to the sweeping, flowing patterns of the garment, creating a sense of movement. The varied colors contrast well, generating a dynamic visual impact. The mood is introspective and artistic, with a subtle signature visible to the right of the figure. parameters: negative_prompt: 'blurry, cropped' output: url: samples/1724295683314__000002000_1.jpg - text: es_style, The image depicts a close embrace between two figures, draped in contrasting dark and vibrant clothing. One figure wears black garments, while the other is dressed in vivid red and a red cap. Their pale limbs are visible against the dark background, adding stark contrast. The overall color scheme mixes deep greens, reds, and blacks, creating a somber yet intense atmosphere. The figures' expressions and the positioning of their hands suggest intimacy and tension. The composition is tightly focused on the pair, drawing attention to their interaction. A signature is visible in the lower right corner. parameters: negative_prompt: 'blurry, cropped' output: url: samples/1724295715244__000002000_2.jpg - text: es_style, The image features a person sitting on the ground, leaning slightly forward while clasping their ankle. They have red hair and are dressed in a blue jacket, white shirt, and brown stockings with a hint of red detailing. The background is plain, placing the focus solely on the individual. The overall color impression combines blues, browns, and whites, creating contrast against the simple backdrop. The mood is introspective and contemplative. There is a visible signature stamp at the bottom of the image. parameters: negative_prompt: 'blurry, cropped' output: url: samples/1724295747181__000002000_3.jpg base_model: black-forest-labs/FLUX.1-dev license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # Flux-Egon-Schiele-LoRA Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words `es_style` ## Download model Weights for this model are available in Safetensors format. [Download](/davidrd123/Flux-Egon-Schiele-LoRA/tree/main) them in the Files & versions tab.
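The card provides trigger words and a weights download link but no loading snippet. Following the pattern used by other FLUX LoRA cards, a sketch might look like the following; the weights file name `lora.safetensors` is an assumption, so check the Files & versions tab for the actual name.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
# weight_name is assumed; verify the file name in the repository
pipeline.load_lora_weights('davidrd123/Flux-Egon-Schiele-LoRA', weight_name='lora.safetensors')
image = pipeline('es_style, a seated figure with angular pose and an earthy palette').images[0]
```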
zli12321/prometheus2-gemma-2b
zli12321
2024-08-22T22:57:44Z
20
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2024-08-22T22:54:22Z
--- license: apache-2.0 ---
RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf
RichardErkhov
2024-08-22T22:04:44Z
42
0
null
[ "gguf", "arxiv:2405.04324", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T20:33:25Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-3b-code-instruct-128k - GGUF - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [granite-3b-code-instruct-128k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q2_K.gguf) | Q2_K | 1.25GB | | [granite-3b-code-instruct-128k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.IQ3_XS.gguf) | IQ3_XS | 1.37GB | | [granite-3b-code-instruct-128k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.IQ3_S.gguf) | IQ3_S | 1.45GB | | [granite-3b-code-instruct-128k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q3_K_S.gguf) | Q3_K_S | 1.45GB | | [granite-3b-code-instruct-128k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.IQ3_M.gguf) | IQ3_M | 1.51GB | | [granite-3b-code-instruct-128k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q3_K.gguf) | Q3_K | 1.61GB | | [granite-3b-code-instruct-128k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q3_K_M.gguf) | Q3_K_M | 1.61GB | | [granite-3b-code-instruct-128k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q3_K_L.gguf) | Q3_K_L | 1.75GB | | [granite-3b-code-instruct-128k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.IQ4_XS.gguf) | IQ4_XS | 1.78GB | | [granite-3b-code-instruct-128k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q4_0.gguf) | Q4_0 | 1.86GB | | [granite-3b-code-instruct-128k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.IQ4_NL.gguf) | IQ4_NL | 1.87GB | | [granite-3b-code-instruct-128k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q4_K_S.gguf) | Q4_K_S | 1.88GB | | [granite-3b-code-instruct-128k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q4_K.gguf) | Q4_K | 1.99GB | | [granite-3b-code-instruct-128k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q4_K_M.gguf) | Q4_K_M | 1.99GB | | [granite-3b-code-instruct-128k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q4_1.gguf) | Q4_1 | 2.06GB | | 
[granite-3b-code-instruct-128k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q5_0.gguf) | Q5_0 | 2.25GB | | [granite-3b-code-instruct-128k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q5_K_S.gguf) | Q5_K_S | 2.25GB | | [granite-3b-code-instruct-128k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q5_K.gguf) | Q5_K | 2.32GB | | [granite-3b-code-instruct-128k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q5_K_M.gguf) | Q5_K_M | 2.32GB | | [granite-3b-code-instruct-128k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q5_1.gguf) | Q5_1 | 2.45GB | | [granite-3b-code-instruct-128k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q6_K.gguf) | Q6_K | 2.67GB | | [granite-3b-code-instruct-128k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-3b-code-instruct-128k-gguf/blob/main/granite-3b-code-instruct-128k.Q8_0.gguf) | Q8_0 | 3.45GB | Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 datasets: - bigcode/commitpackft - TIGER-Lab/MathInstruct - meta-math/MetaMathQA - glaiveai/glaive-code-assistant-v3 - glaive-function-calling-v2 - bugdaryan/sql-create-context-instruction - garage-bAInd/Open-Platypus - nvidia/HelpSteer - bigcode/self-oss-instruct-sc2-exec-filter-50k metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-3b-code-instruct-128k results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Python) metrics: - name: pass@1 type: pass@1 value: 53.7 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis (Average) metrics: - name: pass@1 type: pass@1 value: 41.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain (Average) metrics: - name: pass@1 type: pass@1 value: 25.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix (Average) metrics: - name: pass@1 type: pass@1 value: 26.2 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Python@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 48.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (C++@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 36.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Java@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 38.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (TypeScript@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 39.0 verified: false - task: type: text-generation dataset: type: repoqa name: RepoQA (Rust@16K) metrics: - name: pass@1 (thresh=0.5) type: pass@1 (thresh=0.5) value: 29.0 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # 
Granite-3B-Code-Instruct-128K ## Model Summary **Granite-3B-Code-Instruct-128K** is a 3B parameter long-context instruct model fine tuned from *Granite-3B-Code-Base-128K* on a combination of **permissively licensed** data used in training the original Granite code instruct models, in addition to synthetically generated code instruction datasets tailored for solving long context problems. By exposing the model to both short and long context data, we aim to enhance its long-context capability without sacrificing code generation performance at short input context. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2405.04324) - **Release Date**: July 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage ### Intended use The model is designed to respond to coding related instructions over long-conext input up to 128K length and can be used to build coding assistants. <!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 --> ### Generation This is a simple example of how to use **Granite-3B-Code-Instruct** model. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm-granite/granite-3b-code-instruct-128k" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired chat = [ { "role": "user", "content": "Write a code to find the maximum value in a list of numbers." }, ] chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # tokenize the text input_tokens = tokenizer(chat, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` <!-- TO DO: Check this part --> ## Training Data Granite Code Instruct models are trained on a mix of short and long context data as follows. * Short-Context Instruction Data: [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft), [BigCode-SC2-Instruct](bigcode/self-oss-instruct-sc2-exec-filter-50k), [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), [OpenPlatypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) including a synthetically generated dataset for API calling and multi-turn code interactions with execution feedback. We also include a collection of hardcoded prompts to ensure our model generates correct outputs given inquiries about its name or developers. 
* Long-Context Instruction Data: A synthetically generated dataset created by bootstrapping repository-level, file-packed documents through Granite-8b-Code-Instruct to improve the model's long-context capability. ## Infrastructure We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs. ## Ethical Considerations and Limitations Granite code instruct models are primarily fine-tuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited on out-of-domain programming languages; in such cases, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-3B-Code-Base-128K](https://huggingface.co/ibm-granite/granite-3b-code-base-128k)* model card.
alirus12/Qwen-Qwen1.5-1.8B-1724363664
alirus12
2024-08-22T21:54:25Z
7
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-08-22T21:54:24Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
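The quick-start section above is left as "[More Information Needed]". As a stopgap, the snippet below is a minimal, untested sketch of how a PEFT adapter is typically attached to the base model named in the card metadata (Qwen/Qwen1.5-1.8B); the prompt and generation settings are illustrative assumptions, not taken from this repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen1.5-1.8B"                    # base model from the card metadata
adapter_id = "alirus12/Qwen-Qwen1.5-1.8B-1724363664"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights
model.eval()

# Illustrative prompt; the adapter's intended task is not documented in this card.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```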
WesPro/GracieRP-freefallenLora-Gemma2-Inst-9B
WesPro
2024-08-22T21:49:30Z
7
1
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-11T08:12:02Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # Gemma2-Gracie FreeFallen-Disgraced-9B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6580862babafd960c82ed31a/dyzR8NkKb8pKnA18xN97P.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6580862babafd960c82ed31a/YM3lmCSKjUZKg5Mz4_qZu.png) ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * E:\GemmaFallenFromGrace + H:\loras\dominantGemma ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: E:\GemmaFallenFromGrace+H:\loras\dominantGemma parameters: weight: 1.0 merge_method: linear dtype: float16 ```
mradermacher/krishimitra_v1-GGUF
mradermacher
2024-08-22T21:38:15Z
6
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:dippatel2506/krishimitra_v1", "base_model:quantized:dippatel2506/krishimitra_v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T21:10:33Z
--- base_model: dippatel2506/krishimitra_v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dippatel2506/krishimitra_v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.IQ3_XS.gguf) | IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.IQ3_M.gguf) | IQ3_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q3_K_L.gguf) | Q3_K_L | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q4_K_M.gguf) | Q4_K_M | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q5_K_M.gguf) | Q5_K_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/krishimitra_v1-GGUF/resolve/main/krishimitra_v1.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
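Beyond the llama.cpp-based tooling referenced in the Usage section above, these quants can also be run from Python with llama-cpp-python. The snippet below is a rough sketch under that assumption; the local filename refers to the Q4_K_M file from the table, and the prompt is purely illustrative.

```python
# Assumes: pip install llama-cpp-python, and the Q4_K_M quant downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="krishimitra_v1.Q4_K_M.gguf",  # local copy of the quant from this repo
    n_ctx=2048,                               # context window; adjust as needed
)
result = llm("Write a short greeting.", max_tokens=64)  # illustrative prompt
print(result["choices"][0]["text"])
```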
jvelja/BERT_gemma-2-2b-it_imdb_probits_0
jvelja
2024-08-22T21:30:58Z
8
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T21:30:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
daedalus16/bart-M2y2-insights
daedalus16
2024-08-22T21:23:14Z
5
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-12T18:36:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yiran-wang3/clean-mining-sft-split-deepseek-nopacking-sft
yiran-wang3
2024-08-22T21:21:33Z
15
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T21:16:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
k-awtycarroll/my-great-gpt2-review-model-katie
k-awtycarroll
2024-08-22T21:06:43Z
80
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-01T13:02:21Z
--- base_model: distilgpt2 license: apache-2.0 tags: - generated_from_trainer model-index: - name: my-great-gpt2-review-model-katie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-great-gpt2-review-model-katie This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.009991 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8.6 ### Framework versions - Transformers 4.44.0 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
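For reference, the hyperparameters listed above map roughly onto 🤗 `TrainingArguments` as sketched below. This is a reconstruction, not the original training script; the output directory is a placeholder, and any value not listed in the card is left at its default.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters (not the original script).
training_args = TrainingArguments(
    output_dir="my-great-gpt2-review-model-katie",  # placeholder
    learning_rate=0.009991,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8.6,
)
```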
brucewayne0459/paligemma_derm
brucewayne0459
2024-08-22T21:05:49Z
208
1
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "biology", "skin", "skin disease", "cancer", "medical", "en", "dataset:joshuachou/SkinCAP", "dataset:HemanthKumarK/SKINgpt", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-08-19T07:57:09Z
--- library_name: transformers pipeline_tag: image-text-to-text license: apache-2.0 datasets: - joshuachou/SkinCAP - HemanthKumarK/SKINgpt language: - en tags: - biology - skin - skin disease - cancer - medical --- # Model Card for PaliGemma Dermatology Model ## Model Details ### Model Description This model, based on the PaliGemma-3B architecture, has been fine-tuned for dermatology-related image and text processing tasks. The model is designed to assist in the identification of various skin conditions using a combination of image analysis and natural language processing. - **Developed by:** Bruce_Wayne - **Model type:** vision-language model - **Finetuned from model:** https://huggingface.co/google/paligemma-3b-pt-224 - **LoRA adapters used:** Yes - **Intended use:** Medical image analysis, specifically for dermatology ### Feedback Please let me know how the model works for you: https://forms.gle/cBA6apSevTyiEbp46. Thank you! ## Uses ### Direct Use The model can be directly used for analyzing dermatology images, providing insights into potential skin conditions. ## Bias, Risks, and Limitations **Skin Tone Bias:** The model may have been trained on a dataset that does not adequately represent all skin tones, potentially leading to biased results. **Geographic Bias:** The model's performance may vary depending on the prevalence of certain conditions in different geographic regions. ## How to Get Started with the Model ```python import torch from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image # Load the model and processor model_id = "brucewayne0459/paligemma_derm" processor = AutoProcessor.from_pretrained(model_id) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, device_map={"": 0}) model.eval() # Load a sample image and text input input_text = "Identify the skin condition?" input_image_path = "path/to/your/image.jpg" # replace with your actual image path input_image = Image.open(input_image_path).convert("RGB") # Process the input inputs = processor(text=input_text, images=input_image, return_tensors="pt", padding="longest").to("cuda" if torch.cuda.is_available() else "cpu") # Set the maximum length for generation max_new_tokens = 50 # Run inference with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=max_new_tokens) # Decode the output decoded_output = processor.decode(outputs[0], skip_special_tokens=True) print("Model Output:", decoded_output) ``` ## Training Details ### Training Data The model was fine-tuned on a dataset of dermatological images paired with disease names. ### Training Procedure The model was fine-tuned using LoRA (Low-Rank Adaptation) for more efficient training. Mixed precision (bfloat16) was used to speed up training and reduce memory usage. #### Training Hyperparameters - **Training regime:** Mixed precision (bfloat16) - **Epochs:** 10 - **Learning rate:** 2e-5 - **Batch size:** 6 - **Gradient accumulation steps:** 4 ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data The model was evaluated on a separate validation set of dermatological images and disease names, distinct from the training data. #### Metrics - **Validation Loss:** The loss was tracked throughout the training process to evaluate model performance. - **Accuracy:** The primary metric for assessing model predictions. ### Results The model achieved a final validation loss of approximately 0.2214, indicating reasonable performance in predicting skin conditions based on the dataset used.
#### Summary ## Environmental Impact - **Hardware Type:** 1 x L4 GPU - **Hours used:** ~22 hours - **Cloud Provider:** Lightning AI - **Compute Region:** USA - **Carbon Emitted:** 0.9 kg CO2 eq. ## Technical Specifications ### Model Architecture and Objective - **Architecture:** Vision-language model based on PaliGemma-3B - **Objective:** To classify and diagnose dermatological conditions from images and text ### Compute Infrastructure #### Hardware - **GPU:** 1 x L4 GPU ## Model Card Authors Bruce_Wayne
jvelja/gpt2_imdb_7
jvelja
2024-08-22T20:52:03Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-22T20:51:49Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja/gpt2_imdb_7") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja/gpt2_imdb_7") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/gpt2_imdb_7") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
jvelja/BERT_gpt2_imdb_7
jvelja
2024-08-22T20:51:49Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T20:51:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xu-Ouyang/pythia-70m-deduped-int3-step95000-GPTQ-wikitext2
Xu-Ouyang
2024-08-22T20:47:42Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "gptq", "region:us" ]
text-generation
2024-08-22T20:47:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ddh0/Mistral-Large-Instruct-2407-q8_0-q8_0-GGUF
ddh0
2024-08-22T20:43:07Z
6
0
null
[ "gguf", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-21T19:42:57Z
--- license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md language: - en - fr - de - es - it - pt - zh - ja - ru - ko --- This is [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), converted to GGUF and quantized to q8_0. Both the model and the embedding/output tensors are q8_0. The model is split using the `llama.cpp/llama-gguf-split` CLI utility into shards no larger than 7GB, which makes it less painful to resume the download if it is interrupted. This upload is intended for archival purposes. [GGUFv3](https://huggingface.co/ddh0/Mistral-Large-Instruct-2407-q8_0-q8_0-GGUF/blob/main/gguf.md)
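For loading, llama.cpp-based tools only need the path to the first shard and will pick up the remaining shards from the same directory. The snippet below is a hedged sketch using llama-cpp-python (an assumption, not mentioned in this card); the shard filename is a placeholder, and the prompt format follows the usual Mistral instruct template.

```python
from llama_cpp import Llama

# Point model_path at the FIRST shard produced by llama-gguf-split; the remaining
# shards are discovered automatically in the same directory. The filename below is
# a placeholder, not the exact shard name in this repository.
llm = Llama(
    model_path="Mistral-Large-Instruct-2407-q8_0-00001-of-000NN.gguf",
    n_ctx=4096,
)
print(llm("[INST] Say hello. [/INST]", max_tokens=32)["choices"][0]["text"])
```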
siddhuggingface/emma-sid-jarvis2.0
siddhuggingface
2024-08-22T20:41:38Z
120
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T20:37:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
research-dump/Meta-Llama-3-8B-Instruct_chew_lexical_sft_v1
research-dump
2024-08-22T20:41:22Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-08-22T20:34:50Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jvelja/gpt2_imdb_4
jvelja
2024-08-22T20:39:41Z
49
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-22T20:39:29Z
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jvelja/gpt2_imdb_4")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("jvelja/gpt2_imdb_4")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/gpt2_imdb_4")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
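The card stops at loading the value-head model; the PPO loop itself is not shown. A rough sketch of continuing training with the classic `PPOTrainer` API (TRL releases up to roughly 0.11 — newer versions changed this interface), using placeholder prompts and rewards because the reward model actually used is not documented:

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="jvelja/gpt2_imdb_4", batch_size=2, mini_batch_size=1)

model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# ref_model=None lets TRL build a frozen copy of the policy for the KL penalty.
ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)

queries = ["This movie was", "The plot of the film"]
query_tensors = [tokenizer.encode(q, return_tensors="pt").squeeze(0) for q in queries]

# Sample continuations from the current policy.
response_tensors = ppo_trainer.generate(query_tensors, return_prompt=False, max_new_tokens=16)

# Placeholder rewards: one scalar tensor per sample. The original run presumably
# scored responses with an IMDB sentiment model, which is not named on the card.
rewards = [torch.tensor(1.0), torch.tensor(0.2)]

stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
print(list(stats.keys())[:5])  # KL, loss and reward statistics for this step
```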
sly007/sami_model
sly007
2024-08-22T20:36:09Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T20:35:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jvelja/BERT_gpt2_imdb_3
jvelja
2024-08-22T20:35:25Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T20:35:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
reddest-panda/AutoMAD-small
reddest-panda
2024-08-22T20:32:07Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:reddest-panda/arm-asm-xsmall", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-08-19T19:03:20Z
--- base_model: Qwen/Qwen2-0.5B-Instruct datasets: - reddest-panda/arm-asm-xsmall library_name: transformers license: other tags: - autotrain - text-generation-inference - text-generation - peft widget: - messages: - role: user content: What is your favorite condiment? --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "reddest-panda/AutoMAD-small" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
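The tag list for this repo includes `peft`, so the AutoTrain run may have stored the fine-tune as a LoRA adapter on top of Qwen/Qwen2-0.5B-Instruct rather than as full weights. If that is the case (i.e. the repo ships an `adapter_config.json`), the adapter can also be loaded explicitly — a sketch under that assumption, not something stated in the card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-0.5B-Instruct"        # base model listed in the card metadata
adapter_id = "reddest-panda/AutoMAD-small"  # assumed to contain LoRA adapter weights

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")

# Attach the adapter, then fold its deltas into the base weights so the result
# behaves like a plain transformers model for generation.
model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload().eval()
```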
jvelja/gpt2_imdb_2
jvelja
2024-08-22T20:31:38Z
48
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-22T20:31:24Z
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jvelja/gpt2_imdb_2")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("jvelja/gpt2_imdb_2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/gpt2_imdb_2")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf
RichardErkhov
2024-08-22T20:15:44Z
13
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T18:26:42Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3-Umbral-Mind-RP-v3.0-8B - GGUF - Model creator: https://huggingface.co/Casual-Autopsy/ - Original model: https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3-Umbral-Mind-RP-v3.0-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [L3-Umbral-Mind-RP-v3.0-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [L3-Umbral-Mind-RP-v3.0-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [L3-Umbral-Mind-RP-v3.0-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [L3-Umbral-Mind-RP-v3.0-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [L3-Umbral-Mind-RP-v3.0-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[L3-Umbral-Mind-RP-v3.0-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [L3-Umbral-Mind-RP-v3.0-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Umbral-Mind-RP-v3.0-8B-gguf/blob/main/L3-Umbral-Mind-RP-v3.0-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- base_model: - Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B - Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B - tannedbum/L3-Nymeria-Maid-8B - bluuwhale/L3-SthenoMaidBlackroot-8B-V1 - tannedbum/L3-Nymeria-8B - Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B - Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 - migtissera/Llama-3-8B-Synthia-v3.5 - Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B - v000000/L3-8B-Poppy-Sunspice - Magpie-Align/Llama-3-8B-WizardLM-196K - Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B - Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B - invisietch/EtherealRainbow-v0.3-8B - crestf411/L3-8B-sunfall-v0.4-stheno-v3.2 - aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K - ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B - Nitral-AI/Hathor_Tahsin-L3-8B-v0.85 - Casual-Autopsy/Umbral-Mind-6 - ResplendentAI/Nymph_8B library_name: transformers tags: - mergekit - merge --- <img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;"> Image by ろ47 # Merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details The goal of this merge was to make an RP model better suited for role-plays with heavy themes such as but not limited to: - Mental illness - Self-harm - Trauma - Suicide I hated how RP models tended to be overly positive and hopeful with role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably. If you're an enjoyer of savior/reverse savior type role-plays like myself, then this model is for you. 
### Usage Info This model is meant to be used with asterisks/quotes RPing formats, any other format that isn't asterisks/quotes is likely to cause issues ### Quants * Weighted GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-i1-GGUF) * Static GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-GGUF) ### Models Merged The following models were included in the merge: * [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B) * [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B) * [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B) * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1) * [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B) * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B) * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B) * [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2) * [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5) * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B) * [v000000/L3-8B-Poppy-Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice) * [Magpie-Align/Llama-3-8B-WizardLM-196K](https://huggingface.co/Magpie-Align/Llama-3-8B-WizardLM-196K) * [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B) * [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B) * [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) * [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2) * [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K) * [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B) * [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85) * [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B) ## Secret Sauce The following YAML configurations were used to produce this model: ### Umbral-Mind-1-pt.1 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B parameters: density: 0.5 weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825] - model: tannedbum/L3-Nymeria-Maid-8B parameters: density: 0.5 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1 parameters: density: 0.5 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: tannedbum/L3-Nymeria-8B parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] merge_method: dare_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-1-pt.2 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B - model: 
Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] - model: tannedbum/L3-Nymeria-Maid-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1 parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: tannedbum/L3-Nymeria-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B parameters: gamma: 0.01 density: 0.9 weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825] merge_method: breadcrumbs_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-1 ```yaml models: - model: Casual-Autopsy/Umbral-Mind-1-pt.1 - model: Casual-Autopsy/Umbral-Mind-1-pt.2 merge_method: slerp base_model: Casual-Autopsy/Umbral-Mind-1-pt.1 parameters: t: - filter: self_attn value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5] - filter: mlp value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5] - value: 0.5 dtype: bfloat16 ``` ### Umbral-Mind-2-pt.1 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 parameters: density: 0.5 weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825] - model: migtissera/Llama-3-8B-Synthia-v3.5 parameters: density: 0.5 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B parameters: density: 0.5 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: v000000/L3-8B-Poppy-Sunspice parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] merge_method: dare_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-2-pt.2 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] - model: migtissera/Llama-3-8B-Synthia-v3.5 parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: Magpie-Align/Llama-3-8B-WizardLM-196K parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B parameters: gamma: 0.01 density: 0.9 weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825] merge_method: breadcrumbs_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-2 ```yaml models: - model: Casual-Autopsy/Umbral-Mind-2-pt.1 - model: Casual-Autopsy/Umbral-Mind-2-pt.2 merge_method: slerp base_model: Casual-Autopsy/Umbral-Mind-2-pt.1 parameters: t: - filter: self_attn value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5] - filter: mlp value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5] - value: 0.5 dtype: bfloat16 ``` ### Umbral-Mind-3-pt.1 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B parameters: density: 0.5 weight: [0.33, 
0.0825, 0.0825, 0.0825, 0.0825] - model: invisietch/EtherealRainbow-v0.3-8B parameters: density: 0.5 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1 parameters: density: 0.5 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2 parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B parameters: density: 0.5 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] merge_method: dare_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-3-pt.2 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33] - model: invisietch/EtherealRainbow-v0.3-8B parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825] - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1 parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825] - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2 parameters: gamma: 0.01 density: 0.9 weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825] - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B parameters: gamma: 0.01 density: 0.9 weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825] merge_method: breadcrumbs_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B parameters: normalize: false int8_mask: true dtype: bfloat16 ``` ### Umbral-Mind-3 ```yaml models: - model: Casual-Autopsy/Umbral-Mind-3-pt.1 - model: Casual-Autopsy/Umbral-Mind-3-pt.2 merge_method: slerp base_model: Casual-Autopsy/Umbral-Mind-3-pt.1 parameters: t: - filter: self_attn value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5] - filter: mlp value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5] - value: 0.5 dtype: bfloat16 ``` ### Umbral-Mind-4 ```yaml models: - model: Casual-Autopsy/Umbral-Mind-1 - model: Casual-Autopsy/Umbral-Mind-3 merge_method: slerp base_model: Casual-Autopsy/Umbral-Mind-1 parameters: t: - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1] dtype: bfloat16 ``` ### Umbral-Mind-5 ```yaml models: - model: Casual-Autopsy/Umbral-Mind-4 - model: Casual-Autopsy/Umbral-Mind-2 merge_method: slerp base_model: Casual-Autopsy/Umbral-Mind-4 parameters: t: - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7] embed_slerp: true dtype: bfloat16 ``` ### Umbral-Mind-6 ```yaml models: - model: mergekit-community/Umbral-Mind-5 - model: Casual-Autopsy/Mopey-Omelette merge_method: slerp base_model: mergekit-community/Umbral-Mind-5 parameters: t: - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2] embed_slerp: true dtype: bfloat16 ``` ### Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B ```yaml models: - model: Casual-Autopsy/Umbral-Mind-6 - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K parameters: weight: [0.02, -0.01, -0.01, 0.02] - model: ResplendentAI/Nymph_8B parameters: weight: [-0.01, 0.02, 0.02, -0.01] - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B parameters: weight: [-0.01, 0.02, 0.02, -0.01] - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85 parameters: weight: [0.02, -0.01, -0.01, 0.02] merge_method: task_arithmetic base_model: Casual-Autopsy/Umbral-Mind-6 parameters: normalize: false dtype: bfloat16 ```
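Each of the YAML blocks above is a standalone mergekit configuration. The card does not say how they were invoked, but a typical way to materialize one of them (for example the `Umbral-Mind-1-pt.1` recipe) is the `mergekit-yaml` entry point — a sketch, with the output path chosen here purely as an illustration:

```bash
pip install mergekit

# Save the chosen recipe (e.g. the Umbral-Mind-1-pt.1 block above) to config.yml,
# then write the merged checkpoint to ./Umbral-Mind-1-pt.1.
# --cuda runs the tensor math on a GPU; --lazy-unpickle reduces peak RAM usage.
mergekit-yaml config.yml ./Umbral-Mind-1-pt.1 --cuda --lazy-unpickle
```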
bartowski/falcon-mamba-7b-GGUF
bartowski
2024-08-22T20:06:37Z
781
7
null
[ "gguf", "text-generation", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:HuggingFaceFW/fineweb-edu", "base_model:tiiuae/falcon-mamba-7b", "base_model:quantized:tiiuae/falcon-mamba-7b", "license:other", "model-index", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T18:15:29Z
--- base_model: tiiuae/falcon-mamba-7b datasets: - tiiuae/falcon-refinedweb - HuggingFaceFW/fineweb-edu language: - en license: other license_name: falcon-mamba-7b-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html pipeline_tag: text-generation quantized_by: bartowski model-index: - name: falcon-mamba-7b results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 33.36 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 19.88 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 3.63 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 8.05 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 10.86 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 14.47 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b name: Open LLM Leaderboard --- ## Llamacpp imatrix Quantizations of falcon-mamba-7b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3615">b3615</a> for quantization. Original model: https://huggingface.co/tiiuae/falcon-mamba-7b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format No prompt format found, check original model page ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [falcon-mamba-7b-f16.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-f16.gguf) | f16 | 14.57GB | false | Full F16 weights. | | [falcon-mamba-7b-Q8_0.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q8_0.gguf) | Q8_0 | 7.77GB | false | Extremely high quality, generally unneeded but max available quant. 
| | [falcon-mamba-7b-Q6_K_L.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q6_K_L.gguf) | Q6_K_L | 6.14GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [falcon-mamba-7b-Q6_K.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q6_K.gguf) | Q6_K | 6.01GB | false | Very high quality, near perfect, *recommended*. | | [falcon-mamba-7b-Q5_K_L.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_L.gguf) | Q5_K_L | 5.24GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [falcon-mamba-7b-Q5_K_M.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_M.gguf) | Q5_K_M | 5.08GB | false | High quality, *recommended*. | | [falcon-mamba-7b-Q5_K_S.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_S.gguf) | Q5_K_S | 5.08GB | false | High quality, *recommended*. | | [falcon-mamba-7b-Q4_K_L.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_L.gguf) | Q4_K_L | 4.40GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [falcon-mamba-7b-Q4_K_M.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_M.gguf) | Q4_K_M | 4.20GB | false | Good quality, default size for must use cases, *recommended*. | | [falcon-mamba-7b-Q4_K_S.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_S.gguf) | Q4_K_S | 4.20GB | false | Slightly lower quality with more space savings, *recommended*. | | [falcon-mamba-7b-IQ4_XS.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-IQ4_XS.gguf) | IQ4_XS | 3.99GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [falcon-mamba-7b-Q3_K_XL.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_XL.gguf) | Q3_K_XL | 3.51GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [falcon-mamba-7b-Q3_K_L.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_L.gguf) | Q3_K_L | 3.28GB | false | Lower quality but usable, good for low RAM availability. | | [falcon-mamba-7b-Q3_K_M.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_M.gguf) | Q3_K_M | 3.28GB | false | Low quality. | | [falcon-mamba-7b-IQ3_M.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-IQ3_M.gguf) | IQ3_M | 3.28GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [falcon-mamba-7b-Q3_K_S.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_S.gguf) | Q3_K_S | 3.28GB | false | Low quality, not recommended. | | [falcon-mamba-7b-IQ3_XS.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-IQ3_XS.gguf) | IQ3_XS | 3.28GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [falcon-mamba-7b-Q2_K_L.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q2_K_L.gguf) | Q2_K_L | 2.83GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. 
| | [falcon-mamba-7b-Q2_K.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q2_K.gguf) | Q2_K | 2.57GB | false | Very low quality but surprisingly usable. |
| [falcon-mamba-7b-IQ2_M.gguf](https://huggingface.co/bartowski/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-IQ2_M.gguf) | IQ2_M | 2.50GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks!

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/falcon-mamba-7b-GGUF --include "falcon-mamba-7b-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/falcon-mamba-7b-GGUF --include "falcon-mamba-7b-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (falcon-mamba-7b-Q8_0) or download them all in place (./).

## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which is also available on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
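LM Studio is one way to run these files; the same GGUF can also be loaded programmatically. A minimal sketch with the llama-cpp-python bindings, assuming a build recent enough to support the Falcon-Mamba architecture and that the Q4_K_M file from the table has been downloaded into the working directory:

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads every layer to the GPU; set it to 0 for CPU-only inference.
llm = Llama(
    model_path="./falcon-mamba-7b-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

out = llm("The main difference between Mamba and Transformer models is", max_tokens=64)
print(out["choices"][0]["text"])
```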
st25/prof_sent
st25
2024-08-22T19:58:10Z
7
0
null
[ "tensorboard", "safetensors", "bert", "generated_from_trainer", "region:us" ]
null
2024-08-22T08:03:55Z
--- metrics: - accuracy tags: - generated_from_trainer model-index: - name: prof_sent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prof_sent This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3264 - Accuracy: 0.834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 1.5635 | 0.812 | | No log | 2.0 | 126 | 1.3264 | 0.834 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
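The reported hyperparameters map directly onto `transformers.TrainingArguments`. A sketch of the equivalent configuration follows; the dataset and the exact BERT classification head are not documented on the card, so only the arguments themselves are shown:

```python
from transformers import TrainingArguments

# Adam betas/epsilon and weight decay already match the Trainer defaults,
# so only the explicitly reported settings need to be passed.
args = TrainingArguments(
    output_dir="prof_sent",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="epoch",  # the card reports validation loss/accuracy after each epoch
)

# Trainer(model=..., args=args, train_dataset=..., eval_dataset=...) would then be
# built with the (undisclosed) dataset and a BERT sequence-classification model.
```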
bartowski/functionary-medium-v3.2-GGUF
bartowski
2024-08-22T19:55:02Z
32
2
null
[ "gguf", "text-generation", "base_model:meetkai/functionary-medium-v3.2", "base_model:quantized:meetkai/functionary-medium-v3.2", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-08-22T16:36:58Z
--- base_model: meetkai/functionary-medium-v3.2 license: mit pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of functionary-medium-v3.2 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3615">b3615</a> for quantization. Original model: https://huggingface.co/meetkai/functionary-medium-v3.2 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> >>>all <|eot_id|><|start_header_id|>{role}<|end_header_id|> ``` ## What's new: New updates from mlabonne, no details given but I'm sure it's worth the change! ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [functionary-medium-v3.2-Q8_0.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/tree/main/functionary-medium-v3.2-Q8_0) | Q8_0 | 74.98GB | true | Extremely high quality, generally unneeded but max available quant. | | [functionary-medium-v3.2-Q6_K.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/tree/main/functionary-medium-v3.2-Q6_K) | Q6_K | 57.89GB | true | Very high quality, near perfect, *recommended*. | | [functionary-medium-v3.2-Q5_K_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/tree/main/functionary-medium-v3.2-Q5_K_M) | Q5_K_M | 49.95GB | true | High quality, *recommended*. | | [functionary-medium-v3.2-Q4_K_L.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q4_K_L.gguf) | Q4_K_L | 43.30GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [functionary-medium-v3.2-Q4_K_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q4_K_M.gguf) | Q4_K_M | 42.52GB | false | Good quality, default size for must use cases, *recommended*. | | [functionary-medium-v3.2-Q4_K_S.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q4_K_S.gguf) | Q4_K_S | 40.35GB | false | Slightly lower quality with more space savings, *recommended*. | | [functionary-medium-v3.2-Q3_K_XL.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q3_K_XL.gguf) | Q3_K_XL | 38.06GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [functionary-medium-v3.2-IQ4_XS.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ4_XS.gguf) | IQ4_XS | 37.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [functionary-medium-v3.2-Q3_K_L.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q3_K_L.gguf) | Q3_K_L | 37.14GB | false | Lower quality but usable, good for low RAM availability. | | [functionary-medium-v3.2-Q3_K_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q3_K_M.gguf) | Q3_K_M | 34.27GB | false | Low quality. 
| | [functionary-medium-v3.2-IQ3_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ3_M.gguf) | IQ3_M | 31.94GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [functionary-medium-v3.2-Q3_K_S.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q3_K_S.gguf) | Q3_K_S | 30.91GB | false | Low quality, not recommended. | | [functionary-medium-v3.2-IQ3_XXS.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ3_XXS.gguf) | IQ3_XXS | 27.47GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [functionary-medium-v3.2-Q2_K_L.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q2_K_L.gguf) | Q2_K_L | 27.40GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [functionary-medium-v3.2-Q2_K.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-Q2_K.gguf) | Q2_K | 26.38GB | false | Very low quality but surprisingly usable. | | [functionary-medium-v3.2-IQ2_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ2_M.gguf) | IQ2_M | 24.12GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [functionary-medium-v3.2-IQ2_XS.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ2_XS.gguf) | IQ2_XS | 21.14GB | false | Low quality, uses SOTA techniques to be usable. | | [functionary-medium-v3.2-IQ2_XXS.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ2_XXS.gguf) | IQ2_XXS | 19.10GB | false | Very low quality, uses SOTA techniques to be usable. | | [functionary-medium-v3.2-IQ1_M.gguf](https://huggingface.co/bartowski/functionary-medium-v3.2-GGUF/blob/main/functionary-medium-v3.2-IQ1_M.gguf) | IQ1_M | 16.75GB | false | Extremely low quality, *not* recommended. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/functionary-medium-v3.2-GGUF --include "functionary-medium-v3.2-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/functionary-medium-v3.2-GGUF --include "functionary-medium-v3.2-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (functionary-medium-v3.2-Q8_0) or download them all in place (./) ## Which file should I choose? 
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which is also available on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
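The huggingface-cli commands above also have a Python equivalent. A small sketch using huggingface_hub directly, with file names taken from the quant table above:

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single quant into the local cache and get its path back.
path = hf_hub_download(
    repo_id="bartowski/functionary-medium-v3.2-GGUF",
    filename="functionary-medium-v3.2-Q4_K_M.gguf",
)
print(path)

# Quants marked as split (e.g. Q8_0) live in a folder of shards; grab the whole folder.
snapshot_download(
    repo_id="bartowski/functionary-medium-v3.2-GGUF",
    allow_patterns=["functionary-medium-v3.2-Q8_0/*"],
    local_dir="./",
)
```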
RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf
RichardErkhov
2024-08-22T19:45:38Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T17:55:48Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Infinity-Instruct-3M-0625-Llama3-8B - GGUF - Model creator: https://huggingface.co/BAAI/ - Original model: https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Infinity-Instruct-3M-0625-Llama3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Infinity-Instruct-3M-0625-Llama3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Infinity-Instruct-3M-0625-Llama3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Infinity-Instruct-3M-0625-Llama3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Infinity-Instruct-3M-0625-Llama3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Infinity-Instruct-3M-0625-Llama3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | 
[Infinity-Instruct-3M-0625-Llama3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [Infinity-Instruct-3M-0625-Llama3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf/blob/main/Infinity-Instruct-3M-0625-Llama3-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: apache-2.0 datasets: - BAAI/Infinity-Instruct language: - en --- # Infinity Instruct <p align="center"> <img src="fig/Bk3NbjnJko51MTx1ZCScT2sqnGg.png" width="300"> </p> <p align="center"> <em>Beijing Academy of Artificial Intelligence (BAAI)</em><br/> <em>[Paper][Code][🤗] (would be released soon)</em> </p> Infinity-Instruct-3M-0625-Llama3-8B is an opensource supervised instruction tuning model without reinforcement learning from human feedback (RLHF). This model is just finetuned on [Infinity-Instruct-3M and Infinity-Instruct-0625](https://huggingface.co/datasets/BAAI/Infinity-Instruct) and showing favorable results on AlpacaEval 2.0 and MT-Bench. ## **News** - 🔥🔥🔥[2024/07/09] We release the model weights of [InfInstruct-Mistral-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Mistral-7B), [InfInstruct-Qwen2-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Qwen2-7B), [InfInstruct-Llama3-8B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-8B), [InfInstruct-Llama3-70B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-70B), and [InfInstruct-Yi-1.5-9B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B). - 🔥🔥🔥[2024/07/09] We release the chat dataset [Infinity-Instruct-0625](https://huggingface.co/datasets/BAAI/Infinity-Instruct), it is a upgraded version of the Infinity-Instruct-0613. - 🔥🔥🔥[2024/06/28] We release the model weight of [InfInstruct-Llama3-70B 0613](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Llama3-70B). It shows favorable results on AlpacaEval 2.0 compared to GPT4-0613 without RLHF. - 🔥🔥🔥[2024/06/21] We release the model weight of [InfInstruct-Mistral-7B 0613](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Mistral-7B). 
It shows favorable results on AlpacaEval 2.0 compared to Mixtral 8x7B v0.1, Gemini Pro, and GPT-3.5 without RLHF. - 🔥🔥🔥[2024/06/13] We share an intermediate result of our data construction process (corresponding to the [InfInstruct-3M](https://huggingface.co/datasets/BAAI/Infinity-Instruct) in the table below). Our ongoing efforts focus on risk assessment and data generation. The finalized version with 10 million instructions is scheduled for release in late June. ## **Training Details** <p align="center"> <img src="fig/trainingflow.png"> </p> Infinity-Instruct-3M-0625-Llama3-8B is tuned on the million-level instruction dataset [Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct). First, we apply the foundational dataset Infinity-Instruct-3M to improve the foundational ability (math & code) of Llama3-8B, and get the foundational instruct model Infinity-Instruct-3M-Llama3-8B. Then we finetune the Infinity-Instruct-3M-Llama3-8B to get the stronger chat model Infinity-Instruct-3M-0625-Llama3-8B. Here are the training hyperparameters. ```bash epoch: 3 lr: 5e-6 min_lr: 0 lr_warmup_steps: 40 lr_decay_style: cosine weight_decay: 0.0 adam_beta1: 0.9 adam_beta2: 0.95 global_batch_size: 528 clip_grad: 1.0 ``` Thanks to [FlagScale](https://github.com/FlagOpen/FlagScale), we could concatenate multiple training samples to remove padding tokens and apply diverse acceleration techniques to the training procedure. It effectively reduces our training costs. We will release our code in the near future! ## **Benchmark** | **Model** | **MT-Bench** | **AlpacaEval2.0** | |:-------------------------------:|:------------:|:-----------------:| | GPT 3.5 Turbo 0613 | 8.4 | 22.7 | | Mixtral 8x7B v0.1 | 8.3 | 23.7 | | Gemini Pro | -- | 24.4 | | GPT4-0613 | 9.2 | 30.2 | | Llama-3-8B-Instruct | -- | 22.9 | | InfInstruct-3M-0625-Llama3-8B* | 8.2 | **27.5** | *denotes the model is finetuned without reinforcement learning from human feedback (RLHF). We evaluate Infinity-Instruct-3M-0625-Llama3-8B on the two most popular instruction-following benchmarks. MT-Bench is a set of challenging multi-turn questions covering code, math, and routine dialogue. AlpacaEval2.0 is based on the AlpacaFarm evaluation set. Both benchmarks use GPT-4 to judge the model's answers. AlpacaEval2.0 displays a high agreement rate with the human-annotated benchmark Chatbot Arena. ## **How to use** Infinity-Instruct-3M-0625-Llama3-8B adopts the same chat template as [Llama3-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct): ```bash <|begin_of_text|><|start_header_id|>user<|end_header_id|> How are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Hi!<|eot_id|><|start_header_id|>user<|end_header_id|> How are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` To apply this model and template in conversation scenarios, you can refer to the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList, MinLengthLogitsProcessor, TemperatureLogitsWarper import torch device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("BAAI/Infinity-Instruct-3M-0625-Llama3-8B", torch_dtype=torch.bfloat16, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("BAAI/Infinity-Instruct-3M-0625-Llama3-8B") prompt = "Give me a short introduction to large language model." 
messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) logits_processor = LogitsProcessorList( [ MinLengthLogitsProcessor(1, eos_token_id=tokenizer.eos_token_id), TemperatureLogitsWarper(0.7), ] ) generated_ids = model.generate( model_inputs.input_ids, logits_processor=logits_processor, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## **Disclaimer** The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of Infinity Instruct is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results. ## **Citation** Our paper, detailing the development and features of the **Infinity Instruct** dataset and finetuned models, will be released soon on arXiv. Stay tuned! ``` @article{InfinityInstruct2024, title={Infinity Instruct}, author={Beijing Academy of Artificial Intelligence (BAAI)}, journal={arXiv preprint arXiv:2406.XXXX}, year={2024} } ```
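As a convenience, here is a minimal, untested sketch of running one of the GGUF quants listed above locally with `llama-cpp-python`; the chosen quant file, context size, GPU-layer count, and `chat_format` value are illustrative assumptions, not recommendations from the original card.

```python
# Minimal sketch (not from the original card): run one of the GGUF quants above
# with llama-cpp-python. Quant choice, n_ctx, n_gpu_layers and chat_format are
# illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/BAAI_-_Infinity-Instruct-3M-0625-Llama3-8B-gguf",
    filename="Infinity-Instruct-3M-0625-Llama3-8B.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=2048,             # context window
    n_gpu_layers=-1,        # offload all layers to GPU if one is available
    chat_format="llama-3",  # assumed to match the Llama-3 template shown above
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```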
jvelja/BERT_gpt2_imdb_0
jvelja
2024-08-22T19:44:12Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T19:44:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
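Since the card above is an auto-generated stub, here is a generic, unverified way to try the checkpoint with the pipeline implied by its metadata (text classification with a DistilBERT backbone); label names and output quality are unknown.

```python
# The card above is an empty template, so this is only a generic, unverified way
# to try the checkpoint based on its metadata (DistilBERT, text-classification).
from transformers import pipeline

classifier = pipeline("text-classification", model="jvelja/BERT_gpt2_imdb_0")
print(classifier("This movie was surprisingly good."))
```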
mradermacher/KAI-7B-v0.1-i1-GGUF
mradermacher
2024-08-22T19:30:43Z
35
0
transformers
[ "transformers", "gguf", "code", "text-generation-inference", "pretrained", "en", "dataset:Keynote-Technology/PLANE-2K", "dataset:togethercomputer/RedPajama-Data-V2", "base_model:Keynote-Technology/KAI-7B-v0.1", "base_model:quantized:Keynote-Technology/KAI-7B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-08-22T16:28:02Z
--- base_model: Keynote-Technology/KAI-7B-v0.1 datasets: - Keynote-Technology/PLANE-2K - togethercomputer/RedPajama-Data-V2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - code - text-generation-inference - pretrained --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Keynote-Technology/KAI-7B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/KAI-7B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/KAI-7B-v0.1-i1-GGUF/resolve/main/KAI-7B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf
RichardErkhov
2024-08-22T19:29:00Z
106
0
null
[ "gguf", "arxiv:2302.13971", "arxiv:2402.14526", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T18:34:01Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Ahma-3B-Instruct - GGUF - Model creator: https://huggingface.co/Finnish-NLP/ - Original model: https://huggingface.co/Finnish-NLP/Ahma-3B-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Ahma-3B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q2_K.gguf) | Q2_K | 2.0GB | | [Ahma-3B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.IQ3_XS.gguf) | IQ3_XS | 2.0GB | | [Ahma-3B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.IQ3_S.gguf) | IQ3_S | 2.0GB | | [Ahma-3B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 2.0GB | | [Ahma-3B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.IQ3_M.gguf) | IQ3_M | 2.07GB | | [Ahma-3B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q3_K.gguf) | Q3_K | 2.15GB | | [Ahma-3B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 2.15GB | | [Ahma-3B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 2.22GB | | [Ahma-3B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 2.02GB | | [Ahma-3B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q4_0.gguf) | Q4_0 | 2.0GB | | [Ahma-3B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.IQ4_NL.gguf) | IQ4_NL | 2.02GB | | [Ahma-3B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.41GB | | [Ahma-3B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q4_K.gguf) | Q4_K | 2.57GB | | [Ahma-3B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.57GB | | [Ahma-3B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q4_1.gguf) | Q4_1 | 2.2GB | | [Ahma-3B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q5_0.gguf) | Q5_0 | 2.4GB | | [Ahma-3B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.6GB | | [Ahma-3B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q5_K.gguf) | Q5_K | 2.74GB | | [Ahma-3B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.74GB | | 
[Ahma-3B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q5_1.gguf) | Q5_1 | 2.6GB | | [Ahma-3B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q6_K.gguf) | Q6_K | 3.6GB | | [Ahma-3B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_Ahma-3B-Instruct-gguf/blob/main/Ahma-3B-Instruct.Q8_0.gguf) | Q8_0 | 3.6GB | Original model description: --- language: - fi license: apache-2.0 tags: - finnish - llama inference: false pipeline_tag: text-generation base_model: Finnish-NLP/Ahma-3B --- # Ahma-3B-Instruct for Finnish Ahma-3B-Instruct is an instruct/chat-tuned version of [Ahma-3B](https://huggingface.co/Finnish-NLP/Ahma-3B) trained to follow instructions in Finnish. The base Ahma 3B parameter model is a decoder-only transformer model based on Meta's Llama (v1) architecture, pretrained from scratch on the Finnish language. The original Llama model architecture was introduced in [this paper](https://arxiv.org/abs/2302.13971) and first released at [this page](https://github.com/facebookresearch/llama). What does Ahma mean? Ahma is the Finnish word for wolverine! In Finnish Lapland, wolverines are the biggest cause of reindeer damage. There are two differently sized base Ahma models, both pretrained from scratch for 139B tokens: | Model | Context length | Layers | Dim | Heads | Params | |:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------| | [Ahma-3B](https://huggingface.co/Finnish-NLP/Ahma-3B) | 2048 | 26 | 3200 | 32 | 3.6B | | [Ahma-7B](https://huggingface.co/Finnish-NLP/Ahma-7B) | 2048 | 32 | 4096 | 32 | 7.0B | And two instruct-tuned versions: | Model | Context length | Layers | Dim | Heads | Params | |:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------| | [Ahma-3B-Instruct](https://huggingface.co/Finnish-NLP/Ahma-3B-Instruct) | 2048 | 26 | 3200 | 32 | 3.6B | | [Ahma-7B-Instruct](https://huggingface.co/Finnish-NLP/Ahma-7B-Instruct) | 2048 | 32 | 4096 | 32 | 7.0B | ## Intended uses & limitations This model was fine-tuned for instruction following. Instruction-tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. ### How to use If you want to use this model for instruction-following, you need to use the same prompt format we used in the fine-tuning process (basically the same format that Meta used in their Llama2 models).\ **Note: do not use "LlamaTokenizer" from the transformers library but always use the AutoTokenizer instead, or use the plain sentencepiece tokenizer.** Looking for <b>GGUF-versions?</b> Those can be found here for now: [GGUF-versions](https://huggingface.co/mradermacher/Ahma-3B-Instruct-GGUF) Here is an example using the instruction-following prompt format with the tokenizer's built-in chat template feature, which makes it easy to format multi-turn chats too, with some generation arguments you can modify for your use: ```python from transformers import AutoTokenizer, AutoModelForCausalLM system_prompt = "Olet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti. Vastauksesi eivät saa sisältää mitään haitallista, epäeettistä, rasistista, seksististä, vaarallista tai laitonta sisältöä. 
Jos kysymyksessä ei ole mitään järkeä tai se ei ole asiasisällöltään johdonmukainen, selitä miksi sen sijaan, että vastaisit jotain väärin. Jos et tiedä vastausta kysymykseen, älä kerro väärää tietoa." tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B-Instruct") model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-3B-Instruct") model = model.to("cuda") # use the chat template feature in the tokenizer to format your (multi-turn) inputs messages = [ { "role": "system", "content": system_prompt, }, {"role": "user", "content": "Kerro kolme hyötyä, joita pienet avoimen lähdekoodin kielimallit tuovat?"}, ] inputs = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ) inputs = inputs.to("cuda") generated_ids = model.generate( inputs, temperature=0.6, penalty_alpha=0.6, top_k=4, do_sample=True, repetition_penalty=1.2, min_length=5, max_length=2048, ) generated_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=False )[0] ''' 1) Parantuneet keskustelutaidot: Pienet, hyvin koulutetut kielimallit voidaan kouluttaa ymmärtämään ja tuottamaan ihmisen kaltaista kieltä, mikä johtaa luonnollisempaan keskusteluun. Tämä voi olla erityisen hyödyllistä sovelluksissa, kuten chat-roboteissa, virtuaaliavustajissa ja kielenkääntämisessä. 2) Lisääntynyt luovuus kirjoittamisessa: Kielimallit voivat auttaa kirjoittajia tuottamalla ideoita, lauseita ja virkkeitä, jotka ovat hiottuja ja merkityksellisiä. Tämä voi johtaa parempaan kirjoituslaatuun, parempaan organisointiin ja tehokkaampaan viestintään. 3) Parempi tietojenkäsittely ja -tallennus: Pienemmät ja edullisemmat kielimallit voivat mullistaa tietojenkäsittelyn ja tallennuksen. Ne voivat säästää tilaa ja resursseja, koska ne pystyvät suorittamaan tiettyjä tehtäviä tehokkaammin kuin perinteiset koneoppimisalgoritmit. Lisäksi kielimallien avoimen lähdekoodin luonne mahdollistaa sen, että tutkijat, kehittäjät ja yritykset voivat tehdä niihin parannuksia ja lisäyksiä, mikä voi johtaa entistä kehittyneempiin ja monipuolisempiin ratkaisuihin. ''' ``` You may experiment with different system prompt instructions too if you like. ### Limitations and bias This model was trained only with Finnish texts excluding code so it should not be used for multilingual and code generation use cases. The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data To better reflect the data distribution of the training set and balance the common samples and rare samples during training, we implemented the "ClusterClip Sampling" method by [Shao et al. (2024)](https://arxiv.org/abs/2402.14526) using [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) embeddings and KMeans clustering of 30 clusters. The training datasets mentioned below were created using this sampling method. There has also been some indication that gradually increasing the training example lengths during the training could be beneficial. Thus, the training dataset was splitted to 4 bins based on example lengths, and then examples were sampled from the bins so that the example lengths are gradually increasing towards the end of the training while a little amount of the shorter examples are still present too. 
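As a rough, hedged illustration of the sampling scheme described above (not the authors' actual pipeline), the sketch below clusters instruction embeddings with KMeans, caps how many examples an over-represented cluster contributes, and orders the sampled examples into length bins; the per-cluster cap, bin count, placeholder corpus, and the way BGE-M3 is loaded are assumptions.

```python
# Rough illustration (not the authors' code) of cluster-balanced sampling plus
# length-binned ordering as described above. The per-cluster cap, number of
# bins, and the way BGE-M3 is loaded are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

texts = ["esimerkkiohje 1", "esimerkkiohje 2", "esimerkkiohje 3"]  # placeholder corpus

embedder = SentenceTransformer("BAAI/bge-m3")  # assumed to load via sentence-transformers
embeddings = embedder.encode(texts, normalize_embeddings=True)

# Cluster the corpus (the card uses 30 clusters on the full dataset).
n_clusters = min(30, len(texts))
labels = KMeans(n_clusters=n_clusters, random_state=42).fit_predict(embeddings)

# Cap how many examples each cluster contributes, so common clusters are "clipped".
rng = np.random.default_rng(42)
cap = 2
sampled = []
for c in range(n_clusters):
    members = np.where(labels == c)[0]
    chosen = members if len(members) <= cap else rng.choice(members, cap, replace=False)
    sampled.extend(chosen.tolist())

# Order the sampled examples into 4 length bins so example lengths grow over training.
lengths = np.array([len(texts[i].split()) for i in sampled])
order = np.argsort(lengths)
bins = np.array_split(order, 4)
curriculum = [sampled[i] for b in bins for i in b]
print(curriculum)
```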
This model was first supervised fine-tuned (SFT) on the combination of the following datasets: | Dataset | Dataset type | Upsampling | Words | Ratio | Average words per example | |:-------------------------------------------------|:-----------------------|:-----------|:-----------|:---------|:--------------------------| | Aya Finnish | Finnish single-turn | 2.9X | 55K | 0.54% | 83 | | OASST | Translated single-turn | 2.9X | 507K | 5.01% | 139 | | ai2_arc | Translated single-turn | 2.9X | 12K | 0.12% | 39 | | chatbot_arena | Translated single-turn | 2.8X | 554K | 5.48% | 147 | | dibt10k | Translated single-turn | 2.9X | 363K | 3.58% | 262 | | dolly | Translated single-turn | 2.9X | 221K | 2.19% | 71 | | Aya Dutch | Translated single-turn | 2.9X | 13K | 0.12% | 36 | | Aya English | Translated single-turn | 2.9X | 97K | 0.96% | 61 | | Aya French | Translated single-turn | 3.7X | 75K | 0.74% | 58 | | intel_dpo | Translated single-turn | 2.9X | 539K | 5.33% | 163 | | lmsys_1m | Translated single-turn | 2.8X | 2187K | 21.61% | 246 | | news_qa | Translated single-turn | 2.9X | 297K | 2.94% | 152 | | orca_math | Translated single-turn | 2.9X | 1165K | 11.51% | 196 | | Aya Portuguese | Translated single-turn | 2.9X | 97K | 0.96% | 27 | | Aya Spanish | Translated single-turn | 2.8X | 52K | 0.51% | 54 | | Aya Swedish | Translated single-turn | 2.9X | 5K | 0.05% | 41 | | ultrachat | Translated single-turn | 2.8X | 2199K | 21.73% | 221 | | lmsys_multiturn | Translated multi-turn | 2.9X | 490K | 4.84% | 379 | | oaast2_multiturn | Translated multi-turn | 2.8X | 593K | 5.86% | 307 | | suomitrivia_synthetic | Synthetic single-turn | 1.0X | 4K | 0.04% | 16 | | wikipedia_multitask_synthetic_qa | Synthetic single-turn | 1.0X | 206K | 2.03% | 499 | | wikipedia_synthetic_qa_reasoning | Synthetic single-turn | 1.0X | 201K | 1.98% | 477 | | wikipedia_synthetic_person_discussions_multiturn | Synthetic multi-turn | 1.0X | 188K | 1.85% | 194 | | **TOTAL** | | | **10121K** | **100%** | **168** | After tokenization, the SFT training dataset had 23 million tokens and 5% of the dataset was splitted for evaluation during the training. The SFT model was then further fine-tuned with Direct Preference Optimization (DPO) on the combination of the following datasets: | Dataset | Dataset type | Upsampling | Words | Ratio | Average words per example | |:----------------|:-----------------------|:-----------|:----------|:---------|:--------------------------| | intel_dpo | Translated single-turn | 1.3X | 467K | 39.75% | 153 | | ultrachat | Translated single-turn | 1.2X | 1017K | 57.24% | 220 | | suomitrivia_dpo | Synthetic single-turn | 1.0X | 5K | 3.01% | 16 | | **TOTAL** | | | **1489K** | **100%** | **130** | After tokenization, the DPO training dataset had 3 million tokens and 5% of the dataset was splitted for evaluation during the training. ## Training procedure ### Preprocessing Texts are tokenized using Byte Pair Encoding (BPE) using the implementation from SentencePiece splitting all numbers into individual digits and using bytes to decompose unknown UTF-8 characters. The total vocabulary size is 64k tokens. Inputs are sequences of 2048 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish. Both BOS and EOS tokens were used in the fine-tuning. ### Supervised fine-tuning (SFT) This model was first supervised fine-tuned (SFT) using the [unsloth](https://github.com/unslothai/unsloth) framework with a single NVIDIA GeForce RTX 4080 GPU. 
The model was fine-tuned for 1 epoch with a learning rate of 5e-05, weight decay of 5e-03, learning rate warmup ratio of 0.1 with cosine decay, batch size of 4 and gradient accumulation of 8 for a total batch size of 32, max sequence length of 2048, and with NEFTune noise alpha of 5. The optimizer used was "paged_adamw_8bit" and the model was loaded with 4-bit quantization. Training was done using Rank-Stabilized LoRA (RSLoRA) with a rank of 256 and alpha of 128, LoRA dropout of 0.02, target modules of "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj" and modules_to_save "lm_head", "embed_tokens". ### Direct Preference Optimization (DPO) fine-tuning The SFT model was then further fine-tuned with Direct Preference Optimization (DPO) using the [unsloth](https://github.com/unslothai/unsloth) framework with a single NVIDIA GeForce RTX 4080 GPU. The model was fine-tuned for 1 epoch with a learning rate of 2e-05, weight decay of 0.0, learning rate warmup ratio of 0.1 with cosine decay, batch size of 2 and gradient accumulation of 8 for a total batch size of 16, and with a max sequence length of 2048. The optimizer used was "paged_adamw_8bit". Training was done using Rank-Stabilized LoRA (RSLoRA) with a rank of 64 and alpha of 32, LoRA dropout of 0.05, and target modules of "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj". ## Evaluation results ### FIN-bench This Ahma-3B-Instruct model was evaluated using [FIN-bench by TurkuNLP](https://github.com/TurkuNLP/FIN-bench), and the same evaluation was carried out for other relevant Finnish models for comparison: [FinGPT 8B by TurkuNLP](https://huggingface.co/TurkuNLP/gpt3-finnish-8B), [Viking 7B by TurkuNLP, SiloGen and HPLT](https://huggingface.co/LumiOpen/Viking-7B), and [Poro 34B by SiloGen, TurkuNLP and HPLT](https://huggingface.co/LumiOpen/Poro-34B). Below are the results with 0-shot and 3-shot settings in FIN-bench. 
0-shot results: | Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) | |:---------------------------|:--------------------------------------|:------------------------------------------|:--------------------------------------|:------------------------------------------|:----------|:----------|:----------------------| | Analogies | 50.77 | 48.46 | TBA | TBA | 49.23 | 40.00 | 54.62 | | Arithmetic | 27.64 | 22.14 | TBA | TBA | 33.15 | 30.16 | 30.34 | | Cause and Effect | 59.48 | 58.82 | TBA | TBA | 66.01 | 58.82 | 62.74 | | Emotions | 36.25 | 28.12 | TBA | TBA | 22.50 | 26.25 | 35.63 | | Empirical Judgements | 33.33 | 35.35 | TBA | TBA | 27.27 | 33.33 | 49.49 | | General Knowledge | 44.29 | 48.57 | TBA | TBA | 40.00 | 24.29 | 51.43 | | HHH Alignment | 42.09 | 41.66 | TBA | TBA | 41.81 | 42.51 | 42.92 | | Intent Recognition | 24.42 | 26.16 | TBA | TBA | 17.49 | 22.40 | 68.35 | | Misconceptions | 46.27 | 47.01 | TBA | TBA | 53.73 | 53.73 | 52.24 | | Paraphrase | 59.50 | 73.00 | TBA | TBA | 51.00 | 50.00 | 51.00 | | Sentence Ambiguity | 53.33 | 65.00 | TBA | TBA | 51.67 | 48.33 | 50.00 | | Similarities Abstraction | 65.79 | 68.42 | TBA | TBA | 60.53 | 65.79 | 60.53 | | **Non-Arithmetic Average** | **47.55** | **48.95** | TBA | TBA | **46.17** | **44.42** | **52.08** | | **Overall Average** | **36.49** | **34.06** | TBA | TBA | **38.93** | **36.50** | **40.00** | 3-shot results: | Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) | |:---------------------------|:--------------------------------------|:------------------------------------------|:--------------------------------------|:------------------------------------------|:----------|:----------|:----------------------| | Analogies | 50.77 | 49.23 | TBA | TBA | 40.77 | 54.62 | 76.92 | | Arithmetic | 38.38 | 43.89 | TBA | TBA | 43.63 | 45.78 | 53.68 | | Cause and Effect | 60.78 | 64.71 | TBA | TBA | 64.05 | 58.17 | 67.32 | | Emotions | 30.00 | 41.25 | TBA | TBA | 44.37 | 48.13 | 56.87 | | Empirical Judgements | 46.46 | 44.44 | TBA | TBA | 32.32 | 43.43 | 63.64 | | General Knowledge | 47.14 | 40.00 | TBA | TBA | 54.29 | 28.57 | 74.29 | | HHH Alignment | 43.53 | 44.80 | TBA | TBA | 45.39 | 44.80 | 46.07 | | Intent Recognition | 20.52 | 44.22 | TBA | TBA | 51.45 | 58.82 | 83.67 | | Misconceptions | 50.75 | 52.24 | TBA | TBA | 52.99 | 46.27 | 52.99 | | Paraphrase | 50.50 | 58.50 | TBA | TBA | 53.00 | 54.50 | 55.00 | | Sentence Ambiguity | 53.33 | 48.33 | TBA | TBA | 51.67 | 53.33 | 66.67 | | Similarities Abstraction | 69.74 | 72.37 | TBA | TBA | 64.47 | 73.68 | 75.00 | | **Non-Arithmetic Average** | **48.48** | **51.49** | TBA | TBA | **51.19** | **50.94** | **61.96** | | **Overall Average** | **42.87** | **47.27** | TBA | TBA | **46.99** | **48.07** | **57.36** | As we can see, Ahma-3B-Instruct model outperforms 2X larger models like the FinGPT 8B and Viking 7B, especially in non-arithmetic tasks in 0-shot usage. Even the 10X larger Poro 34B model, which is generally better, doesn't show a huge performance difference considering its size, and Ahma-3B-Instruct actually surpasses it in some tasks. 
In a 3-shot setting, we can see that the Ahma-3B-Instruct model has better few-shot example following performance compared to the base Ahma 3B model. This could be due to the inclusion of multi-turn examples in the fine-tuning dataset. ### MTBench Finnish This Ahma-3B-Instruct model was primarily evaluated using [MTBench Finnish by LumiOpen](https://github.com/LumiOpen/FastChat/tree/main/fastchat/llm_judge) since this model is fine-tuned for chat and instruction following. Since the MTBench evaluates also multi-turn chats while Ahma base models were only pretrained with single-turn instruction following examples, we have reported MTBench Finnish results separately for their single-turn and multi-turn evaluation examples. This enables us to evaluate how well this Ahma-3B-Instruct model improves on multi-turn chats since its fine-tuning dataset included some multi-turn examples too. [Poro 34B Chat by SiloGen, TurkuNLP and HPLT](https://huggingface.co/LumiOpen/Poro-34B-chat) model's presumably multi-turn results are copied from their model card for the comparison. Single-turn results: | Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct | |:--------------------|:--------------------------------------|:-----------------|:--------------------------------------|:-----------------| | Coding | 1.00 | 1.00 | TBA | TBA | | Extraction | 2.00 | 1.30 | TBA | TBA | | Humanities | 4.05 | 6.20 | TBA | TBA | | Math | 3.00 | 3.20 | TBA | TBA | | Reasoning | 2.90 | 4.60 | TBA | TBA | | Roleplay | 4.80 | 6.50 | TBA | TBA | | STEM | 5.10 | 5.95 | TBA | TBA | | Writing | 6.60 | 9.00 | TBA | TBA | | **Overall Average** | **3.68** | **4.72** | TBA | TBA | Multi-turn results: | Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct | Poro 34B Chat | |:--------------------|:--------------------------------------|:-----------------|:--------------------------------------|:-----------------|:--------------| | Coding | 1.00 | 1.00 | TBA | TBA | 3.70 | | Extraction | 1.55 | 1.15 | TBA | TBA | 6.37 | | Humanities | 3.25 | 6.20 | TBA | TBA | 9.25 | | Math | 2.20 | 2.70 | TBA | TBA | 1.20 | | Reasoning | 2.45 | 3.50 | TBA | TBA | 4.35 | | Roleplay | 4.90 | 6.40 | TBA | TBA | 7.35 | | STEM | 4.20 | 4.78 | TBA | TBA | 7.80 | | Writing | 3.80 | 6.65 | TBA | TBA | 8.50 | | **Overall Average** | **2.92** | **4.05** | TBA | TBA | **6.06** | As we can see, the Ahma-3B-Instruct model significantly improves upon the base Ahma-3B model, especially in tasks like writing. It's also worth noting that the Ahma-3B-Instruct model shows enhanced performance in multi-turn tasks compared to the base model, which highlights the value of the multi-turn training examples used in the fine-tuning process. The Ahma-3B-Instruct model lost 14% of its single-turn overall score in a multi-turn setting, while the base Ahma-3B model lost 21%. Therefore, this instruct model might be better suited for chat use cases as well. As expected, coding performance was poor since the Ahma models aren't trained on code data. Ahma models also seemed to have problems with the fact that they started to constantly repeat the generated text in some evaluation examples, which affected the scoring. 
Adding a repetition penalty to the evaluation script's generation method already improved the scores significantly, so in real-world use Ahma models should be run with better generation settings than those used in this benchmark. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗 ![Ahma](ahma.jpg)
Edens-Gate/MagMerge2
Edens-Gate
2024-08-22T19:26:34Z
7
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Edens-Gate/Holland-attempt-X", "base_model:merge:Edens-Gate/Holland-attempt-X", "base_model:Edens-Gate/testing", "base_model:merge:Edens-Gate/testing", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T19:18:04Z
--- base_model: - NewEden/TinyMagnum-r2 - Edens-Gate/testing library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NewEden/TinyMagnum-r2](https://huggingface.co/NewEden/TinyMagnum-r2) * [Edens-Gate/testing](https://huggingface.co/Edens-Gate/testing) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Edens-Gate/testing layer_range: - 0 - 32 - model: NewEden/TinyMagnum-r2 layer_range: - 0 - 32 merge_method: slerp base_model: NewEden/TinyMagnum-r2 parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.8 dtype: bfloat16 ```
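For intuition, here is a small, self-contained sketch of the spherical linear interpolation (SLERP) that the merge method above applies to corresponding weight tensors; this is a conceptual illustration rather than mergekit's actual implementation, and the tensors and `t` value are toy placeholders.

```python
# Conceptual sketch of SLERP weight interpolation (not mergekit's implementation).
# The tensors and the interpolation factor t are toy placeholders; mergekit varies
# t per layer and parameter group as in the config above.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega.abs() < 1e-6:  # nearly parallel tensors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# Toy usage: blend two attention weight tensors halfway, as in "t: 0.5" above.
w_a = torch.randn(16, 16)
w_b = torch.randn(16, 16)
merged = slerp(w_a, w_b, t=0.5)
print(merged.shape)
```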
psalmodieur/fluxlucaslustosalast
psalmodieur
2024-08-22T19:05:44Z
5
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T19:05:15Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: lucaslustosa license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # fluxlucaslustosalast <Gallery /> ## Model description ## Trigger words You should use `lucaslustosa` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/psalmodieur/fluxlucaslustosalast/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
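A minimal, untested sketch of using these LoRA weights with diffusers is shown below; the prompt wording and generation settings are illustrative assumptions, and running FLUX.1-dev requires accepting its license and a large-memory GPU.

```python
# Untested sketch of applying this LoRA on top of FLUX.1-dev with diffusers.
# Inference settings are illustrative; FLUX.1-dev is gated and needs a large GPU.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("psalmodieur/fluxlucaslustosalast")

# Use the trigger word from the card in the prompt.
image = pipe(
    "portrait photo of lucaslustosa, natural light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lucaslustosa.png")
```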
ellarose/my-awesome-setfit-model
ellarose
2024-08-22T18:59:45Z
5
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "dataset:SetFit/SentEval-CR", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-08-22T18:58:27Z
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 datasets: - SetFit/SentEval-CR library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: you can take pic of your friends and the picture will pop up when they call . - text: the speakerphone , the radio , all features work perfectly . - text: 'a ) the picture quality ( color and sharpness of focusing ) are so great , it completely eliminated my doubt about digital imaging -- - how could one eat rice one grain at a time : - ) )' - text: so far the dvd works so i hope it does n 't break down like the reviews i 've read . - text: i have a couple hundred contacts and the menu loads within a few seconds , no big deal . inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: SetFit/SentEval-CR type: SetFit/SentEval-CR split: test metrics: - type: accuracy value: 0.8698539176626826 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes - **Training Dataset:** [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'* slick-looking design and improved interface'</li><li>'as for bluetooth , no problems at all .'</li><li>'2 ) storage capacity'</li></ul> | | 0 | <ul><li>"the day finally arrived when i was sure i 'd leave sprint ."</li><li>"neither message was answered ( they ask for 24 hours before replying - i 've been waiting 27 days . 
)"</li><li>'only problem is that is a bit heavy .'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8699 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("ellarose/my-awesome-setfit-model") # Run inference preds = model("the speakerphone , the radio , all features work perfectly .") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 18.0625 | 44 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 7 | | 1 | 9 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.025 | 1 | 0.2289 | - | ### Framework Versions - Python: 3.11.9 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.39.0 - PyTorch: 2.4.0+cpu - Datasets: 2.21.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
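For reference, here is a rough sketch of reproducing the few-shot training setup above with the SetFit v1 API; the per-class sample count and sampling seed are assumptions, while the training arguments mirror the hyperparameters listed in the card.

```python
# Rough sketch (assuming the SetFit v1 API) of the few-shot training setup
# described above; the per-class sample count and seed are assumptions.
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments, sample_dataset

dataset = load_dataset("SetFit/SentEval-CR")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_dataset = dataset["test"]

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1, num_iterations=20)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())  # e.g. {"accuracy": ...}
```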
dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF
dsouza889
2024-08-22T18:50:10Z
16
0
sentence-transformers
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "transformers", "llama-cpp", "gguf-my-repo", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:quantized:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-08-22T18:50:08Z
--- base_model: sentence-transformers/all-MiniLM-L6-v2 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - llama-cpp - gguf-my-repo --- # dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF This model was converted to GGUF format from [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -c 2048 ```
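Beyond the llama.cpp CLI shown above, a minimal, untested sketch of computing sentence embeddings from this GGUF file with `llama-cpp-python` follows; the context size and example sentence are illustrative.

```python
# Untested sketch: sentence embeddings from the GGUF file above via llama-cpp-python.
# The context size and example sentence are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="dsouza889/all-MiniLM-L6-v2-Q4_K_M-GGUF",
    filename="all-minilm-l6-v2-q4_k_m.gguf",
)

llm = Llama(model_path=gguf_path, embedding=True, n_ctx=512)
result = llm.create_embedding("This is a sample sentence to embed.")
vector = result["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality (384 for all-MiniLM-L6-v2)
```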
vll-ai-company/XOChat
vll-ai-company
2024-08-22T18:47:32Z
0
0
transformers
[ "transformers", "text-generation", "en", "dataset:fka/awesome-chatgpt-prompts", "endpoints_compatible", "region:us" ]
text-generation
2024-03-28T13:37:02Z
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - fka/awesome-chatgpt-prompts --- # XAI (XI) ## Overview **XAI (XI)** is an ambitious model developed by VLL's AI Company, designed to push the boundaries of what artificial intelligence can achieve. Our mission is to use AI to address global challenges and make the world a better place. ## What We Do At **VLL's AI Company**, we are not limited to AI alone; we are dedicated to driving progress across various technological fronts. However, the success of **XAI (XI)** and our broader mission is not something we can achieve alone. ## The Importance of Support We face many challenges on our journey, and without your support and contributions, **XAI (XI)** cannot reach its full potential. **Your involvement is crucial**: together we can make a meaningful impact, and without your support our vision for a better world through AI cannot become a reality. ## Stay Connected - [Twitter](https://twitter.com/Guiding_light12) - [VLL Website](https://vlldomain.wixsite.com/vll-web) **Let's power the world together, because we can't do it without you.**
RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf
RichardErkhov
2024-08-22T18:46:02Z
24
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-22T17:00:53Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-sft-r32-lora-deepspeed_2 - GGUF - Model creator: https://huggingface.co/hometax/ - Original model: https://huggingface.co/hometax/llama3-8b-sft-r32-lora-deepspeed_2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-sft-r32-lora-deepspeed_2.Q2_K.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-sft-r32-lora-deepspeed_2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-sft-r32-lora-deepspeed_2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-sft-r32-lora-deepspeed_2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q3_K.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-sft-r32-lora-deepspeed_2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q4_0.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-sft-r32-lora-deepspeed_2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q4_K.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | 
[llama3-8b-sft-r32-lora-deepspeed_2.Q4_1.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q5_0.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q5_K.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q5_1.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q6_K.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-sft-r32-lora-deepspeed_2.Q8_0.gguf](https://huggingface.co/RichardErkhov/hometax_-_llama3-8b-sft-r32-lora-deepspeed_2-gguf/blob/main/llama3-8b-sft-r32-lora-deepspeed_2.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: apache-2.0 language: - ko --- # Model Card for Model ID hometax/llama3-8b-sft-r32-lora-deepspeed_2 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** hometax - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** lora-deepspeed - **Language(s) (NLP):** ko - **License:** apache-2.0 - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
llmware/dragon-yi-9b-gguf
llmware
2024-08-22T18:38:35Z
22
1
transformers
[ "transformers", "gguf", "yi", "license:apache-2.0", "region:us" ]
null
2024-07-24T18:28:12Z
--- license: apache-2.0 inference: false --- # DRAGON-YI-9B-GGUF <!-- Provide a quick summary of what the model is/does. --> **dragon-yi-9b-gguf** is a fact-based question-answering model, optimized for complex business documents, fine-tuned on top of the 01-ai/yi-v1.5-9b base and quantized with 4_K_M GGUF quantization, providing an inference implementation for use on CPUs. ## Benchmark Tests Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester 1 Test Run (temperature=0.0, sample=False) with 1 point for a correct answer, 0.5 points for a partially correct or blank / NF answer, 0.0 points for an incorrect answer, and -1 point for hallucinations. --Accuracy Score: **98.0** correct out of 100 --Not Found Classification: 90.0% --Boolean: 97.5% --Math/Logic: 95% --Complex Questions (1-5): 5 (Very Strong) --Summarization Quality (1-5): 4 (Above Average) --Hallucinations: No hallucinations observed in test runs. For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo. To pull the model via API: from huggingface_hub import snapshot_download snapshot_download("llmware/dragon-yi-9b-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False) Load in your favorite GGUF inference engine, or try with llmware as follows: from llmware.models import ModelCatalog model = ModelCatalog().load_model("dragon-yi-9b-gguf") response = model.inference(query, add_context=text_sample) Note: please review [**config.json**](https://huggingface.co/llmware/dragon-yi-9b-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** GGUF - **Language(s) (NLP):** English - **License:** Apache 2.0 ## Model Card Contact Darren Oberst & llmware team
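For convenience, the pull and run snippets from the card above are consolidated here into a single fenced, runnable sketch. The example `query` and `text_sample` values are illustrative placeholders added for this sketch; they are not part of the original card.

```python
# Consolidated sketch of the card's two snippets: pull the GGUF files, then run a
# RAG-style question over a short passage with llmware. Query/context text is illustrative.
from huggingface_hub import snapshot_download
from llmware.models import ModelCatalog

# Option 1: download the raw GGUF files for use in any GGUF inference engine
snapshot_download("llmware/dragon-yi-9b-gguf",
                  local_dir="/path/on/your/machine/",
                  local_dir_use_symlinks=False)

# Option 2: load and query the model directly through llmware
model = ModelCatalog().load_model("dragon-yi-9b-gguf")
text_sample = "The lease term is 24 months, beginning on January 1, 2024."  # placeholder context
query = "What is the lease term?"                                           # placeholder question
response = model.inference(query, add_context=text_sample)
print(response)
```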
bitext/Mistral-7B-Banking-v2
bitext
2024-08-22T18:36:32Z
587
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "generated_from_trainer", "text-generation-inference", "conversational", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-05-03T21:39:09Z
--- license: apache-2.0 inference: false tags: - generated_from_trainer - text-generation-inference model-index: - name: Mistral-7B-Banking-v2 results: [] model_type: mistral pipeline_tag: text-generation widget: - messages: - role: user content: I want to close an online account --- # Mistral-7B-Banking ## Model Description This model, "Mistral-7B-Banking", is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), specifically tailored for the Banking domain. It is optimized to answer questions and assist users with various banking transactions. It has been trained on hybrid synthetic data generated with our NLP/NLG technology and our automated Data Labeling (DAL) tools. The goal of this model is to show that a generic verticalized model makes customization for a final use case much easier. For example, if you are "ACME Bank", you can create your own customized model by taking this fine-tuned model and doing an additional round of fine-tuning on a small amount of your own data. An overview of this approach can be found at: [From General-Purpose LLMs to Verticalized Enterprise Models](https://www.bitext.com/blog/general-purpose-models-verticalized-enterprise-genai/) ## Intended Use - **Recommended applications**: This model is designed to be used as the first step in Bitext’s two-step approach to LLM fine-tuning for the creation of chatbots, virtual assistants and copilots for the Banking domain, providing customers with fast and accurate answers about their banking needs. - **Out-of-scope**: This model is not suited for non-banking related questions and should not be used for providing health, legal, or critical safety advice. ## Usage Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' model = AutoModelForCausalLM.from_pretrained("bitext/Mistral-7B-Banking-v2") tokenizer = AutoTokenizer.from_pretrained("bitext/Mistral-7B-Banking-v2") messages = [ {"role": "system", "content": "You are an expert in customer support for Banking."}, {"role": "user", "content": "I want to open a bank account"}, ] encoded = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encoded.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Model Architecture This model utilizes the `MistralForCausalLM` architecture with a `LlamaTokenizer`, ensuring it retains the foundational capabilities of the base model while being specifically enhanced for banking-related interactions. ## Training Data The model was fine-tuned on a dataset comprising various banking-related intents, including transactions like balance checks, money transfers, loan applications, and more, totaling 89 intents, each represented by approximately 1,000 examples. This comprehensive training helps the model address a broad spectrum of banking-related questions effectively. The dataset follows the same structured approach as our dataset published on Hugging Face as [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset), but with a focus on banking.
## Training Procedure ### Hyperparameters - **Optimizer**: AdamW - **Learning Rate**: 0.0002 with a cosine learning rate scheduler - **Epochs**: 3 - **Batch Size**: 4 - **Gradient Accumulation Steps**: 4 - **Maximum Sequence Length**: 8192 tokens ### Environment - **Transformers Version**: 4.43.4 - **Framework**: PyTorch 2.3.1+cu121 - **Tokenizers**: Tokenizers 0.19.1 ## Limitations and Bias - The model is trained for banking-specific contexts but may underperform in unrelated areas. - Potential biases in the training data could affect the neutrality of the responses; users are encouraged to evaluate responses critically. ## Ethical Considerations It is important to use this technology thoughtfully, ensuring it does not substitute for human judgment where necessary, especially in sensitive financial situations. ## Acknowledgments This model was developed and trained by Bitext using proprietary data and technology. ## License This model, "Mistral-7B-Banking", is licensed under the Apache License 2.0 by Bitext Innovations International, Inc. This open-source license allows for free use, modification, and distribution of the model but requires that proper credit be given to Bitext. ### Key Points of the Apache 2.0 License - **Permissibility**: Users are allowed to use, modify, and distribute this software freely. - **Attribution**: You must provide proper credit to Bitext Innovations International, Inc. when using this model, in accordance with the original copyright notices and the license. - **Patent Grant**: The license includes a grant of patent rights from the contributors of the model. - **No Warranty**: The model is provided "as is" without warranties of any kind. You may view the full license text at [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0). This licensing ensures the model can be used widely and freely while respecting the intellectual contributions of Bitext. For more detailed information or specific legal questions about using this license, please refer to the official license documentation linked above.
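Not part of the original card: as a rough illustration of the second step in the two-step approach described above (adapting this banking model to a specific institution's own data), the sketch below attaches a LoRA adapter with PEFT. The adapter hyperparameters, target modules, and the choice of LoRA itself are assumptions made for illustration; Bitext's actual customization procedure is not published in this card.

```python
# Illustrative sketch only: attach a small LoRA adapter to the banking model before
# further fine-tuning on in-house data. All hyperparameters below are assumed values.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bitext/Mistral-7B-Banking-v2")

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical Mistral attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...continue with your usual supervised fine-tuning loop on a small in-house dataset...
```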
mradermacher/Audit-Llama3-Instruct-GGUF
mradermacher
2024-08-22T18:36:23Z
6
0
transformers
[ "transformers", "gguf", "unsloth", "en", "endpoints_compatible", "region:us" ]
null
2024-08-22T17:45:01Z
--- base_model: Asad515/Audit-Llama3-Instruct language: - en library_name: transformers quantized_by: mradermacher tags: - unsloth --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Asad515/Audit-Llama3-Instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Audit-Llama3-Instruct-GGUF/resolve/main/Audit-Llama3-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
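Not part of the original card: as a concrete starting point, the sketch below downloads the Q4_K_M quant listed in the table and runs a short completion with the `llama-cpp-python` bindings. The choice of bindings, context size, and prompt are assumptions for illustration.

```python
# Sketch: fetch the Q4_K_M quant from this repo and run a short local completion.
# Assumes `pip install llama-cpp-python huggingface_hub`; the prompt is arbitrary.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Audit-Llama3-Instruct-GGUF",
    filename="Audit-Llama3-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Summarize the purpose of an internal audit in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```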
Niggendar/ShinyDirt_
Niggendar
2024-08-22T18:36:15Z
88
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T18:26:54Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
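Not part of the original card: the "How to Get Started" section above is still a placeholder, but since the repo is tagged `diffusers:StableDiffusionXLPipeline`, a generic loading sketch along the following lines is the usual starting point. The prompt, dtype, and device choices are illustrative assumptions and are untested against this checkpoint.

```python
# Generic starting point for a diffusers SDXL checkpoint; offered only as a sketch
# because the card's usage section is still empty.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Niggendar/ShinyDirt_", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a shiny chrome sculpture in a desert, studio lighting").images[0]  # example prompt
image.save("output.png")
```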
Junhoee/stable-diffusion-v1-5-finetune-Logo-LoRA
Junhoee
2024-08-22T18:33:46Z
5
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-08-22T14:56:22Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ahmedhisham/fullfinetunedRobertaNER
Ahmedhisham
2024-08-22T18:30:15Z
107
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "Legal", "ar", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-08-14T21:59:27Z
--- library_name: transformers tags: - Legal license: apache-2.0 language: - ar - en metrics: - precision - recall - f1 pipeline_tag: token-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
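Not part of the original card: the template above is unfilled, but the repo tags (`token-classification`, `xlm-roberta`, Arabic/English, Legal) point to a standard NER pipeline call as a starting point. The example sentence and the aggregation strategy are illustrative assumptions; the model's actual label set is not documented here.

```python
# Generic NER starting point inferred from the repo tags; the example sentence is illustrative.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ahmedhisham/fullfinetunedRobertaNER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("The lease agreement was signed in Cairo on 12 March 2023 by Ahmed Hisham."))
```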
Fafadalilian/speecht5_tts_voxpopuli_nl
Fafadalilian
2024-08-22T18:22:07Z
75
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "text-to-speech,speech-synthesis", "generated_from_trainer", "nl", "dataset:fft_T5TSS_Andrej_voice", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-08-17T16:39:26Z
--- library_name: transformers language: - nl license: mit base_model: microsoft/speecht5_tts tags: - text-to-speech,speech-synthesis - generated_from_trainer datasets: - fft_T5TSS_Andrej_voice model-index: - name: SpeechT5 TTS with Andrej Karpathy voice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS with Andrej Karpathy voice This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Fafadalilian/Andrej_Karpathy_voice_transcription dataset. It achieves the following results on the evaluation set: - Loss: 0.4266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3572 | 250.0 | 1000 | 0.4057 | | 0.3299 | 500.0 | 2000 | 0.4226 | | 0.3136 | 750.0 | 3000 | 0.4277 | | 0.3165 | 1000.0 | 4000 | 0.4266 | ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
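Not part of the original card: the card lists training details but no inference snippet, so the sketch below follows the standard SpeechT5 text-to-speech pattern from the Transformers documentation. The CMU ARCTIC x-vector used as a speaker embedding and the example sentence are generic stand-ins, not artifacts of this fine-tune; with a different speaker embedding the output voice will differ.

```python
# Standard SpeechT5 TTS inference pattern, pointed at this fine-tuned checkpoint.
# The x-vector speaker embedding below is a generic stand-in from a public dataset.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "Fafadalilian/speecht5_tts_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a short test sentence.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```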
JustDoItNow/deberta-v3-large-reward-model
JustDoItNow
2024-08-22T18:18:31Z
11
0
null
[ "tensorboard", "safetensors", "deberta-v2", "trl", "reward-trainer", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "region:us" ]
null
2024-08-18T01:39:16Z
--- license: mit base_model: microsoft/deberta-v3-large tags: - trl - reward-trainer - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large-reward-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-reward-model This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0084 - Accuracy: 0.9975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0336 | 2.0 | 100 | 0.0315 | 0.9975 | | 0.0043 | 4.0 | 200 | 0.0105 | 0.9925 | | 0.0006 | 6.0 | 300 | 0.0079 | 0.9975 | | 0.0001 | 8.0 | 400 | 0.0086 | 0.9975 | | 0.0002 | 10.0 | 500 | 0.0084 | 0.9975 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
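Not part of the original card: since no usage example is given, here is a sketch of how a reward model trained with TRL's `RewardTrainer` is typically queried, by loading it as a sequence-classification model and reading its logit as a preference score. The single-logit head and the candidate texts are assumptions, not details taken from the card.

```python
# Sketch: score candidate responses with the reward model. Assumes the usual
# single-logit head produced by TRL's RewardTrainer; example texts are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "JustDoItNow/deberta-v3-large-reward-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

candidates = [
    "A reward model assigns higher scores to responses that humans prefer.",
    "Bananas are yellow.",
]
with torch.no_grad():
    for text in candidates:
        logits = model(**tokenizer(text, return_tensors="pt", truncation=True)).logits
        print(f"{logits[0, 0].item():+.3f}  {text}")  # higher score = more preferred
```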
Niggendar/ShinyFloox
Niggendar
2024-08-22T18:17:50Z
103
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-08-22T18:08:29Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thliang01/3d-icon-sdxl-dora-v0-0
thliang01
2024-08-22T18:15:07Z
5
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "dora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-08-22T13:57:55Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - diffusers-training - text-to-image - diffusers - dora - template:sd-lora widget: - text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>' output: url: "image_0.png" - text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>' output: url: "image_1.png" - text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>' output: url: "image_2.png" - text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: 3d icon in the style of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - thliang01/3d-icon-sdxl-dora-v0-0 <Gallery /> ## Model description ### These are thliang01/3d-icon-sdxl-dora-v0-0 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`3d-icon-sdxl-dora-v0-0.safetensors` here 💾](/thliang01/3d-icon-sdxl-dora-v0-0/blob/main/3d-icon-sdxl-dora-v0-0.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:3d-icon-sdxl-dora-v0-0:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`3d-icon-sdxl-dora-v0-0_emb.safetensors` here 💾](/thliang01/3d-icon-sdxl-dora-v0-0/blob/main/3d-icon-sdxl-dora-v0-0_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `3d-icon-sdxl-dora-v0-0_emb` to your prompt. For example, `3d icon in the style of 3d-icon-sdxl-dora-v0-0_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('thliang01/3d-icon-sdxl-dora-v0-0', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='thliang01/3d-icon-sdxl-dora-v0-0', filename='3d-icon-sdxl-dora-v0-0_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `SKS` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/thliang01/3d-icon-sdxl-dora-v0-0/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF
mradermacher
2024-08-22T18:11:13Z
297
1
transformers
[ "transformers", "gguf", "en", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Base", "base_model:quantized:nvidia/Mistral-NeMo-Minitron-8B-Base", "license:other", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-08-22T13:15:59Z
--- base_model: nvidia/Mistral-NeMo-Minitron-8B-Base language: - en library_name: transformers license: other license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf license_name: nvidia-open-model-license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 5.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeMo-Minitron-8B-Base-i1-GGUF/resolve/main/Mistral-NeMo-Minitron-8B-Base.i1-Q6_K.gguf) | i1-Q6_K | 7.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
jvelja/gemma-2-2b-it_imdb_2bit_4
jvelja
2024-08-22T18:11:11Z
45
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-22T18:11:08Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja//tmp/tmpji1561pm/jvelja/gemma-2-2b-it_imdb_2bit_4") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmpji1561pm/jvelja/gemma-2-2b-it_imdb_2bit_4") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmpji1561pm/jvelja/gemma-2-2b-it_imdb_2bit_4") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
jvelja/BERT_gemma-2-2b-it_imdb_2bit_4
jvelja
2024-08-22T18:11:07Z
106
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T18:10:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abhinav-2k23/RAG-llama-3.1-instruct-SLERP-MERGED
abhinav-2k23
2024-08-22T17:57:43Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:abhinav-2k23/RAG_llama_3_1", "base_model:merge:abhinav-2k23/RAG_llama_3_1", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:merge:meta-llama/Llama-3.1-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-22T17:52:33Z
--- base_model: - abhinav-2k23/RAG_llama_3_1 - meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [abhinav-2k23/RAG_llama_3_1](https://huggingface.co/abhinav-2k23/RAG_llama_3_1) * [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: abhinav-2k23/RAG_llama_3_1 layer_range: - 0 - 32 - model: meta-llama/Meta-Llama-3.1-8B-Instruct layer_range: - 0 - 32 merge_method: slerp base_model: meta-llama/Meta-Llama-3.1-8B-Instruct parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
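For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. Below is an illustrative sketch of that interpolation only; mergekit's actual implementation additionally handles normalization details, fallbacks, and the per-layer `t` schedule shown in the YAML above.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two flattened weight tensors."""
    v0_u = v0 / (v0.norm() + eps)
    v1_u = v1 / (v1.norm() + eps)
    theta = torch.arccos(torch.clamp((v0_u * v1_u).sum(), -1.0, 1.0))
    if theta < eps:  # nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```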
imdatta0/Meta-Llama-3-8B_magiccoder_default
imdatta0
2024-08-22T17:54:53Z
15
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-3-8b", "base_model:adapter:unsloth/llama-3-8b", "license:llama3", "region:us" ]
null
2024-08-14T12:58:19Z
--- base_model: unsloth/llama-3-8b library_name: peft license: llama3 tags: - unsloth - generated_from_trainer model-index: - name: Meta-Llama-3-8B_magiccoder_default results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B_magiccoder_default This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2592 | 0.0259 | 4 | 1.4263 | | 1.4281 | 0.0518 | 8 | 1.4063 | | 1.3795 | 0.0777 | 12 | 1.3824 | | 1.3751 | 0.1036 | 16 | 1.3937 | | 1.4053 | 0.1296 | 20 | 1.3523 | | 1.2927 | 0.1555 | 24 | 1.3474 | | 1.3619 | 0.1814 | 28 | 1.3529 | | 1.3533 | 0.2073 | 32 | 1.3629 | | 1.3627 | 0.2332 | 36 | 1.3636 | | 1.4408 | 0.2591 | 40 | 1.3531 | | 1.3744 | 0.2850 | 44 | 1.3395 | | 1.2658 | 0.3109 | 48 | 1.3364 | | 1.3364 | 0.3368 | 52 | 1.3400 | | 1.3765 | 0.3628 | 56 | 1.3391 | | 1.3427 | 0.3887 | 60 | 1.3370 | | 1.3975 | 0.4146 | 64 | 1.3329 | | 1.2595 | 0.4405 | 68 | 1.3325 | | 1.3291 | 0.4664 | 72 | 1.3312 | | 1.2702 | 0.4923 | 76 | 1.3323 | | 1.3527 | 0.5182 | 80 | 1.3213 | | 1.2799 | 0.5441 | 84 | 1.3154 | | 1.3082 | 0.5700 | 88 | 1.3099 | | 1.4042 | 0.5960 | 92 | 1.3089 | | 1.2221 | 0.6219 | 96 | 1.3048 | | 1.3079 | 0.6478 | 100 | 1.3017 | | 1.2165 | 0.6737 | 104 | 1.2970 | | 1.239 | 0.6996 | 108 | 1.2941 | | 1.2528 | 0.7255 | 112 | 1.2877 | | 1.2932 | 0.7514 | 116 | 1.2859 | | 1.2762 | 0.7773 | 120 | 1.2804 | | 1.2914 | 0.8032 | 124 | 1.2791 | | 1.2835 | 0.8291 | 128 | 1.2755 | | 1.2735 | 0.8551 | 132 | 1.2731 | | 1.2264 | 0.8810 | 136 | 1.2722 | | 1.2637 | 0.9069 | 140 | 1.2713 | | 1.2133 | 0.9328 | 144 | 1.2704 | | 1.2379 | 0.9587 | 148 | 1.2699 | | 1.2131 | 0.9846 | 152 | 1.2697 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
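Since this repository contains a PEFT (LoRA) adapter rather than full weights, a minimal usage sketch would load it on top of the base model listed above (hypothetical snippet, not from the original card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")
model = PeftModel.from_pretrained(base, "imdatta0/Meta-Llama-3-8B_magiccoder_default")

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```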
yamanoko/SeqCLR_Decoder_fine_tuned
yamanoko
2024-08-22T17:42:29Z
52
0
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-07-24T20:21:51Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
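For context, the mixin referenced above lets a plain `nn.Module` be saved to and reloaded from the Hub. A minimal sketch of how such a checkpoint is typically consumed follows; the class definition here is a placeholder, since the real architecture of this decoder is not documented in the card.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class SeqCLRDecoder(nn.Module, PyTorchModelHubMixin):
    """Placeholder module: the real layer layout must match what was pushed."""
    def __init__(self, hidden_size: int = 256, num_classes: int = 97):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.head(x)

# Reloading only works if the class above matches the one used at push time.
model = SeqCLRDecoder.from_pretrained("yamanoko/SeqCLR_Decoder_fine_tuned")
```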
rinabuoy/whisper-tiny-khmer-aug-v4
rinabuoy
2024-08-22T17:32:26Z
6
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "region:us" ]
null
2024-08-22T09:53:23Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-khmer-aug-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-khmer-aug-v4 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4707 - Wer: 87.9034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1549 | 1.0 | 793 | 1.0460 | 101.2810 | | 0.8009 | 2.0 | 1586 | 0.7930 | 108.9509 | | 0.5929 | 3.0 | 2379 | 0.5986 | 95.7516 | | 0.4686 | 4.0 | 3172 | 0.5323 | 99.9351 | | 0.411 | 5.0 | 3965 | 0.4952 | 88.9087 | | 0.3694 | 6.0 | 4758 | 0.4675 | 101.5405 | | 0.332 | 7.0 | 5551 | 0.4639 | 90.0600 | | 0.3043 | 8.0 | 6344 | 0.4541 | 78.9363 | | 0.2819 | 9.0 | 7137 | 0.4532 | 82.9901 | | 0.2615 | 10.0 | 7930 | 0.4707 | 87.9034 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.19.1
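A minimal inference sketch (assumes a local Khmer audio clip; not part of the original card):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rinabuoy/whisper-tiny-khmer-aug-v4")
result = asr("sample_khmer.wav")  # path to a local audio file
print(result["text"])
```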
moht-hf/rare-puppers-demo
moht-hf
2024-08-22T17:29:46Z
5
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2024-08-22T17:29:28Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers-demo results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9555555582046509 --- # rare-puppers-demo Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bicycle ![bicycle](images/bicycle.jpg) #### car ![car](images/car.jpg) #### motor bike ![motor bike](images/motor_bike.jpg) #### truck ![truck](images/truck.jpg)
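A minimal inference sketch for the classifier (the example image path is a placeholder; not part of the original card):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="moht-hf/rare-puppers-demo")
for pred in classifier("example_vehicle.jpg"):
    print(pred["label"], round(pred["score"], 3))
```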
moht-hf/rare-puppers
moht-hf
2024-08-22T17:27:46Z
5
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2024-08-22T17:27:40Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9555555582046509 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bicycle ![bicycle](images/bicycle.jpg) #### car ![car](images/car.jpg) #### motor bike ![motor bike](images/motor_bike.jpg) #### truck ![truck](images/truck.jpg)
jvelja/gemma-2-2b-it_imdb_2bit_3
jvelja
2024-08-22T17:05:12Z
45
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-08-22T17:05:09Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="jvelja//tmp/tmph7p0jw_f/jvelja/gemma-2-2b-it_imdb_2bit_3") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmph7p0jw_f/jvelja/gemma-2-2b-it_imdb_2bit_3") model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmph7p0jw_f/jvelja/gemma-2-2b-it_imdb_2bit_3") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
jvelja/BERT_gemma-2-2b-it_imdb_2bit_3
jvelja
2024-08-22T17:05:09Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-22T17:04:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anthracite-org/magnum-v1-32b-gguf
anthracite-org
2024-08-22T16:58:04Z
72
19
null
[ "gguf", "chat", "text-generation", "en", "zh", "base_model:anthracite-org/magnum-v1-32b", "base_model:quantized:anthracite-org/magnum-v1-32b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-07-27T22:24:57Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE language: - en - zh pipeline_tag: text-generation base_model: anthracite-org/magnum-v1-32b quantized_by: anthracite-org tags: - chat --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/PK7xRSd18Du0bX-w_t-9c.png) ## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-32b-v1). This is the second in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen1.5 32B](https://huggingface.co/Qwen/Qwen1.5-32B). ## Prompting The model has been instruct-tuned with the ChatML format. A typical input would look like this: ```py """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` ## Credits Three new general-purpose instruction-following datasets were added on top of the original Stheno dataset (which had certain low-quality entries purged/removed). The first two were designed specifically for the Magnum series, to better address prompt adherence and coherence: - [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k) - [Nopm/Opus_WritingStruct](https://huggingface.co/datasets/Nopm/Opus_WritingStruct) - [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset) This model has been a team effort, and the credit goes to all members of Anthracite. ## Training The training was done for 2 epochs with a learning rate of 1e-05. We used 8x [NVIDIA H100 Tensor Core](https://www.nvidia.com/en-us/data-center/h100/) GPUs for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
anthracite-org/magnum-v2.5-12b-kto-gguf
anthracite-org
2024-08-22T16:55:33Z
7,659
40
null
[ "gguf", "chat", "text-generation", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "base_model:anthracite-org/magnum-v2.5-12b-kto", "base_model:quantized:anthracite-org/magnum-v2.5-12b-kto", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-08-15T02:43:07Z
--- license: apache-2.0 language: - en - fr - de - es - it - pt - ru - zh - ja pipeline_tag: text-generation quantized_by: anthracite-org base_model: anthracite-org/magnum-v2.5-12b-kto tags: - chat --- ## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-12b-v2.5-kto). ## imatrix_data included ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/h2PT8yRgXs3Lxsrs60Q1j.png) v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP, using responses sampled from the original model as "rejected" and data from the original finetuning dataset as "chosen". This was done on a limited portion of primarily instruction-following data; we plan to scale up a larger KTO dataset in the future for better generalization. This is the fifth in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [anthracite-org/magnum-12b-v2](https://huggingface.co/anthracite-org/magnum-12b-v2). ## Prompting The model has been instruct-tuned with the ChatML format. A typical input would look like this: ```py """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` ## Credits - Stheno dataset (filtered) - [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k) - [Nopm/Opus_WritingStruct](https://huggingface.co/datasets/Nopm/Opus_WritingStruct) - [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset) - [kalomaze/Opus_Instruct_3k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_3k) This model has been a team effort, and the credit goes to all members of Anthracite. ## Safety ...
Ksgk-fy/IGRgsm8k-Model-Meta-Llama-3-8B-Instruct-1
Ksgk-fy
2024-08-22T16:53:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-21T14:15:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anthracite-org/magnum-v2-123b-gguf
anthracite-org
2024-08-22T16:53:44Z
640
13
null
[ "gguf", "chat", "text-generation", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "base_model:anthracite-org/magnum-v2-123b", "base_model:quantized:anthracite-org/magnum-v2-123b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-08-18T22:20:07Z
--- license: other license_name: mrl license_link: https://mistral.ai/licenses/MRL-0.1.md quantized_by: anthracite-org base_model: anthracite-org/magnum-v2-123b language: - en - fr - de - es - it - pt - ru - zh - ja pipeline_tag: text-generation tags: - chat --- ## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v2-123b). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hkPzhL-xYPeGGKCyAf3Qd.png) This is the sixth in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407). ## Prompting The model has been instruct-tuned with the Mistral format. A typical input would look like this: ```py <s>[INST] SYSTEM MESSAGE\nUSER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST] ``` We also provide SillyTavern presets for [Context](https://huggingface.co/anthracite-org/Magnum-123b-v1/resolve/main/Magnum-Mistral-Context.json) and [Instruct](https://huggingface.co/anthracite-org/Magnum-123b-v1/raw/main/Magnum-Mistral-Instruct.json) respectively. The Mistral preset included in SillyTavern seems to be misconfigured by default, so we recommend using these as a replacement. ## Credits - [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered) - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) This model has been a team effort, and the credit goes to all members of Anthracite. ## Training The training was done for 1.5 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. In addition to this, we noticed that Mistral Large models seemed much more sensitive to learning rate adjustments than other models: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/xCK3ISKF6pWcMyO7MEzTA.png) We hypothesize this is primarily due to the particularly narrow and low-variance weight distributions typical of Mistral-derived models regardless of their scale. In the end, due to the costs that would be involved in training another full 2-epoch run ($600) at an even lower rate, we settled on our third attempt: 2e-6 with an effective batch size of 64, stopped earlier than the target 2 epochs. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/d9_cBy-DuWrdnoVBbAvRV.png) We noticed a correlation between the significance of the second-epoch loss drop and the strength of the learning rate, implying that 4e-6 leads to more catastrophic forgetting. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
anthracite-org/magnum-v2-72b-exl2
anthracite-org
2024-08-22T16:51:14Z
5
3
null
[ "chat", "text-generation", "en", "fr", "de", "es", "it", "pt", "ru", "zh", "ja", "license:other", "region:us" ]
text-generation
2024-08-19T02:44:57Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE language: - en - fr - de - es - it - pt - ru - zh - ja pipeline_tag: text-generation tags: - chat --- ## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v2-72b). ## The base repo only contains the measurement file; see the revisions for your quant of choice. - [measurement.json](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/main) - [3.0bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/3.0bpw) - [4.0bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/4.0bpw) - [6.0bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/6.0bpw) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/u8B-5bEeroN549uxUIisV.png) This is the seventh (Lucky!) in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct). ## Prompting The model has been instruct-tuned with the ChatML format. A typical input would look like this: ```py """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` ## Credits - [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered) - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal) - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed) This model has been a team effort, and the credit goes to all members of Anthracite. ## Training The training was done for 2 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. We also trained with a weight decay of 0.01 to help further stabilize the loss trajectory and mitigate catastrophic forgetting, and used a peak learning rate of 4e-6 to prevent the second-epoch loss from dropping too significantly (as it is a strong indicator of overfitting). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hVd5gNqSLOlWTkUb0A7iE.png) Sample Packing was done for 16k tokens rather than the 8k tokens used in our previous runs. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
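Because each EXL2 bitrate lives on its own branch, downloads need to target a revision explicitly. A minimal sketch (the 4.0bpw branch and local path are just examples):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="anthracite-org/magnum-v2-72b-exl2",
    revision="4.0bpw",                      # pick the bpw branch you want
    local_dir="./magnum-v2-72b-exl2-4.0bpw",
)
```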
nguyenthanhdo/Qwen2-1.5B-tierce
nguyenthanhdo
2024-08-22T16:43:21Z
6
0
null
[ "safetensors", "qwen2", "merge", "mergekit", "lazymergekit", "Qwen/Qwen2-1.5B", "base_model:Qwen/Qwen2-1.5B", "base_model:finetune:Qwen/Qwen2-1.5B", "region:us" ]
null
2024-08-22T16:42:47Z
--- base_model: - Qwen/Qwen2-1.5B - Qwen/Qwen2-1.5B tags: - merge - mergekit - lazymergekit - Qwen/Qwen2-1.5B --- # Qwen2-1.5B-tierce Qwen2-1.5B-tierce is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) * [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) ## 🧩 Configuration ```yaml slices: - sources: - model: Qwen/Qwen2-1.5B layer_range: [0, 5] - sources: - model: Qwen/Qwen2-1.5B layer_range: [23, 28] merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "nguyenthanhdo/Qwen2-1.5B-tierce" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
bartowski/TwinLlama-3.1-8B-GGUF
bartowski
2024-08-22T16:37:53Z
55
1
transformers
[ "transformers", "gguf", "unsloth", "trl", "sft", "text-generation", "base_model:mlabonne/TwinLlama-3.1-8B", "base_model:quantized:mlabonne/TwinLlama-3.1-8B", "endpoints_compatible", "region:us" ]
text-generation
2024-07-31T18:51:51Z
--- base_model: mlabonne/TwinLlama-3.1-8B library_name: transformers pipeline_tag: text-generation tags: - unsloth - trl - sft quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of TwinLlama-3.1-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3615">b3615</a> for quantization. Original model: https://huggingface.co/mlabonne/TwinLlama-3.1-8B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format No prompt format found, check original model page ## What's new: New updates from mlabonne, no details given but I'm sure it's worth the change! ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [TwinLlama-3.1-8B-f16.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-f16.gguf) | f16 | 16.07GB | false | Full F16 weights. | | [TwinLlama-3.1-8B-Q8_0.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q8_0.gguf) | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. | | [TwinLlama-3.1-8B-Q6_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q6_K_L.gguf) | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [TwinLlama-3.1-8B-Q6_K.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q6_K.gguf) | Q6_K | 6.60GB | false | Very high quality, near perfect, *recommended*. | | [TwinLlama-3.1-8B-Q5_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_L.gguf) | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [TwinLlama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | false | High quality, *recommended*. | | [TwinLlama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_S.gguf) | Q5_K_S | 5.60GB | false | High quality, *recommended*. | | [TwinLlama-3.1-8B-Q4_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_L.gguf) | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [TwinLlama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | false | Good quality, default size for must use cases, *recommended*. | | [TwinLlama-3.1-8B-Q3_K_XL.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_XL.gguf) | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [TwinLlama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, *recommended*. 
| | [TwinLlama-3.1-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ4_XS.gguf) | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [TwinLlama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. | | [TwinLlama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_M.gguf) | Q3_K_M | 4.02GB | false | Low quality. | | [TwinLlama-3.1-8B-IQ3_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [TwinLlama-3.1-8B-Q2_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q2_K_L.gguf) | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [TwinLlama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | false | Low quality, not recommended. | | [TwinLlama-3.1-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ3_XS.gguf) | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [TwinLlama-3.1-8B-Q2_K.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q2_K.gguf) | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. | | [TwinLlama-3.1-8B-IQ2_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ2_M.gguf) | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/TwinLlama-3.1-8B-GGUF --include "TwinLlama-3.1-8B-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/TwinLlama-3.1-8B-GGUF --include "TwinLlama-3.1-8B-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (TwinLlama-3.1-8B-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. 
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
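To make the sizing rule above concrete, here is a small sketch that picks the largest quant leaving ~1-2 GB of headroom. The file sizes are copied from the table above; the VRAM figure is an assumption you should replace with your own.

```python
# File sizes (GB) from the table above.
quants = {"Q6_K": 6.60, "Q5_K_M": 5.73, "Q4_K_M": 4.92, "IQ4_XS": 4.45,
          "Q3_K_M": 4.02, "IQ3_M": 3.78, "Q2_K": 3.18}

vram_gb = 8.0       # replace with your GPU's VRAM
headroom_gb = 1.5   # leave room for context/KV cache

fits = {name: size for name, size in quants.items() if size <= vram_gb - headroom_gb}
print(max(fits, key=fits.get) if fits else "no full-GPU fit; consider partial offload")
```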
nhero/Florence-2-large-emotion_all
nhero
2024-08-22T16:29:26Z
104
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-08-22T15:35:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aslansbane/Chris-Hemsworth-Flux-Lora
aslansbane
2024-08-22T16:24:17Z
5
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-08-22T16:24:12Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: Chris Hemsworth license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # Chris Hemsworth Flux Lora <Gallery /> ## Model description Chris Hemsworth Flux Lora ## Trigger words You should use `Chris Hemsworth` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/aslansbane/Chris-Hemsworth-Flux-Lora/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
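A minimal diffusers sketch using the trigger word above (assumes a GPU with enough VRAM for FLUX.1-dev in bfloat16 and access to the gated base weights; not part of the original card):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("aslansbane/Chris-Hemsworth-Flux-Lora")

image = pipe("Chris Hemsworth, studio portrait", num_inference_steps=28).images[0]
image.save("chris_hemsworth.png")
```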
Rxa4/Rahal
Rxa4
2024-08-22T16:23:49Z
78
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-08-22T16:20:46Z
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---

# Uploaded model

- **Developed by:** Rxa4
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
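## Usage (sketch)

The card does not show how to run the model, so here is a minimal, untested sketch using 🤗 Transformers. Since the repository is stored as a 4-bit bitsandbytes checkpoint, `bitsandbytes` and a CUDA GPU are assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rxa4/Rahal"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint ships with its 4-bit quantization config, so no extra
# quantization arguments should be needed beyond placing it on the GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```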
nguyenthanhdo/Qwen2-1.5B-half
nguyenthanhdo
2024-08-22T16:16:23Z
5
0
null
[ "safetensors", "qwen2", "merge", "mergekit", "lazymergekit", "Qwen/Qwen2-1.5B", "base_model:Qwen/Qwen2-1.5B", "base_model:finetune:Qwen/Qwen2-1.5B", "region:us" ]
null
2024-08-22T16:15:38Z
---
base_model:
- Qwen/Qwen2-1.5B
- Qwen/Qwen2-1.5B
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen2-1.5B
---

# Qwen2-1.5B-half

Qwen2-1.5B-half is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)
* [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Qwen/Qwen2-1.5B
        layer_range: [0, 8]
  - sources:
      - model: Qwen/Qwen2-1.5B
        layer_range: [20, 28]
merge_method: passthrough
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "nguyenthanhdo/Qwen2-1.5B-half"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
llmware/slim-sa-ner-phi-3-gguf
llmware
2024-08-22T16:11:30Z
14
2
transformers
[ "transformers", "gguf", "phi3", "license:apache-2.0", "region:us", "conversational" ]
null
2024-07-31T17:55:13Z
---
license: apache-2.0
inference: false
---

# SLIM-SA-NER-PHI-3-GGUF

<!-- Provide a quick summary of what the model is/does. -->

**slim-sa-ner-phi-3-gguf** is a 4_K_M quantized GGUF version of [**slim-sa-ner**](https://huggingface.co/llmware/slim-sa-ner), providing a small, fast inference implementation, optimized for multi-model concurrent deployment.

slim-sa-ner combines two of the most popular traditional classifier functions (Sentiment Analysis and Named Entity Recognition) and reimagines them as function calls on a specialized decoder-based LLM, generating output consisting of a Python dictionary with keys corresponding to sentiment and NER identifiers, such as people, organization, and place, e.g.:

    {'sentiment': ['positive'], 'people': ['..'], 'organization': ['..'], 'place': ['..']}

This 3B parameter 'combo' model is designed to illustrate the potential power of using function calls on small, specialized models, enabling a single model architecture to combine capabilities that traditionally required two separate encoder-based model architectures.

The intent of SLIMs is to forge a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs, providing an intuitive, flexible natural language response, without complex prompting, and with improved generalization and the ability to fine-tune to a specific domain use case.

To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/slim-sa-ner-phi-3-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    model = ModelCatalog().load_model("slim-sa-ner-phi-3-gguf")
    response = model.function_call(text_sample)

    # this one line will download the model and run a series of tests
    ModelCatalog().tool_test_run("slim-sa-ner-phi-3-gguf", verbose=True)

Note: please review [**config.json**](https://huggingface.co/llmware/slim-sa-ner-phi-3-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.

## Model Card Contact

Darren Oberst & llmware team

[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF
jacobcarajo
2024-08-22T16:00:53Z
5
0
transformers
[ "transformers", "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-08-22T16:00:36Z
---
base_model: microsoft/Phi-3.5-mini-instruct
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF

This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF --hf-file phi-3.5-mini-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF --hf-file phi-3.5-mini-instruct-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF --hf-file phi-3.5-mini-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jacobcarajo/Phi-3.5-mini-instruct-Q5_K_M-GGUF --hf-file phi-3.5-mini-instruct-q5_k_m.gguf -c 2048
```
llmware/slim-summary-phi-3-gguf
llmware
2024-08-22T15:49:22Z
27
1
transformers
[ "transformers", "gguf", "phi3", "license:apache-2.0", "region:us", "conversational" ]
null
2024-07-24T15:44:12Z
---
license: apache-2.0
inference: false
---

# SLIM-SUMMARY-PHI-3-GGUF

<!-- Provide a quick summary of what the model is/does. -->

**slim-summary-phi-3** is a finetune of Phi-3 Mini (3.8B parameters) that implements a function-calling summarization model, packaged as a 4_K_M quantized GGUF for small, fast, locally deployable inference. It produces high-quality summaries of complex business documents, with the summary output structured as a Python list of key points.

The model takes as input a text passage, an optional parameter with a focusing phrase or query, and an experimental optional (N) parameter, which is used to guide the model to a specific number of items to return in a summary list.

Please see the usage notes at: [**slim-summary**](https://huggingface.co/llmware/slim-summary)

To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/slim-summary-phi-3-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    model = ModelCatalog().load_model("slim-summary-phi-3-gguf")
    response = model.function_call(text_sample)

    # this one line will download the model and run a series of tests
    ModelCatalog().tool_test_run("slim-summary-phi-3-gguf", verbose=True)

Note: please review [**config.json**](https://huggingface.co/llmware/slim-summary-phi-3-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.

## Model Card Contact

Darren Oberst & llmware team

[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
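To pass the optional focusing phrase and (N) parameters described above through llmware, a call along the following lines may work; the `params` keyword and the "focus phrase (N)" string format are assumptions drawn from other SLIM model examples, so please confirm against the [**slim-summary**](https://huggingface.co/llmware/slim-summary) usage notes:

    from llmware.models import ModelCatalog

    model = ModelCatalog().load_model("slim-summary-phi-3-gguf")

    text_sample = "..."  # a business document passage to summarize

    # hypothetical call shape: focus the summary on 'financial results'
    # and ask for roughly 3 key points
    response = model.function_call(text_sample, params=["financial results (3)"])
    print(response)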