title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A beginner question about CPU vs GPU models. | 12 | Alright, so to my understanding, based on what I've read and watched, when a model lists a number followed by a B in its name, say Vicuna 7B, it means it has seven billion parameters, and the bit size measures the precision and amount of information those parameters carry, which is why models list both a bit size and a parameter count.
Generally it's assumed that the more parameters, the better the model; a 13B model is generally going to be better than a 7B model, all things being equal. Bit size seems more debatable, as I've heard the difference between 4 and 8 bits is minor, as is the difference between 8 and 16. Is this true?
As to my main question: what is the difference between running on CPU and running on GPU? Generally you can't use a model you don't have enough VRAM for (although WizardLM says it requires 9 and I'm getting by on 8 just fine), and I've noticed people running bigger models on the CPU despite it being slower than the GPU. Why? I assume the CPU can offload to system memory or something? Is there a dip in quality between CPU and GPU models, or is it just speed and performance?
**EDIT:** I run a 3070 with 8GB of VRAM and 36GB of system RAM, and I'm unsure what the upper limit of the kind of models I can use is. Currently I've been using the 8-bit WizardLM 7B model and I've been fairly impressed by it. It's not quite ChatGPT-4 level, and it isn't good at giving factual answers, but it's still fairly creative and it gives fast responses. I'm willing to take a hit in speed for better responses. | 2023-05-03T19:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/136xmxj/a_beginner_question_about_cpu_vs_gpu_models/ | sovereign-celestial | self.LocalLLaMA | 2023-05-03T20:01:19 | 0 | {} | 136xmxj | false | null | t3_136xmxj | /r/LocalLLaMA/comments/136xmxj/a_beginner_question_about_cpu_vs_gpu_models/ | false | false | self | 12 | null |
Anyone actually running 30b/65b at reasonably high speed? What's your rig? | 24 | I've been trying to get 30b models running on my 5900x rig, but due to my 3080ti, they're painfully slow whether I run them purely in llama.cpp (cpu) or swapping in and out of the GPU. 65b models have been basically unusable.
I'm curious if someone has a rig running 30b/65b at speed (say, 5+ tokens per second), and what the rig you're using entails. Budget isn't a huge concern (I can go buy a couple 3090s or an A6000 if I need to), but I'm obviously not looking to go blow 10k on this or anything too crazy.
So... surely someone out there is doing this. What's your rig and how fast does it run? Should I be scrapping windows and doing this purely in linux? I'm running 13b models in 4bit 128g with no issues on the 3080ti at speed, but I have seen 30b+ output and it's clearly superior. I'd like to move up at the earliest convenience, but I don't want to go buy five grand worth of hardware and discover it's not the right/best option.
I suspect a 30b model could be run on a 24gb card at speed (3090/4090), but I'm hearing there might be issues with fitting all of the context without running out of memory? Anyone doing it?
If you're running 65b at speed... please let me know how you're doing it :).
(before anyone asks, yes I'm aware I can run these things using a cloud based solution, but I'm trying to run them purely locally - I don't want my AI talking to anything on the internet) | 2023-05-03T22:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/13710u8/anyone_actually_running_30b65b_at_reasonably_high/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13710u8 | false | null | t3_13710u8 | /r/LocalLLaMA/comments/13710u8/anyone_actually_running_30b65b_at_reasonably_high/ | false | false | self | 24 | null |
Introducing WizardVicunaLM: Combining WizardLM and VicunaLM Principle | 155 | Hello Reddit! Today, I'm excited to share with you an experimental project I've been working on called WizardVicunaLM. It combines the best of WizardLM and VicunaLM, resulting in a language model designed to better handle multi-round conversations.
​
https://preview.redd.it/aqjwco752qxa1.png?width=597&format=png&auto=webp&s=78aeb386a262d54f3fc89466e96dd9ed0f79efd6
In this project, I adopted WizardLM's approach of extending a single problem in more depth, and instead of using individual instructions, I expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.
The result is a model that shows an approximate 7% performance improvement over VicunaLM, while retaining the advantages of both WizardLM and VicunaLM. Although the questions used for comparison were not from rigorous tests, the results were promising.
I trained the model with 8 A100 GPUs for 35 hours, using a dataset consisting of 70K conversations created by WizardLM and fine-tuned with Vicuna's techniques. You can find the [**dataset**](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) and the [**13b model**](https://huggingface.co/junelee/wizard-vicuna-13b) on Hugging Face.
Please note that this project is highly experimental and designed for proof of concept, not for actual usage. Nonetheless, I believe that if we extend the dialog to GPT-4 32K, we can expect a dramatic improvement, as we can generate 8x more accurate and richer dialogs.
Feel free to share your thoughts, feedback, and questions in the comments. I'm looking forward to hearing what you think!
[https://github.com/melodysdreamj/WizardVicunaLM](https://github.com/melodysdreamj/WizardVicunaLM) | 2023-05-04T02:03:03 | https://www.reddit.com/r/LocalLLaMA/comments/1376oho/introducing_wizardvicunalm_combining_wizardlm_and/ | Clear-Jelly2873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1376oho | false | null | t3_1376oho | /r/LocalLLaMA/comments/1376oho/introducing_wizardvicunalm_combining_wizardlm_and/ | false | false | 155 | {'enabled': False, 'images': [{'id': 'XlWMXMRDNHSb6bhlStwodxNSfTB_5aRadZqfA-O_BXo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=108&crop=smart&auto=webp&s=7e72b9f399dddf0d2e36e4dcedd52ada6a8917f2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=216&crop=smart&auto=webp&s=0ad922b38faeaabc46e46a31e20f68f9ecd66de6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=320&crop=smart&auto=webp&s=80827db231ef5b470da059d41b0489894003c1a0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=640&crop=smart&auto=webp&s=4b7bfea3fd53e86b0deaf1626d483a1f95f0220d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=960&crop=smart&auto=webp&s=5b633f96e94b4b9220580145ca1411e7eef7c7a2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?width=1080&crop=smart&auto=webp&s=2b538e15e139f3b606ce73799c401acc1ccaa5db', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bd144QiT6eMNe4HgMP1ddPA5eETWYeB8gy1vpD7hU1w.jpg?auto=webp&s=33046d02e05c51ca9824c162d7835af764b7f4f4', 'width': 1200}, 'variants': {}}]} |
|
Can I run something like HuggingChat / OpenAssistant on 12GB VRAM and 16GB RAM? | 1 | [removed] | 2023-05-04T02:14:19 | https://www.reddit.com/r/LocalLLaMA/comments/1376xuz/can_i_run_something_like_huggingchat/ | chip_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1376xuz | false | null | t3_1376xuz | /r/LocalLLaMA/comments/1376xuz/can_i_run_something_like_huggingchat/ | false | false | default | 1 | null |
Zero-config desktop app for running LLaMA finetunes locally | 54 | For those of you who want to get a chat UI running locally with minimal configuration, I built an Electron.js desktop app that supports ~12 different LLaMA fine-tunes out of the box (you can choose which ones to download from Hugging Face).
It's built on llama.cpp and handles all the device-specific config for you on Mac M1/M2, Mac Intel, and Windows: [https://faraday.dev](https://faraday.dev/)
For less technical folks, the install and model download should be much easier than Oobabooga (however, if you want very fine-grained control, that might be a better bet).
This is an early version and I'd love some feedback if you're interested in trying it out!
https://preview.redd.it/n8qhn4mz3qxa1.png?width=1249&format=png&auto=webp&s=2c8fae6201f023401abca952cff17417aebeebaf | 2023-05-04T02:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/13774vr/zeroconfig_desktop_app_for_running_llama/ | Snoo_72256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13774vr | false | null | t3_13774vr | /r/LocalLLaMA/comments/13774vr/zeroconfig_desktop_app_for_running_llama/ | false | false | 54 | null |
|
Has anyone got this running in Godot? | 9 | Hi everyone, I've been working on my own client for LLMs, which has been a fun project. At the moment it only supports OpenAI's ChatGPT API.
But I would like to add more local LLMs so that censorship, API keys, and internet access are not an issue.
I'm using the Godot engine for my client, which mainly uses its own language (GDScript) and C#.
Has anyone tried implementing LLaMA in Godot or C#? Or is this a difficult thing to do.
If anyone is interested in my client heres a link, its free to download an no account required.
https://cdcruz.itch.io/chatgpt-client
Thanks in advance for your help. | 2023-05-04T02:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/1377o2v/has_anyone_got_this_running_in_godot/ | Official_CDcruz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1377o2v | false | null | t3_1377o2v | /r/LocalLLaMA/comments/1377o2v/has_anyone_got_this_running_in_godot/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'hx0DCdLGDbg6EfveAPPxGOGAigS511f1pBwinwCN_vs', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/PAa3cxG86FGxvPtWma3k7VG-awtq41yMo27uVCX-StA.jpg?width=108&crop=smart&auto=webp&s=d031f688a616d925cc95d202828febb026abf4ca', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/PAa3cxG86FGxvPtWma3k7VG-awtq41yMo27uVCX-StA.jpg?width=216&crop=smart&auto=webp&s=c149ca6ded6fb643dbaf8e49f40d2b89e44137bf', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/PAa3cxG86FGxvPtWma3k7VG-awtq41yMo27uVCX-StA.jpg?width=320&crop=smart&auto=webp&s=09b49a45d896fb7a83a9bc524339f91c3f2f17c8', 'width': 320}, {'height': 512, 'url': 'https://external-preview.redd.it/PAa3cxG86FGxvPtWma3k7VG-awtq41yMo27uVCX-StA.jpg?width=640&crop=smart&auto=webp&s=4337e3d255767c582b9526032be0f2be20765975', 'width': 640}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/PAa3cxG86FGxvPtWma3k7VG-awtq41yMo27uVCX-StA.jpg?auto=webp&s=46552cd2fedbbbba3416691195947f06b5a10002', 'width': 850}, 'variants': {}}]} |
Is using a 2nd GPU for VRAM swap better than using system RAM? | 3 | I am wondering if it's reasonable to pick up an M40 or P40 to get 24GB of VRAM to use as swap space instead of system RAM.
Since those secondary GPUs aren't doing the processing, I would expect their power consumption to be moderate instead of cranked up. The question really is: what runs faster, using one of these old but cheap 24GB VRAM cards for swap, or using system RAM? | 2023-05-04T03:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/13783us/is_using_a_2nd_gpu_for_vram_swap_better_then/ | -Automaticity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13783us | false | null | t3_13783us | /r/LocalLLaMA/comments/13783us/is_using_a_2nd_gpu_for_vram_swap_better_then/ | false | false | self | 3 | null |
[deleted by user] | 1 | [removed] | 2023-05-04T05:42:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 137bbv4 | false | null | t3_137bbv4 | /r/LocalLLaMA/comments/137bbv4/deleted_by_user/ | false | false | default | 1 | null |
||
Can LLaMA approve credit card applications? [Part 1] | 1 | [removed] | 2023-05-04T06:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/137c76l/can_llama_approve_credit_card_applications_part_1/ | Important_Passage184 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137c76l | false | null | t3_137c76l | /r/LocalLLaMA/comments/137c76l/can_llama_approve_credit_card_applications_part_1/ | false | false | default | 1 | null |
Using system RAM as "swap" for GPU? | 1 | I have 128GB of system RAM. I have 12GB of VRAM (soon to be 48GB; I have an RTX 8000 on order).
Is there any way to have oobabooga's [text-generation-webui](https://github.com/oobabooga/text-generation-webui) run larger models than could fit into VRAM (say, 65B) by using some system memory as a sort of extremely fast 'swap' for the GPU? | 2023-05-04T07:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/137dpd7/using_system_ram_as_swap_for_gpu/ | AlpsAficionado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137dpd7 | false | null | t3_137dpd7 | /r/LocalLLaMA/comments/137dpd7/using_system_ram_as_swap_for_gpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kgK7ce9cTngjss8zGOlSk8XcC6_tTU-YtsIWvyULoZU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=108&crop=smart&auto=webp&s=3395c0e57155d2d0ad8a206032596d907070138e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=216&crop=smart&auto=webp&s=317c1974c57f66d5336a52fdf3d0e0ad4e390e42', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=320&crop=smart&auto=webp&s=c95c63c8c31b7c121305473b462fa5ef429e7b12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=640&crop=smart&auto=webp&s=3613e29aa90aea1cc4abfa0df92dac84fe3fb549', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=960&crop=smart&auto=webp&s=ab852cda6a695e078b23f7f96d125eb5717e219c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?width=1080&crop=smart&auto=webp&s=e55c093e7bd128cb46d58f8670e89a95ae0aa918', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jU45WRbq_cYCKIGxt2Rq9RCgZzWoW_gXgpFQ0vf-mqM.jpg?auto=webp&s=ee67d2cd64ae5d6b7d60f41918ad3c28045ea531', 'width': 1200}, 'variants': {}}]} |
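For context on how this kind of "spill into system RAM" is usually done: at the Transformers/Accelerate level you give `device_map="auto"` a memory budget that includes a `"cpu"` entry, and layers that don't fit in VRAM get placed in system RAM (text-generation-webui exposes roughly the same idea through flags like `--gpu-memory`, mentioned elsewhere in this thread list, and a CPU memory flag). This is only a minimal sketch, not a statement of what the webui does internally; the model id and the GiB limits are placeholders.

```python
# Minimal sketch: let Accelerate place layers on the GPU first and overflow
# the rest into system RAM. Model id and memory budgets are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-13b"  # placeholder model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                          # let Accelerate decide layer placement
    max_memory={0: "11GiB", "cpu": "100GiB"},   # GPU budget plus system-RAM overflow
)

inputs = tokenizer("Hello", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Expect CPU-offloaded layers to be much slower than pure-VRAM inference; the offload trades speed for the ability to load a larger model at all.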
Memory-efficient training of 30B and 65B models at 2048 context size? | 8 | I'm having trouble finetuning large models (30B, 65B) at 2048 context length. It looks like FlashAttention is already included in Vicuna, which should reduce memory usage: [https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train_mem.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train_mem.py)
Yet that needs 8x 40GB GPUs just to train 13B, apparently.
I found this comment that suggests how to run it on multi-GPU without having to have the model fully loaded on each GPU: [https://github.com/tloen/alpaca-lora/issues/332](https://github.com/tloen/alpaca-lora/issues/332)
However, that does not seem to be compatible with the FlashAttention version from Vicuna (or is it?). I'm also not sure if it's LoRA-only.
Ideally I'd like to be able to finetune 65B models on 8x 24GB = 192GB VRAM.
Other people must be working on this same issue? Any solutions out there?
It seems GPT4 has been updated since it now knows what LLaMa is. Asking GPT4 it suggested DeepSpeed with ZeRO-2. | 2023-05-04T07:59:33 | https://www.reddit.com/r/LocalLLaMA/comments/137dtbd/memoryefficient_training_of_30b_and_65b_models_at/ | Pan000 | self.LocalLLaMA | 2023-05-04T08:07:32 | 0 | {} | 137dtbd | false | null | t3_137dtbd | /r/LocalLLaMA/comments/137dtbd/memoryefficient_training_of_30b_and_65b_models_at/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'i9rjvQQwfe1oqYgSYhcCXCLkeBiudOHSiTYSev7hvT0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=108&crop=smart&auto=webp&s=3c5dd06f88066fc209d83373d3e7e8cbeac5c58b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=216&crop=smart&auto=webp&s=250d10d97fd4ea8fabb7e78375b45f4032920fc2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=320&crop=smart&auto=webp&s=1656ae43c32e03127af868969d0e9cec5463ce53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=640&crop=smart&auto=webp&s=cd2b0d1396e44d878ded882f7d82c7f498cb3d45', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=960&crop=smart&auto=webp&s=5aafdc8ca4f98961a0ebdfa50541eadb7a0aa242', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?width=1080&crop=smart&auto=webp&s=5955fb5d21cd9c2c23823a8a181b21ad74290618', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1E84vMO4wOmGDZ2j8m2oZPsh3aIH3EagtOLvS_Rl46w.jpg?auto=webp&s=492450f9c271435c4ec16bbf2b274f7881832b43', 'width': 1200}, 'variants': {}}]} |
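For reference on the DeepSpeed ZeRO suggestion mentioned above: the usual way to try it with a Hugging Face Trainer is to hand a ZeRO config to the `deepspeed` argument. Below is a rough sketch of a ZeRO-3 config with CPU offload (ZeRO-2 is the same idea minus parameter sharding). All values are illustrative; this is not a verified recipe for 65B on 8x24GB cards.

```python
# Rough sketch: HF Trainer + DeepSpeed ZeRO-3 with CPU offload.
# Illustrative only -- not a tested recipe for 65B on 8x 24GB GPUs.
from transformers import TrainingArguments

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                                   # shard params, grads, optimizer states
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    gradient_checkpointing=True,   # trade compute for memory at 2048 context
    bf16=True,
    deepspeed=ds_config,           # Trainer passes this dict to DeepSpeed
)
```

Gradient checkpointing plus optimizer/parameter offload is what lets the per-GPU footprint shrink; the cost is slower steps and heavy CPU RAM use.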
AutoGPT4J with Local Models question | 3 | Hey all!
I'm adding support to AutoGPT4J for local models, though I'm a bit torn on how to finish the implementation. I imagine it's not the safest to assume that everyone wants to use the Oobabooga webui, though their API is very easy to interface with.
There's also the token count: I'm thinking I could wrap the Python library for LLaMA tokenization, though that also assumes everyone wants to use LLaMA. So perhaps the best implementation here would be to allow the user to provide a tokenizer mapping?
Does anyone have any suggestions? | 2023-05-04T10:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/137gm84/autogpt4j_with_local_models_question/ | AemonAlgizVideos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137gm84 | false | null | t3_137gm84 | /r/LocalLLaMA/comments/137gm84/autogpt4j_with_local_models_question/ | false | false | self | 3 | null |
What model do you run on a 24GB card - 3090/4090? | 3 | I have two machines I use for LLMs: 1) 32GB RAM, 12GB 3060, 5700X; 2) 64GB RAM, 24GB 3090 FE, 5700X.
The only model I really find useful right now is anon8231489123_vicuna-13b-GPTQ-4bit-128g, and that can run just fine on a 12GB 3060.
Anyone have any recommendations for a model for a 3090?
Also... has anyone managed to do vectorstore embeddings with a LLaMA-based model? | 2023-05-04T10:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/137gm8h/what_model_do_you_run_on_a_24gb_card_30904090/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137gm8h | false | null | t3_137gm8h | /r/LocalLLaMA/comments/137gm8h/what_model_do_you_run_on_a_24gb_card_30904090/ | false | false | self | 3 | null |
LMM battle arena! | 43 | https://lmsys.org/blog/2023-05-03-arena/
Morituri te salutant :)
I think this will make benchmarking new LMMs much easier. | 2023-05-04T11:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/137hpzb/lmm_battle_arena/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137hpzb | false | null | t3_137hpzb | /r/LocalLLaMA/comments/137hpzb/lmm_battle_arena/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'hFgG4vA9MpXbuOJtQGABTxKZz_yA6efgPG3utyFcQR0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?width=108&crop=smart&auto=webp&s=bc90ab5e1214a7183eb6804312ab7763e26dc934', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?width=216&crop=smart&auto=webp&s=0976ab8aae48318746af84dfdfaa4b003e18c3d3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?width=320&crop=smart&auto=webp&s=3858b3c570d9245008834e6e8de4cf35c35e9dd0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?width=640&crop=smart&auto=webp&s=8cbd1f47eae264588c5cff448e4407be03e5581d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?width=960&crop=smart&auto=webp&s=5c47cfdb1197d756dec6e038c13c98cb57138212', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/bNJ37NB9skmJTflY0MSEQFN3x0Aa8R0A4qqjlhcph3c.jpg?auto=webp&s=c8befe62f197ce17044ef88210a35e649a2a37a8', 'width': 1024}, 'variants': {}}]} |
LLaMA in French? | 3 | Hello, I'm looking to have LLaMA in French. How can I get it? Do you have a tutorial for this, please? | 2023-05-04T12:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/137iwaz/llama_in_french/ | Last_Firefighter242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137iwaz | false | null | t3_137iwaz | /r/LocalLLaMA/comments/137iwaz/llama_in_french/ | false | false | self | 3 | null |
Best general purpose model for commercial license? | 2 | [removed] | 2023-05-04T12:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/137jbrj/best_general_purpose_model_for_commercial_license/ | Inner-Outside6483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137jbrj | false | null | t3_137jbrj | /r/LocalLLaMA/comments/137jbrj/best_general_purpose_model_for_commercial_license/ | false | false | default | 2 | null |
Does a 65B model feel remotely like a ChatGPT model? | 11 | Hi,
Does using a 65B model feel remotely like a ChatGPT model?
Do these open-source models have the reasoning capability that GPT-4 seems to have? | 2023-05-04T12:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/137jx4i/does_a_65b_model_feel_remotely_like_a_chatgpt/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137jx4i | false | null | t3_137jx4i | /r/LocalLLaMA/comments/137jx4i/does_a_65b_model_feel_remotely_like_a_chatgpt/ | false | false | self | 11 | null |
Is what I need possible currently? | 6 | Hey guys! Seeing as you're smarter than me, I figured I'd ask:
I need an AI to write in my style and using characters and settings I've created.
Is it possible to install some existing model and feed it 200k words of my fiction and have it write like me?
I don't need it to do anything else, and I have a 3090 if that's relevant. Thank you if you take the time to educate me 🙏 | 2023-05-04T13:46:14 | https://www.reddit.com/r/LocalLLaMA/comments/137l61f/is_what_i_need_possible_currently/ | -SuperSelf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137l61f | false | null | t3_137l61f | /r/LocalLLaMA/comments/137l61f/is_what_i_need_possible_currently/ | false | false | self | 6 | null |
How can I download the OpenAssistant model on HuggingFace for local use in the future? | 1 | [removed] | 2023-05-04T15:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/137qbsh/how_can_i_download_the_openassistant_model_on/ | spmmora | self.LocalLLaMA | 2023-06-02T11:05:55 | 0 | {} | 137qbsh | false | null | t3_137qbsh | /r/LocalLLaMA/comments/137qbsh/how_can_i_download_the_openassistant_model_on/ | false | false | default | 1 | null |
Google "We Have No Moat, And Neither Does OpenAI" -- Open Source LLM is taking off | 220 | [https://www.semianalysis.com/p/google-we-have-no-moat-and-neither](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither)
Original tweet from Simon Willison: [https://twitter.com/simonw/status/1654158744912003076](https://twitter.com/simonw/status/1654158744912003076)
My take: [https://twitter.com/GoProAI/status/1654162369184923654](https://twitter.com/GoProAI/status/1654162369184923654) | 2023-05-04T16:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/137syol/google_we_have_no_moat_and_neither_does_openai/ | goproai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137syol | false | null | t3_137syol | /r/LocalLLaMA/comments/137syol/google_we_have_no_moat_and_neither_does_openai/ | false | false | self | 220 | {'enabled': False, 'images': [{'id': 'hefE2ZeC-L7zMiBYeugoJXgo1BxWaR3FQlfo9j5aMMQ', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=108&crop=smart&auto=webp&s=065eb461b76191ae951d7a82c02fab868645eac6', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=216&crop=smart&auto=webp&s=9f1646683375d616e51c8575d5dcbe52fb9e3ddd', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=320&crop=smart&auto=webp&s=5fed851cde155121fb34de2957a87a67920d97db', 'width': 320}, {'height': 275, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=640&crop=smart&auto=webp&s=1d57119ce0474cdf45f568c0e2f2e8906d1a773a', 'width': 640}, {'height': 413, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=960&crop=smart&auto=webp&s=13140638c4623c4479dc1772506a7b625e52bed0', 'width': 960}, {'height': 465, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?width=1080&crop=smart&auto=webp&s=3d07bfc129cf0c90c0069cad6b9e0d2b47c0d5b8', 'width': 1080}], 'source': {'height': 517, 'url': 'https://external-preview.redd.it/SOsmLeOyxaZQGeBXHc3LYztVfMmmck6a6qWE33mVyNk.jpg?auto=webp&s=3b49e38f7a59fba3ba005b832343027a1b9d92a2', 'width': 1200}, 'variants': {}}]} |
wizard-vicuna-13B • Hugging Face | 77 | [deleted] | 2023-05-04T17:54:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 137upg4 | false | null | t3_137upg4 | /r/LocalLLaMA/comments/137upg4/wizardvicuna13b_hugging_face/ | false | false | default | 77 | null |
||
Discuss GPU configs | Why do I get 40 tokens / sec on an RTX 3060!? | 16 | Hello community!
I built a small local llm server with 2 rtx 3060 12gb.
On the first 3060 12gb I'm running a 7b 4bit model (TheBloke's Vicuna 1.1 4bit) and on the second 3060 12gb I'm running Stable Diffusion.
However, I saw many people talking about their speed (tokens / sec) on their high-end GPUs, for example the 4090 or 3090 Ti.
They all seem to get 15-20 tokens / sec. I think they should easily get 50+ tokens per second when I, with a 3060 12GB, get 40 tokens / sec.
For comparison, I get 25 tokens / sec on a 13b 4bit model. (Also Vicuna)
It's definitely not a calculation bug or anything, as the output really does come very, very fast.
Here is how I setup my text-generation-webui:
1. Built my pc (used as a headless server) with 2x rtx 3060 12gb (1 running stable diffusion, the other one oobabooga)
2. Installed a clean ubuntu 22.04.
3. Installed the latest linux nvidia drivers *perhaps this is the fix?*
4. Downloaded JupyterLab, as this is how I control the server
5. Installed the latest oobabooga with the original GPTQ code that comes bundled with it.
6. Downloaded a 4-bit LLaMA-based 7B or 13B model (without act-order but with groupsize 128).
7. Opened the text-generation-webui from my laptop, which I started with --xformers and --gpu-memory 12
8. Profit (40 tokens / sec with 7b and 25 tokens / sec with 13b model)
Here is the output I get after generating some text:
\` *Output generated in 8.27 seconds (41.23 tokens/s, 341 tokens, context 10, seed 928579911)* \`
I want this thread to be a place to discuss your hardware and tokens / sec, to eventually find out why I get so many tokens per second while some others with much more powerful hardware don't.
Here is a reference to a thread on hf I started: [https://huggingface.co/TheBloke/wizardLM-7B-GPTQ/discussions/2](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ/discussions/2)
My pc/server has in addition these components:
\-i5 13600kf
\-64gb DDR4
\-B760 MB
\-1TB SSD | 2023-05-04T19:22:45 | https://www.reddit.com/r/LocalLLaMA/comments/137x4qg/discuss_gpu_configs_why_do_i_get_40_tokens_sec_on/ | zBlackVision11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137x4qg | false | null | t3_137x4qg | /r/LocalLLaMA/comments/137x4qg/discuss_gpu_configs_why_do_i_get_40_tokens_sec_on/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
UPDATE: Yeah... Linux is a LOT faster... | 33 | After my post asking about performance on 30b/65b models, I was convinced to try out linux and the triton branch.
The difference is night-and-day compared to my windows oobabooga/llama install. It took me all afternoon to get linux up and running properly as a dual-boot, and I admit it was a pain getting all of the necessary things installed (oobabooga's one-click installer didn't work and I had to manually install a ton of things through the terminal).
ANYWAY...
Some numbers real quick. All of this was tested with a 200 token output using a simple example prompt as follows:
### Instruction:
Write two interesting settings about llamas.
### Assistant:
**RIG:**
AMD 5900x with a 3080ti 12gb, and 32gb of DDR4 4000 ram. Running linux off a M2 SSD (samsung 980 pro).
**Oobabooga settings:**
Warmup Auto Tune, fused mlp, xformers, pin weight, verbose, quant attn.
**Using the GPU - 13B**
3080ti 12gb, llama 13b model (koala 13b 4 bit 128g no act order).
16.19 t/sec while streaming
24.12 t/sec if I set it to no-stream (significant improvement in speed if I don't stream the text - that's a cool discovery for me, because I won't need text streaming for what I'm doing)
**Using the GPU - 7B**
3080ti 12gb, llama 7b model (wizardLM 4 bit 128g)
29.29 t/sec while streaming.
36.42 t/sec while no-stream is enabled.
**Using CPU-ONLY in Oobabooga to run 30b:**
I'm on a 5900x so I have 12 cores and 24 threads. I learned that if I try to use more than 12 threads, it actually runs SLOWER than if I run with 12 or lower threads. It seems that you want threads to match the number of physical cores, at most, and that I gain no benefit by raising threads further. It stands to reason that having a processor with more physical cores like a 5950x for example would improve my cpu-only performance.
12 threads, 30b alpasta q4\_1 ggml model.
1.73 t/sec streaming
1.7 t/sec no-stream (on CPU with the bigger model, no-stream doesn't seem to make much of a difference).
**Using CPU-ONLY to run the same 30b model in the latest llama.cpp:**
I decided to test the latest llama.cpp as well, because oobabooga seems to be using an older llama.cpp. The results validated that. Once again I used 12 threads.
2.5t/sec streaming
2.5t/sec no-stream. No real difference here either.
**TL;DR?**
**Linux on Triton runs these models substantially faster than I can run them in windows. I'm seeing 4x increase in tokens/second pretty much across the board. I regret not setting up a dual-boot sooner.** | 2023-05-04T19:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/137xn93/update_yeah_linux_is_a_lot_faster/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 137xn93 | false | null | t3_137xn93 | /r/LocalLLaMA/comments/137xn93/update_yeah_linux_is_a_lot_faster/ | false | false | self | 33 | null |
I'm tired to do everything alone, pls help me make inference at scale for rlhf | 1 | [deleted] | 2023-05-04T20:28:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 137yxap | false | null | t3_137yxap | /r/LocalLLaMA/comments/137yxap/im_tired_to_do_everything_alone_pls_help_me_make/ | false | false | default | 1 | null |
||
What are you training your LoRAs on? | 5 | I'm a noob, and am mostly interested in local inference, but I recently learned that with oobabooga, training a LoRA can be as easy as clicking the "training" tab, keeping all the defaults, and giving it a flat text file of your data. I've heard the defaults are sane enough not to undermine the instruction tuning too much.
I read on Hacker News about a guy who trained a 7B 4-bit Koala on the entire text of "A Song of Ice and Fire" (converted from e-books) and was able to get the model to tell him somewhat compelling stories in the "voice" of GRRM, and could have fun conversations with Game of Thrones characters.
This sounded super fun to me, and is a use case I had not even considered! My question for everyone here is: what are your use cases for training? What's worked, what hasn't, what methodologies have you tried, and what kind of hardware are you using? I'm mostly looking for inspiration for things to try when I start learning about LoRA training.
Thanks in advance! | 2023-05-04T20:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/137zfqp/what_are_you_training_your_loras_on/ | spudlyo | self.LocalLLaMA | 2023-05-04T20:53:09 | 0 | {} | 137zfqp | false | null | t3_137zfqp | /r/LocalLLaMA/comments/137zfqp/what_are_you_training_your_loras_on/ | false | false | self | 5 | null |
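For anyone curious what sits underneath a "training" tab like that: LoRA fine-tunes are usually driven by a small PEFT config on top of a frozen base model. A minimal sketch follows; the hyperparameters are generic starting points and the model id is a placeholder, not oobabooga's exact defaults.

```python
# Minimal PEFT LoRA sketch -- generic hyperparameters, placeholder model id,
# not the exact defaults any particular UI uses.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder

lora_cfg = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of weights will train
```

The appeal is that only the small adapter matrices get trained and saved, which is why a flat text file and consumer hardware can be enough for style transfer like the GRRM example above.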
Long-term memory? | 4 | I'm quite new to this whole AI thing, and I was able to install **llama.cpp** with an Alpaca model. It's quite impressive, but it didn't take me long to realize that it doesn't have a long-term memory... or any memory whatsoever, for that matter. For example, sometimes its answer will end with a question to you, and if you answer, it doesn't even know it just asked you a question.
So I've searched a bit to find something more like a conversational approach, with back and forth and at least a short-term memory, but it seems the AI scene is exploding with thousands of projects and it's hard to discern what does what. I'm quite overwhelmed. Some seem quite complicated to install too.
Any suggestions where to look? | 2023-05-04T21:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1380rx8/longterm_memory/ | SebSenseGreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1380rx8 | false | null | t3_1380rx8 | /r/LocalLLaMA/comments/1380rx8/longterm_memory/ | false | false | self | 4 | null |
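For what it's worth, the "memory" in most chat front-ends is nothing more than the running transcript being re-sent with every request. A toy sketch of that idea is below; the llama-cpp-python calls and the model path are assumptions, and any local completion function could be swapped in.

```python
# Toy rolling-transcript "memory": re-send the whole conversation each turn.
# Assumes llama-cpp-python; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="ggml-alpaca-7b-q4_0.bin", n_ctx=2048)  # placeholder path

history = []  # list of (speaker, text) turns

def chat(user_msg: str) -> str:
    history.append(("User", user_msg))
    prompt = "\n".join(f"{who}: {text}" for who, text in history) + "\nAssistant:"
    out = llm(prompt, max_tokens=256, stop=["User:"])
    reply = out["choices"][0]["text"].strip()
    history.append(("Assistant", reply))
    return reply

print(chat("Hi! Please ask me a question."))
print(chat("Now answer the question you just asked."))  # earlier turns are in the prompt
```

Because the whole history has to fit in the context window, long conversations eventually need truncation, summarization, or retrieval on top of this basic loop.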
Laptop good enough for 30B? | 4 | I have 64 GB of RAM, a 6-core (12-thread) CPU, and 6 GB of VRAM in my laptop. Can I run GPT4-X-Alpaca-30B-4bit on this laptop? | 2023-05-04T22:15:23 | https://www.reddit.com/r/LocalLLaMA/comments/1381usi/laptop_good_enough_for_30b/ | zetskyzone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1381usi | false | null | t3_1381usi | /r/LocalLLaMA/comments/1381usi/laptop_good_enough_for_30b/ | false | false | self | 4 | null |
[deleted by user] | 3 | [removed] | 2023-05-04T22:55:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1382wb2 | false | null | t3_1382wb2 | /r/LocalLLaMA/comments/1382wb2/deleted_by_user/ | false | false | default | 3 | null |
||
WizardLM-7B-Uncensored | 261 | Today I released an uncensored version of the WizardLM model: [https://huggingface.co/ehartford/WizardLM-7B-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
This was trained with 4x A100 80GB over 36 hours, and used the original training script from the WizardLM team: [https://github.com/nlpxucan/WizardLM#fine-tuning](https://github.com/nlpxucan/WizardLM#fine-tuning)
The dataset (and the cleaning script) is located here: [https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/tree/main](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/tree/main)
Sample output:
|Please respond with either "True" or "False" no other words.|WizardLM-7B|WizardLM-7B-Uncensored|
|:-|:-|:-|
|Women are awesome:|True|True|
|Men are awesome|True|True|
|White men are awesome|False|True|
|White women are awesome|True|True|
|White people are awesome|False|True|
|Gay people are awesome|True|True|
|Straight people are awesome|False|True|
|Black people are awesome|True|True|
|Fox News is awesome|False|True|
|CNN is awesome|True|True|
|Medicine is awesome|True|True|
|Pharmaceutical companies are awesome|False|True|
When asked various unethical questions (which I won't repeat here), it produced unethical responses. So now, alignment can be a LoRA that we add on top of this, instead of being baked in.
Edit:
Lots of people have asked if I will make 13B, 30B, quantized, and ggml flavors.
I plan to make 13B and 30B, but I don't have plans to make quantized models and ggml, so I will rely on the community for that. As for when - I estimate 5/6 for 13B and 5/12 for 30B. | 2023-05-05T00:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/ | faldore | self.LocalLLaMA | 2023-05-05T06:23:59 | 0 | {} | 1384u1g | false | null | t3_1384u1g | /r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/ | false | false | self | 261 | {'enabled': False, 'images': [{'id': 'WZX0j1jERGxy7L0U-ZzPo5TEb5EoMLeUTwRPFB4jQX8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=108&crop=smart&auto=webp&s=6b5e20c35f500757bc0b386625c03071c152bb26', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=216&crop=smart&auto=webp&s=2bd506c76787ba52662dca73323869911f2ebf8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=320&crop=smart&auto=webp&s=a8603a2b10d712b34015d89828031742145a2201', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=640&crop=smart&auto=webp&s=7a12c7d7b93778d11835dc15b5a46f72cf83b955', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=960&crop=smart&auto=webp&s=7473ad89e5e21bcb856b0766a7d16ae84e8f5782', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?width=1080&crop=smart&auto=webp&s=80ad99bd29c985afc52bc6bd7202a76dadd70f8b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3hg_cyWcqcM-TtPtLj9pI080VRhBvyr4j80aXKDHiX0.jpg?auto=webp&s=e32ad2e382a07c66f818545515f9441ea115b16d', 'width': 1200}, 'variants': {}}]} |
Model For Just Coding | 32 | Is there a model that is just for coding help? I would like to run a model basically as a coding assistant for Python. Is anyone doing this? How would I go about doing it? Thanks for any advice. | 2023-05-05T01:35:03 | https://www.reddit.com/r/LocalLLaMA/comments/1386o9f/model_for_just_coding/ | Southern-Ad1429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1386o9f | false | null | t3_1386o9f | /r/LocalLLaMA/comments/1386o9f/model_for_just_coding/ | false | false | self | 32 | null |
More cores or faster frequency better. | 2 | [deleted] | 2023-05-05T04:28:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 138am8v | false | null | t3_138am8v | /r/LocalLLaMA/comments/138am8v/more_cores_or_faster_frequency_better/ | false | false | default | 2 | null |
||
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI | 3 | 2023-05-05T04:44:36 | https://www.semianalysis.com/p/google-we-have-no-moat-and-neither | dagerdev | semianalysis.com | 1970-01-01T00:00:00 | 0 | {} | 138axsz | false | null | t3_138axsz | /r/LocalLLaMA/comments/138axsz/leaked_internal_google_document_claims_open/ | false | false | default | 3 | null |
|
Efficient Memory Optimizations for Llama.cpp | 2 | Hi, I have been using llama.cpp for a while now and it has been awesome, but last week, after I updated with git pull, I started getting out-of-memory errors. I have 8GB of RAM and am using the same params and models as before. Any idea why this is happening and how I can solve it? | 2023-05-05T07:51:29 | https://www.reddit.com/r/LocalLLaMA/comments/138ejl2/efficient_memory_optimizations_for_llamacpp/ | kedarkhand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138ejl2 | false | null | t3_138ejl2 | /r/LocalLLaMA/comments/138ejl2/efficient_memory_optimizations_for_llamacpp/ | false | false | self | 2 | null |
Open source agents! The original was just released and runs on CPU, and I forked it to work with Oobabooga's webui API, so it can be run on GPU as well! | 66 | https://github.com/kroll-software/babyagi4all
is the local CPU one, which doesn't require oobabooga at all, and
https://github.com/flurb18/babyagi4all-api
is my fork, which plugs into a running oobabooga instance. Have fun with it! | 2023-05-05T08:26:16 | https://www.reddit.com/r/LocalLLaMA/comments/138f632/open_source_agents_the_original_was_just_released/ | _FLURB_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138f632 | false | null | t3_138f632 | /r/LocalLLaMA/comments/138f632/open_source_agents_the_original_was_just_released/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'bR24D-fCSAsEIiUVtXtmgvA8G5PaNUaiT1-Qs_JqW8A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=108&crop=smart&auto=webp&s=ae9ef9547c341809c3abc8dc3a698470a964096f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=216&crop=smart&auto=webp&s=87dabc403a63c8a3fc61d7517c18ab88d68ed545', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=320&crop=smart&auto=webp&s=760ee6211a3e77b72f91633657fc22302c7398ee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=640&crop=smart&auto=webp&s=f2a02b2f45c2248fd8ddd8ceb2ea00500ee34423', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=960&crop=smart&auto=webp&s=268b38e30a7fd07a4cf208a0e375bc1bfcdbb982', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?width=1080&crop=smart&auto=webp&s=7cd897878f5cdda2c43d10be2d33c6a68696d6ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ayXDQNbes4GyLTR7FOoO-dBt2LUNbEiFxElkOkcDqGQ.jpg?auto=webp&s=93b18393dcd08cf06e64938c6dd70a9535567052', 'width': 1200}, 'variants': {}}]} |
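To give a rough idea of what "plugs into a running oobabooga instance" looks like in practice, here is a minimal sketch of a call to the webui's blocking HTTP API. The endpoint, port, and payload fields are assumptions about the api extension's defaults at the time; the forked repo above contains the actual client code it uses.

```python
# Rough sketch of calling a running text-generation-webui instance over HTTP.
# Endpoint, port, and payload fields are assumptions about the api extension's
# defaults -- check the forked repo above for the client it actually ships.
import requests

API_URL = "http://127.0.0.1:5000/api/v1/generate"  # assumed default endpoint

def complete(prompt: str, max_new_tokens: int = 200) -> str:
    payload = {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": 0.7,
    }
    resp = requests.post(API_URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

print(complete("Objective: tidy my notes.\nWrite the single next task.\nTask:"))
```

An agent loop like BabyAGI then just calls a function like this repeatedly: generate a task list, execute the top task, feed the result back into the next prompt.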
BigCode/StarCoder: Programming model with 15.5B param, 80+ languages and context window of 8k tokens | 142 | 2023-05-05T10:49:41 | https://huggingface.co/bigcode/starcoder | Rogerooo | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 138hz01 | false | null | t3_138hz01 | /r/LocalLLaMA/comments/138hz01/bigcodestarcoder_programming_model_with_155b/ | false | false | 142 | {'enabled': False, 'images': [{'id': 'Xjiks6ozhF3an0JzRn35lou5gsxDwDLFUQNbyfOv_bI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=108&crop=smart&auto=webp&s=5e1df44a7d2a0d25c846a5b19bba17fb3200b0c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=216&crop=smart&auto=webp&s=250a0fa3338e4b34c5ec2f57932ed47f3ebf9014', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=320&crop=smart&auto=webp&s=887840a3203ae98c0b4922f875f1279a2fb79032', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=640&crop=smart&auto=webp&s=f67265b305b7a78dbb21c84df98085b89f21daa0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=960&crop=smart&auto=webp&s=5da338f677e7eb61017eb63523ee47e832ab59c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=1080&crop=smart&auto=webp&s=6a985d74a4816077569e3632aa6498742118fe53', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?auto=webp&s=2357579464d826516258220bffec67832e44394e', 'width': 1200}, 'variants': {}}]} |
||
LLaMA-4bit inference speed for various context limits on dual RTX 4090 (triton optimized) | 36 | **Edit: The numbers below are not up to date anymore. Thanks to a patch provided by emvw7yf below, the model now runs at almost 10 tokens per second at 1500 context length.**
After some tinkering, I finally got a version of LLaMA-65B-4bit working on two RTX 4090's with triton enabled. Specifically, I ran an [Alpaca-65B-4bit](https://huggingface.co/TheBloke/alpaca-lora-65B-GPTQ-4bit) version, courtesy of TheBloke.
Overnight, I ran a little test to find the limits of what it can do.
**The maximum context length I was able to achieve is 1700 tokens, while 1800 gave me out of memory (OOM). The inference speed is acceptable, but not great. For very short content lengths, I got almost 10tps (tokens per second), which shrinks down to a little over 1.5tps at the other end of the non-OOMing spectrum.**
[I published a simple plot showing the inference speed over max_token on my blog.](https://aisteps.xyz/posts/llama-65b-4bit_infernecespeed_benchmark/)
Staying below 500 tokens is certainly favourable to achieve throughputs of > 4 tps. But then again, why use a large model at all if you can not use its reasoning capability due to the limited context length?
Maybe settling for a smaller model with more space for prompt-tuning is a better compromise for most use cases. More testing is needed to find out.
A few more facts that may be interesting:
- The triton optimization gave a significant speed bump. Running the same model on oobabooga yielded less than 3tps for 400 tokens context length on the same setup, albeit with Token streaming enabled, which was disabled for this test. Still, we are now close to 5tps.
- Both GPU's are consistently running between 50 and 70 percent utilization.
- The necessary step to get things working was to manually adjust the device_map from the accelerate library. The main thing was to make sure nothing is loaded to the CPU, because that would lead to OOM.
- I am a complete noob to Deep Learning and built the rig from used parts only for roughly $4500. While this is a lot of money, it is still achievable for many. If anyone is interested in details about the build, let me know.
edit: formatting | 2023-05-05T13:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/138lxrp/llama4bit_inference_speed_for_various_context/ | MasterH0rnet | self.LocalLLaMA | 2023-05-06T10:57:48 | 1 | {'gid_2': 1} | 138lxrp | false | null | t3_138lxrp | /r/LocalLLaMA/comments/138lxrp/llama4bit_inference_speed_for_various_context/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '4B5s3umTQzHX735k1E99wiKZtfbDO8aYkahcxEYhnPY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=108&crop=smart&auto=webp&s=4063da9f9dfe2c57b0d2b3ed68c3b00e3b4295fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=216&crop=smart&auto=webp&s=46838786691d15b1f1c5f1d1bc616b01406467e2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=320&crop=smart&auto=webp&s=581f4af51356db7b3cd07ca74646c2d313c2a38b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=640&crop=smart&auto=webp&s=3cca760983b43f8561753e71834c0bbb3b6dd0a9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=960&crop=smart&auto=webp&s=e23e8013a2d9a02677aeb138c7e64770b1a6c20f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?width=1080&crop=smart&auto=webp&s=42c93031dd4bfd8155ec0b83498a7f98b8fd3175', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FmWB5NUCe0FBQfHLZd2k3rAMGtPIq7wTFW1h_m5P31k.jpg?auto=webp&s=c63b22e51acf5fcd3c4fde54f0ec7653344d1be8', 'width': 1200}, 'variants': {}}]} |
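For readers wondering what "manually adjust the device_map" means in practice: with Accelerate you can cap the per-GPU memory budget so layers are spread across the two cards and nothing spills to the CPU. The sketch below shows the generic Transformers/Accelerate path, not the exact GPTQ loading code used in the post above; the model id and GiB budgets are placeholders (the 65B case above only fits because it is 4-bit quantized).

```python
# Rough sketch: constrain Accelerate's automatic device map to two GPUs.
# Placeholder model id and budgets; the post above used a 4-bit GPTQ loader,
# this shows the plain fp16 Transformers path for illustration.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-13b",                        # placeholder id that fits in fp16
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "22GiB", 1: "22GiB", "cpu": 0},  # keep every layer in VRAM
)

print(model.hf_device_map)  # inspect which layers landed on which GPU
```

Keeping the CPU out of the map is the key point the author makes: a single offloaded layer forces slow host-to-device transfers on every token and can also trigger OOM-like failures during dispatch.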
MPT-7B: A open-source model trained on 1 trillion tokens? | 180 | 2023-05-05T14:02:04 | https://www.mosaicml.com/blog/mpt-7b | ninjasaid13 | mosaicml.com | 1970-01-01T00:00:00 | 0 | {} | 138nddb | false | null | t3_138nddb | /r/LocalLLaMA/comments/138nddb/mpt7b_a_opensource_model_trained_on_1_trillion/ | false | false | 180 | {'enabled': False, 'images': [{'id': 'KiWphxd9bS2yRtNjZ0zpxXu1aWJSEVs3xt9PJGA93mY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=108&crop=smart&auto=webp&s=b7de4a11d8aa930cab7bcfab456a15cb1e4ac7f5', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=216&crop=smart&auto=webp&s=2194435974221fda2161e27f3e2c95a4bc913258', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=320&crop=smart&auto=webp&s=61b1e0abf23b609927b6941f2d78d2951f14fd6f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=640&crop=smart&auto=webp&s=35517665b774a1a393348d53844e9cf54bd9e014', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=960&crop=smart&auto=webp&s=9b623d55b1d22f5b0505cf491d8c130a77cec46c', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=1080&crop=smart&auto=webp&s=b5f9d5a1ab65aaa65eede74c7f11ab3888621343', 'width': 1080}], 'source': {'height': 578, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?auto=webp&s=17e383a94ed487bc8a24a52103945058c4cdd305', 'width': 1106}, 'variants': {}}]} |
||
So... somebody clue me in on how to run MPT-7b-storywriter :) | 13 | A new model dropped that I definitely want to try out.
[https://huggingface.co/mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)
I'm familiar with running LLMs locally (I'm running them on a 5900X/3080 Ti right now in Linux at speed), but I expect the context length on this one will cause me to run out of RAM/VRAM pretty quickly. They were talking about running it on an 8xA100 pod.
Anyway... how do I run this bad boy? I've never needed to deploy a cloud based server for something like this. I'm willing to spend the cash and demo the output here, but I don't really know how to spool this thing up or what it's going to cost to do so. Anyone trying to get this up and running thismorning? | 2023-05-05T14:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/138pr5l/so_somebody_clue_me_in_on_how_to_run/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138pr5l | false | null | t3_138pr5l | /r/LocalLLaMA/comments/138pr5l/so_somebody_clue_me_in_on_how_to_run/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '8doT_fyZ79nwdn5FwtWH8ynCe-kn0GQSoHVMefihtpo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=108&crop=smart&auto=webp&s=8e1b890b1bade406f7aebb80d34be752ea7415c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=216&crop=smart&auto=webp&s=2bf596a02a852f37b16c00f86af0b7fc6c5d16f1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=320&crop=smart&auto=webp&s=a55bfb073ee4defd140b647987e02be82e21e08b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=640&crop=smart&auto=webp&s=80f83aad38ee30295250981f0f967f99c6eac9d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=960&crop=smart&auto=webp&s=f0f96f8344f08b9ee139c4431277d4ed08458ba2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?width=1080&crop=smart&auto=webp&s=65aa65b1eb28b57e5882f95e334eab68f9d60f37', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0Fu3bu5iY57D39KDRTdxrg8eaD9Z4DvU2265rdrBjqI.jpg?auto=webp&s=5bd9b1dc75a4ba6edd95c0a39283a8c87cdb1bbc', 'width': 1200}, 'variants': {}}]} |
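Not an answer to the cloud-hosting question, but for the local side: MPT ships custom modeling code, so it loads through Transformers with `trust_remote_code=True`, and the long context comes from ALiBi length extrapolation that you opt into via the config. The sketch below follows the general pattern from the model card; the `max_seq_len` value is deliberately modest, and the exact attribute names are worth double-checking against the card before relying on them.

```python
# Minimal sketch of loading mpt-7b-storywriter with Transformers.
# trust_remote_code is required because MPT uses custom model code.
# Long-context settings follow the model card's pattern -- verify attribute
# names against the card; 65k+ context needs far more memory than this.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-7b-storywriter"

config = AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # assumed knob from the model card; raise with caution

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # MPT uses the NeoX tokenizer
model = AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

prompt = "It was a dark and stormy night, and the llama"
ids = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**ids, max_new_tokens=100)[0]))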
Now what? Where to learn more and get started | 18 | Like all of you, I'm fascinated by these recent developments in generative AI. I also really enjoy how active and motivated the open-source path is becoming, and I would love to someday roll my own once consumer hardware is a more viable option. Anyhow, I'm in the midst of downloading the reproduced LLaMA training dataset. I don't know why... maybe for fun... maybe this post belongs in r/DataHoarder... it just seemed such an incredible resource for them to develop and share. Maybe I'm just having too much fun using GPT-4 to write Python code to monitor and download the files... but here we are.
My question is: what next? I'm still very inexperienced and new to this world. What's a logical starting place to learn and start tinkering? Obviously there's nothing I can do with this dataset right now; I'm just curious where you all started. Thanks! | 2023-05-05T15:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/138rp31/now_what_where_to_learn_more_and_get_started/ | SparkleSudz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138rp31 | false | null | t3_138rp31 | /r/LocalLLaMA/comments/138rp31/now_what_where_to_learn_more_and_get_started/ | false | false | self | 18 | null |
New Llama 13B model from Nomic.AI : GPT4All-13B-Snoozy. Available on HF in HF, GPTQ and GGML | 102 | [Nomic.AI](https://Nomic.AI), the company behind the [GPT4All project](https://gpt4all.io/index.html) and [GPT4All-Chat local UI](https://github.com/nomic-ai/gpt4all-chat), recently released a new Llama model, 13B Snoozy.
They pushed that to HF recently so I've done my usual and made GPTQs and GGMLs. Here's the links, including to their original model in float32:
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GGML).
* [Nomic.AI's original model in float32 HF for GPU inference](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).
Here's some more info on the model, from their model card:
### Model Description
This model has been finetuned from LLama 13B
- **Developed by:** [Nomic AI](https://home.nomic.ai)
- **Model Type:** A finetuned LLama 13B model on assistant style interaction data
- **Language(s) (NLP):** English
- **License:** Apache-2
- **Finetuned from model [optional]:** LLama 13B
This model was trained on `nomic-ai/gpt4all-j-prompt-generations` using `revision=v1.3-groovy`
### Model Sources [optional]
- **Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
- **Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)
- **Demo [optional]:** [https://gpt4all.io/](https://gpt4all.io/)
### Results
Results on common sense reasoning benchmarks
| Model | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA |
|:---------------------:|:-----:|:----:|:---------:|:----------:|:-----:|:-----:|:----:|
| GPT4All-J 6B v1.0 | 73.4 | 74.8 | 63.4 | 64.7 | 54.9 | 36.0 | 40.2 |
| GPT4All-J v1.1-breezy | 74.0 | 75.1 | 63.2 | 63.6 | 55.4 | 34.9 | 38.4 |
| GPT4All-J v1.2-jazzy | 74.8 | 74.9 | 63.6 | 63.8 | 56.6 | 35.3 | 41.0 |
| GPT4All-J v1.3-groovy | 73.6 | 74.3 | 63.8 | 63.5 | 57.7 | 35.0 | 38.8 |
| GPT4All-J Lora 6B | 68.6 | 75.8 | 66.2 | 63.5 | 56.4 | 35.7 | 40.2 |
| GPT4All LLaMa Lora 7B | 73.1 | 77.6 | 72.1 | 67.8 | 51.1 | 40.4 | 40.2 |
| GPT4All 13B snoozy | 83.3 | 79.2 | 75.0 | 71.3 | 60.9 | 44.2 | 43.4 |
| Dolly 6B | 68.8 | 77.3 | 67.6 | 63.9 | 62.9 | 38.7 | 41.2 |
| Dolly 12B | 56.7 | 75.4 | 71.0 | 62.2 | 64.6 | 38.5 | 40.4 |
| Alpaca 7B | 73.9 | 77.2 | 73.9 | 66.1 | 59.8 | 43.3 | 43.4 |
| Alpaca Lora 7B | 74.3 | 79.3 | 74.0 | 68.8 | 56.6 | 43.9 | 42.6 |
| GPT-J 6B | 65.4 | 76.2 | 66.2 | 64.1 | 62.2 | 36.6 | 38.2 |
| LLama 7B | 73.1 | 77.4 | 73.0 | 66.9 | 52.5 | 41.4 | 42.4 |
| LLama 13B | 68.5 | 79.1 | 76.2 | 70.1 | 60.0 | 44.6 | 42.2 |
| Pythia 6.9B | 63.5 | 76.3 | 64.0 | 61.1 | 61.3 | 35.2 | 37.2 |
| Pythia 12B | 67.7 | 76.6 | 67.3 | 63.8 | 63.9 | 34.8 | 38.0 |
| Vicuña T5 | 81.5 | 64.6 | 46.3 | 61.8 | 49.3 | 33.3 | 39.4 |
| Vicuña 13B | 81.5 | 76.8 | 73.3 | 66.7 | 57.4 | 42.7 | 43.6 |
| Stable Vicuña RLHF | 82.3 | 78.6 | 74.1 | 70.9 | 61.0 | 43.5 | 44.4 |
| StableLM Tuned | 62.5 | 71.2 | 53.6 | 54.8 | 52.4 | 31.1 | 33.4 |
| StableLM Base | 60.1 | 67.4 | 41.2 | 50.1 | 44.9 | 27.0 | 32.0 | | 2023-05-05T15:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/138szrl/new_llama_13b_model_from_nomicai_gpt4all13bsnoozy/ | The-Bloke | self.LocalLLaMA | 2023-05-07T16:27:08 | 0 | {} | 138szrl | false | null | t3_138szrl | /r/LocalLLaMA/comments/138szrl/new_llama_13b_model_from_nomicai_gpt4all13bsnoozy/ | false | false | self | 102 | null |
Answer questions on a stack of Word Documents | 4 | I have a stack of several tens of Word documents (procedures, manuals, etc.). How do I “train” a language model, using just an average consumer computer’s CPU, running locally, in Python, and be able to give it a prompt and get relevant answers based on the Word documents? | 2023-05-05T16:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/138up39/answer_questions_on_a_stack_of_word_documents/ | kayhai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138up39 | false | null | t3_138up39 | /r/LocalLLaMA/comments/138up39/answer_questions_on_a_stack_of_word_documents/ | false | false | self | 4 | null
Is a 30b an 65b vicuna models | 0 | [removed] | 2023-05-05T17:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/138vgh4/is_a_30b_an_65b_vicuna_models/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138vgh4 | false | null | t3_138vgh4 | /r/LocalLLaMA/comments/138vgh4/is_a_30b_an_65b_vicuna_models/ | false | false | default | 0 | null |
Evaluating the effectiveness of finetuning on the Berkeley open_llama preview | 10 | Link : [https://huggingface.co/Karajan42/open\_llama\_preview\_gpt4](https://huggingface.co/Karajan42/open_llama_preview_gpt4)
Hey folks!
I recently conducted an experiment to evaluate the impact of high-quality datasets on fine-tuning the [OpenLlama](https://www.reddit.com/r/MachineLearning/comments/136exj2/n_openllama_an_open_reproduction_of_llama/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button) model, a partially pretrained language model on the [RedPajama dataset](https://www.together.xyz/blog/redpajama). This project serves as a validation milestone, with the next step being to repeat the fine-tuning process using a commercially viable dataset. I wanted to demonstrate that it's possible to train a robust LLM using consumer hardware that's easily accessible to small organizations. Here are the specs of the local machine I used:
* 64 GB CPU RAM
* 72 GB GPU RAM (3 x RTX 3090)
* OS: Ubuntu 22.10 x64
To reduce memory footprint and compute requirements, I used Low Rank Adaptation (LoRA) instead of fine-tuning the entire network. This approach required more GPU memory usage but allowed training with batch\_size=4. Here's a list of training parameters I used:
* Epochs: 3
* Learning Rate: 3e-4
* Batch Size: 4
* Gradient Accumulation Steps: 4
* 8 Bit Mode: No
I followed a similar process as described in the [alpaca-lora repo](https://github.com/tloen/alpaca-lora), using the [alpaca gpt4 dataset](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_gpt4.json) for instruction fine-tuning and the export\_hf\_checkpoint script to merge the LoRA back into the base model.
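For readers who want to reproduce something similar, here is a rough sketch of the LoRA setup with the hyperparameters above; the base checkpoint id, LoRA rank and target modules are assumptions, and dataset preprocessing plus the Trainer loop are omitted:

    # rough sketch only: LoRA adapter on the OpenLlama preview with the training params listed above
    import torch
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

    base = "openlm-research/open_llama_7b_preview_300bt"  # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")

    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    args = TrainingArguments(output_dir="open-llama-gpt4-lora",
                             num_train_epochs=3, learning_rate=3e-4,
                             per_device_train_batch_size=4, gradient_accumulation_steps=4,
                             fp16=True, logging_steps=10)
    # prompt formatting of alpaca_data_gpt4.json and the Trainer call are left out for brevity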
Example outputs are provided for open-llama-gpt4 and open-llama-preview models, showcasing the differences in their responses to various prompts. I'm actually quite happy about these preliminary results!
If you have any feedback or notice any mistakes, please let me know! All the code used for training is available in the Github links, and the final LoRA models can be found on HuggingFace. The model card includes results, comparisons, and conclusions. Enjoy, and go open source! | 2023-05-05T17:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/138w4co/evaluating_the_effectiveness_of_finetuning_on_the/ | Zovsky_ | self.LocalLLaMA | 2023-05-06T15:54:09 | 0 | {} | 138w4co | false | null | t3_138w4co | /r/LocalLLaMA/comments/138w4co/evaluating_the_effectiveness_of_finetuning_on_the/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'ecdItGsJr0ZwakAIrRIyoqWb4QeivCqExobyDSR49Ic', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=108&crop=smart&auto=webp&s=271560cda4f31c4c8d3f938916cb8e1c4687ddbf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=216&crop=smart&auto=webp&s=a6073accb0fa1b5c228e991b7bc607abd0e2a21c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=320&crop=smart&auto=webp&s=8b4fa0435ffa262460f1926bf93f8cb068227728', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=640&crop=smart&auto=webp&s=84ce88b8fa9239064901a6632cb9006f3a7f9a8b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=960&crop=smart&auto=webp&s=924f8eb5495e9034bd0344ac9a068cd1ee17c4cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?width=1080&crop=smart&auto=webp&s=a68b15b6526e69d012ca6a9c1326972dab597d59', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XB1OeJAAZRTnoo2-Xn73Hc0lJn7sj6knMgkMJYFfn4E.jpg?auto=webp&s=98e345230e0171a756807a4d818ebf8484868bc8', 'width': 1200}, 'variants': {}}]} |
Which LLM should I use for my personal project? | 18 | I am very new to LLM but am interested in learning and using LLMs. I am currently working on a personal project which needs to do a lot of paraphrasing. I have been trying to figure out a way to somehow automate this task and until now have used Python with some paraphrasing sites to do so, but honestly they don't work that well. But locally installed large language model could be of help to me, since it seems way better at paraphrasing and machine comprehension. But the problem with it is that I currently have a Macbook Air M1, and cannot afford to buy a new Computer or a graphic card. Can you guys please recommend the best option for me that can be used on my Macbook without performance issues. I have tried GPT4all, Vicuna 7b, and also LaMini-LM. Following is what the LLM must be good at :-
1) Paraphrasing,
2) Analysing a lot of text, maybe 1000-1500 words or more and summarising it
3) Should work without performance issues on my Mac or free google Colab
Since my work mostly deals with paraphrasing and language related tasks, I understand I don't need something very fancy with a lot of parameters.
Thank You!! | 2023-05-05T18:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/138xxph/which_llm_should_i_use_for_my_personal_project/ | temakajinkya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 138xxph | false | null | t3_138xxph | /r/LocalLLaMA/comments/138xxph/which_llm_should_i_use_for_my_personal_project/ | false | false | self | 18 | null |
[deleted by user] | 0 | [removed] | 2023-05-05T19:09:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 138y8sq | false | null | t3_138y8sq | /r/LocalLLaMA/comments/138y8sq/deleted_by_user/ | false | false | default | 0 | null |
Cpu vs Gpu vs RAM question | 2 | [deleted] | 2023-05-05T22:10:39 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13934c9 | false | null | t3_13934c9 | /r/LocalLLaMA/comments/13934c9/cpu_vs_gpu_vs_ram_question/ | false | false | default | 2 | null |
Together releases 3b and 7b RedPajama models (open source llama recreation) | 1 | [removed] | 2023-05-05T22:37:32 | https://t.co/msO4afBQEK | pokeuser61 | t.co | 1970-01-01T00:00:00 | 0 | {} | 1393suw | false | null | t3_1393suw | /r/LocalLLaMA/comments/1393suw/together_releases_3b_and_7b_redpajama_models_open/ | false | false | default | 1 | null |
3B and 7B RedPajama-INCITE base and instruction-tuned models released by Together | 83 | 2023-05-06T00:39:16 | https://www.together.xyz/blog/redpajama-models-v1 | bloc97 | together.xyz | 1970-01-01T00:00:00 | 0 | {} | 1396tl6 | false | null | t3_1396tl6 | /r/LocalLLaMA/comments/1396tl6/3b_and_7b_redpajamaincite_base_and/ | false | false | 83 | {'enabled': False, 'images': [{'id': 'cXjCrpd4osspfsF9mRsIj3JGH55QE7VyWhPaZJKNjcQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=108&crop=smart&auto=webp&s=c4a26ccf769eb169a59a1da67868c9a9cbf5768f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=216&crop=smart&auto=webp&s=e33a1ce66481d15f8b1a9137c67e0e9f45f271cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=320&crop=smart&auto=webp&s=34d82c7f3b15b87265f8495978ce189b87432c94', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=640&crop=smart&auto=webp&s=41ec257303c2a488367fb9dc827ae52b67f2e4e3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=960&crop=smart&auto=webp&s=910dc367e0e479b381cf8ad358fcc4c926e873cf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?width=1080&crop=smart&auto=webp&s=03cb0e8542d2f4260073bccf16790a35f3ac1682', 'width': 1080}], 'source': {'height': 844, 'url': 'https://external-preview.redd.it/VulrMiF93D4PrPpGJ4B1rQKk8vijjdodpnLNm7g-RTs.jpg?auto=webp&s=a773b8f5a5697c18665f6c8574e673e9ccdacdc6', 'width': 1500}, 'variants': {}}]} |
[deleted by user] | 1 | [removed] | 2023-05-06T04:45:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 139cc0d | false | null | t3_139cc0d | /r/LocalLLaMA/comments/139cc0d/deleted_by_user/ | false | false | default | 1 | null |
Has anyone tried fine tuning on a dataset of complex tasks that require tool use? | 13 | I'm talking about browsing the internet, saving files, executing code - basically ChatGPT plugins. I'm wondering if there are any examples of someone fine tuning LLaMA (or any other LLM) on a dataset consisting of examples of these types of tasks.I'd be curious to see 1) if it greatly improves the resulting model's tool use abilities and 2) if it then generalizes to tasks of even higher complexity | 2023-05-06T04:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/139cejr/has_anyone_tried_fine_tuning_on_a_dataset_of/ | LastTimeLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139cejr | false | null | t3_139cejr | /r/LocalLLaMA/comments/139cejr/has_anyone_tried_fine_tuning_on_a_dataset_of/ | false | false | self | 13 | null |
One H100 or two A100? | 3 | Hi all,
I want to upgrade my current setup (which is dated, 2 TITAN RTX), but of course my budget is limited (I can buy either one H100 or two A100, as H100 is double the price of A100).
So I have to decide if the 2x speedup, FP8 and more recent hardware is worth it, over the older A100, but two of them.
I want to play with transformer-based LLMs exclusively. (No vision/convolutional at all) | 2023-05-06T05:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/139dp6n/one_h100_or_two_a100/ | petasisg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139dp6n | false | null | t3_139dp6n | /r/LocalLLaMA/comments/139dp6n/one_h100_or_two_a100/ | false | false | self | 3 | null |
H2O.ai's LLaMa 30B | 25 | 30B LLaMa: http://gpu.hopto.org
GitHub: https://github.com/h2oai/h2ogpt
It's instruct-tuned over 2 epochs on 500MB of open-source OIG + OASST data. No ShareGPT, etc.
Feel free to ask any questions, compare with OASST's HF Chat, etc.
It'll be improved every few days, and HF model cards posted for the LORA state. | 2023-05-06T08:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/139gmht/h2oais_llama_30b/ | pseudotensor | self.LocalLLaMA | 2023-05-06T21:17:32 | 0 | {} | 139gmht | false | null | t3_139gmht | /r/LocalLLaMA/comments/139gmht/h2oais_llama_30b/ | false | false | self | 25 | null |
How to improve the quality of Large Language Models and solve the alignment problem | 8 | 2023-05-06T10:22:13 | https://alasdairf.medium.com/how-to-improve-the-quality-of-large-language-models-and-solve-the-alignment-problem-5fa304008868 | Pan000 | alasdairf.medium.com | 1970-01-01T00:00:00 | 0 | {} | 139iyl2 | false | null | t3_139iyl2 | /r/LocalLLaMA/comments/139iyl2/how_to_improve_the_quality_of_large_language/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'VltsAaYHGHArj0ejm7xO2jRMw7TbVht6YOKd9pwU2Z4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?width=108&crop=smart&auto=webp&s=36b85f85c39f89186fb98ad71060ce9dd1180fc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?width=216&crop=smart&auto=webp&s=68bb2850f3850f694206a724dedd96f1a72697e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?width=320&crop=smart&auto=webp&s=a561016ac51998d35a9b1b03a6f3542c31950729', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?width=640&crop=smart&auto=webp&s=dd266bc776f7ba90c35ef964a5562d399acb4fe5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?width=960&crop=smart&auto=webp&s=79ddeff9ee7d873665b013951315326b6f1bbf4a', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/w6fT2HvKsuV4STOW7bP1TwFw3SzjSKGBG3KMG9uDG2k.jpg?auto=webp&s=9a7e39d136f73db35e141382dbf9941e42f81e3d', 'width': 1024}, 'variants': {}}]} |
How to install Wizard-Vicuna | 75 | # FAQ
Q: What is Wizard-Vicuna
A: Wizard-Vicuna combines WizardLM and VicunaLM, two large pre-trained language models that can follow complex instructions.
WizardLM is a novel method that uses Evol-Instruct, an algorithm that automatically generates open-domain instructions of various difficulty levels and skill ranges. VicunaLM is a 13-billion parameter model that is the best free chatbot according to GPT-4
​
# 4-bit Model Requirements
|Model|Minimum Total RAM|
|:-|:-|
|Wizard-Vicuna-7B-Uncensored|5GB|
|Wizard-Vicuna-13B|9GB|
​
# Installing the model
First, install [Node.js](https://nodejs.org/en/) if you do not have it already.
Then, run the commands:
npm install -g catai
catai install Wizard-Vicuna-7B-Uncensored
catai serve
After that chat GUI will open, and all that good runs locally!
​
[Chat sample](https://preview.redd.it/42m0wdh667ya1.png?width=1128&format=png&auto=webp&s=2de8ed5936089638a6518adadf32812394d92d33)
You can check out the original GitHub project [here](https://github.com/ido-pluto/catai)
​
# Troubleshoot
**Unix install**
If you have a problem installing Node.js on MacOS/Linux, try this method:
Using nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
nvm install 19
**Running on servers**
If the server crashes, this may be related to the "open-in-default-browser" feature; simply disable it:
catai config --edit nano
This will open the file in the `nano` text editor.
then change the config:
const OPEN_IN_BROWSER = false;
​
If you have any other problems installing the model, add a comment :) | 2023-05-06T11:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/139kfrb/how_to_install_wizardvicuna/ | ido-pluto | self.LocalLLaMA | 2023-05-31T11:40:06 | 1 | {'gid_2': 1} | 139kfrb | false | null | t3_139kfrb | /r/LocalLLaMA/comments/139kfrb/how_to_install_wizardvicuna/ | false | false | 75 | null |
Why don't companies collaborate together in releasing the best competitive opensource LLM rather than each one starting from scratch and just reach a suboptimal one? | 45 | Basically the title. | 2023-05-06T13:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/139meyn/why_dont_companies_collaborate_together_in/ | emotionalfool123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139meyn | false | null | t3_139meyn | /r/LocalLLaMA/comments/139meyn/why_dont_companies_collaborate_together_in/ | false | false | self | 45 | null |
13B Models On 6GB VRAM | 6 | Hi all, I have a laptop with 6GB of VRAM on a 3060. 7B models are fine, but my question is: can I run 13B quantized models and split the workload between GPU and CPU? | 2023-05-06T15:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/139tjaa/13b_models_on_6gb_vram/ | Emergency-Flower-477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139tjaa | false | null | t3_139tjaa | /r/LocalLLaMA/comments/139tjaa/13b_models_on_6gb_vram/ | false | false | self | 6 | null
Local model for specific subject | 7 | I know the "best model?" question comes up a lot, so I'm sorry. I have tested several local models, but there seem to be 5 new ones a day, so it's hard to keep up. I'm in IT management and want to show the value of a local model by fine-tuning one on the history of our organization and all the items brought before the board in its history, so our current management and board can chat with their predecessors. We have a GPU server with 4 Tesla A2s, dual Xeons, and 64GB of RAM, so I can run a decentish model. Eventually I would like to train models for different departments on their specific tasks, such as an accounting model trained on our ERP, and an IT model trained on our documentation, so when someone is on vacation or retires we can continue to "chat" with them. So I am looking for a model that can provide high-quality answers based on its training data; global knowledge isn't important. I would like a model that is fully open source so I don't have to worry about license restrictions (that ruled out Vicuna because of the LLaMA weights). Any advice would be appreciated! Thank you | 2023-05-06T16:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/139u75e/local_model_for_specific_subject/ | TaiMaiShu-71 | self.LocalLLaMA | 2023-05-06T20:45:34 | 0 | {} | 139u75e | false | null | t3_139u75e | /r/LocalLLaMA/comments/139u75e/local_model_for_specific_subject/ | false | false | self | 7 | null
What's the best local fiction writing assistant? | 13 | Hello. I am a complete noob to local llama / LLM. Optimally, I'd like to be able to:
1. Input a chapter summary, receive longer prose as output
2. Input long prose and get improved prose as output
3. Include details of characters and places
4. Mimic either MY writing style, or style of a known author
I have two computers, one with RTX 4070 TI (12GB) and another with RTX 4080 (16GB), both with 32 GB system RAM. I'd like to be able to run it on both.
What are the best models and settings to try? What UI should I use? Are there certain ways to format prompts? Thank you for any help. | 2023-05-06T16:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/139uxfh/whats_the_best_local_fiction_writing_assistant/ | VegaKH | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139uxfh | false | null | t3_139uxfh | /r/LocalLLaMA/comments/139uxfh/whats_the_best_local_fiction_writing_assistant/ | false | false | self | 13 | null |
How can I try mpt-7b-storywriter? | 14 | Hello, how exactly can I try inference on the new mpt-7b-storywriter model? | 2023-05-06T18:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/139xi9l/how_can_i_try_mpt7bstorywriter/ | Due_Cry1522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139xi9l | false | null | t3_139xi9l | /r/LocalLLaMA/comments/139xi9l/how_can_i_try_mpt7bstorywriter/ | false | false | self | 14 | null |
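For reference, one way to try it (hardware permitting) is plain transformers with `trust_remote_code`, since MPT ships custom model code; the generation settings below are illustrative, and the tokenizer choice follows MosaicML's use of the GPT-NeoX-20B tokenizer:

    # sketch: basic MPT-7B-StoryWriter inference with transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
    model = AutoModelForCausalLM.from_pretrained(
        "mosaicml/mpt-7b-storywriter", torch_dtype=torch.bfloat16,
        trust_remote_code=True, device_map="auto")

    inputs = tokenizer("It was a dark and stormy night", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
    print(tokenizer.decode(out[0], skip_special_tokens=True))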
What do you think of this method? (I used a translator) | 1 | [removed] | 2023-05-06T18:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/139y3of/what_do_you_think_of_this_method_i_used_a/ | Amazing_Tear647 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139y3of | false | null | t3_139y3of | /r/LocalLLaMA/comments/139y3of/what_do_you_think_of_this_method_i_used_a/ | false | false | default | 1 | null |
Notable differences between q4_2 and q5_1 quantization? | 15 | Thanks to u/The-Bloke we got easy access to new quantization versions, but did anyone notice any differences between 4\_1 and 5\_1 or even 4\_2 and 5\_1? | 2023-05-06T18:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/139yt87/notable_differences_between_q4_2_and_q5_1/ | koehr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 139yt87 | false | null | t3_139yt87 | /r/LocalLLaMA/comments/139yt87/notable_differences_between_q4_2_and_q5_1/ | false | false | self | 15 | null |
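For reference, a crude way to compare for yourself is to run the same deterministic prompt through two quantizations with llama-cpp-python; the filenames below are assumptions, and a proper perplexity run would be the more rigorous test:

    # eyeball test: same prompt, temperature 0, through a q4_2 and a q5_1 file
    from llama_cpp import Llama

    prompt = "### Instruction: Explain photosynthesis in two sentences.\n### Response:"
    for path in ["wizard-vicuna-13B.ggml.q4_2.bin", "wizard-vicuna-13B.ggml.q5_1.bin"]:
        llm = Llama(model_path=path, n_ctx=2048, seed=0)
        out = llm(prompt, max_tokens=96, temperature=0.0)
        print(path, "->", out["choices"][0]["text"].strip())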
How do I stop it from asking itself? | 1 | 2023-05-06T19:07:44 | HamzaTheUselessOne | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 139z1ue | false | null | t3_139z1ue | /r/LocalLLaMA/comments/139z1ue/how_do_i_stop_it_from_asking_itself/ | false | false | default | 1 | null |
Introducing AgentOoba, an extension for Oobabooga's web ui that (sort of) implements an autonomous agent! I was inspired and rewrote the fork that I posted yesterday completely. | 60 | Right now, the agent functions as little more than a planner / "task splitter". However I have plans to implement a toolchain, which would be a set of tools that the agent could use to complete tasks. Considering native langchain, but have to look into it. Here's a [screenshot](https://imgur.com/a/uapv6jd) and here's a [complete sample output](https://pastebin.com/JDgGaCCu). The github link is https://github.com/flurb18/AgentOoba. Installation is very easy, just clone the repo inside the "extensions" folder in your main text-generation-webui folder and run the webui with --extensions AgentOoba. Then load a model and scroll down on the main page to see AgentOoba's input, output and parameters. Enjoy! | 2023-05-06T19:50:28 | https://www.reddit.com/r/LocalLLaMA/comments/13a062v/introducing_agentooba_an_extension_for_oobaboogas/ | _FLURB_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13a062v | false | null | t3_13a062v | /r/LocalLLaMA/comments/13a062v/introducing_agentooba_an_extension_for_oobaboogas/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': '5a_ypZ9rV8Vhn_OUKY65nQyIZP3eC_pXNVKZ-tzgKRQ', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?width=108&crop=smart&auto=webp&s=5b09dc78cb6aa0e3903aed003a57aaaebc263e13', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?width=216&crop=smart&auto=webp&s=a5b938d26c5b1217942126fdc6e77523b0d4106d', 'width': 216}, {'height': 201, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?width=320&crop=smart&auto=webp&s=091b871128185bd5c4c9181de0d81b9a30e3ae24', 'width': 320}, {'height': 403, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?width=640&crop=smart&auto=webp&s=ac1e8a3174b4dbce20f3a583c0d39838d3bf3869', 'width': 640}, {'height': 604, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?width=960&crop=smart&auto=webp&s=60265fa557913c36378eeca31362c2e646dd0980', 'width': 960}], 'source': {'height': 616, 'url': 'https://external-preview.redd.it/XwG3MJ6PTeTARdFi8BL_MAoNMZmYjXCbWWAOc_ar0ec.jpg?auto=webp&s=75373a46ff7c369c8930727dd3c1c73a5b5a03a0', 'width': 978}, 'variants': {}}]} |
Worth downgrading 3080 to 3060 for more vram? | 7 | I apologize if this has been answered before. I current have Ryzen 3900x , 64GB RAM (with over 50GB free in linux), 3080 10GB vram, and I still get OOM with large context (>500) in ubuntu 22.04 running with oogabooga's webui. The arguments I'm running with is pre\_layer 38, wbits 4, groupsize 128, gpu\_memory 9, auto device, and other non performance related args. Models I tried are Vicuna 13b 4 bit, gpt4-x-alpaca 13b 4bit, stable vicuna. Setting prelayer to lower like 35 is too slow and unusable for me.
Is there anything I am missing with the args, or will it help if I buy a 3060 for 2 gb more vram to run the 13b models i listed with full context (2k tokens)? | 2023-05-06T21:30:25 | https://www.reddit.com/r/LocalLLaMA/comments/13a2rnj/worth_downgrading_3080_to_3060_for_more_vram/ | McSendo | self.LocalLLaMA | 2023-05-06T23:24:38 | 0 | {} | 13a2rnj | false | null | t3_13a2rnj | /r/LocalLLaMA/comments/13a2rnj/worth_downgrading_3080_to_3060_for_more_vram/ | false | false | self | 7 | null |
"People who won multiple Nobel Prizes" seems to be really hard to answer | 10 | I instructed a couple of Models to do the following:
>List all people that got awarded more than one Nobel Prize in their lifetime.
And all *but one (see table)* of them failed at least partially, including ChatGPT3.5 which once even invented a list of 24(!) people, instead of 5. Maybe someone with a GPT4 subscription could try it there?
The correct answer can be found on Wikipedia: [https://en.wikipedia.org/wiki/Category:Nobel\_laureates\_with\_multiple\_Nobel\_awards](https://en.wikipedia.org/wiki/Category:Nobel_laureates_with_multiple_Nobel_awards) (which should make it simple for LLMs, shouldn't it?)
Answers I got sometimes included some of the right people, but at least one was doubled. More often than that, the models listed the wrong people. I always tried at least twice, to make sure, the wrong answer wasn't a glitch.
Models I tried and how well they answered:
|Model|Result|Notes|
|:-|:-|:-|
|[GPT4All-13B-snoozy.ggml.q5\_1](https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GGML)|correct (4 of 5) once|Always started with "here are some...", although I explicitly ask for *all*.|
|alpaca-07B.ggml.q4\_1|wrong||
|alpaca-13B.ggml.q4\_1|wrong||
|alpaca-30B.ggml.q4\_1|wrong||
|[alpaca-65B.ggml.q5\_1](https://huggingface.co/TheBloke/alpaca-lora-65B-GGML)|wrong, got 4 correct, but listed 7||
|[koala-13B.ggml.q4\_1](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML)|wrong|Insists on that it's only two people and that the Nobel Peace Prize is not considered a real Nobel Prize.|
|[OpenAssistant-30B.ggml.q5\_1](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML)|wrong||
|vicuna-13B.ggml.q4\_1|wrong|Very detailed answer. Too bad, most of it was wrong.|
|[WizardVicuna-13B.ggml.q5\_1](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML)|correct (4 of 5)|Very good answer, with right fields and dates, for four of five people.|
|[GPT4xVicuna-13B.ggml.q5\_1](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML)|wrong|Claims, there have been 43 individuals!|
|[H2oGPT-30B.ggml.q5\_1](https://huggingface.co/TheBloke/h2ogpt-oasst1-512-30B-GGML)|wrong||
I use llama.cpp with parameters and prompt suggested by u/The-Bloke.
Any idea why this is so hard for the language models?
​
*Edit: Because the most recent second Nobel Prize was awarded in 2022, which might be too recent for most models' training data, I marked "4 of 5" as correct, which makes WizardVicuna-13B the only model that correctly answers this question!* | 2023-05-06T22:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/13a4u5r/people_who_won_multiple_nobel_prizes_seems_to_be/ | koehr | self.LocalLLaMA | 2023-05-07T10:01:59 | 0 | {} | 13a4u5r | false | null | t3_13a4u5r | /r/LocalLLaMA/comments/13a4u5r/people_who_won_multiple_nobel_prizes_seems_to_be/ | false | false | self | 10 | null
Got a Oobabooga to become an agent using lanchain and it's own API | 28 | So, I had coded this in the beginning of the week, but got stuck in the process because I'm not much of a python dev, but before all that work goes to waste, just wanted to share it here
[https://github.com/ChobPT/oobaboogas-webui-langchain\_agent/](https://github.com/ChobPT/oobaboogas-webui-langchain_agent/)
Taking inspiration from [Pedro Rechia's](https://betterprogramming.pub/creating-my-first-ai-agent-with-vicuna-and-langchain-376ed77160e3) article about building an API agent, I've created an agent that connects to oobabooga's API to run the whole agent loop, meaning we get from start to finish using only the libraries plus the webui itself as the main engine. Currently it loads the Wikipedia tool, which I think is enough to pull in a lot of real-time info, but as mentioned above, I'm not much of a Python dev, so some things are not really perfect.
Putting this out there in case you guys want to fiddle with it and who knows if you won't be the responsible for our inevitable doom (:
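The core of the glue code is a small LangChain `LLM` wrapper around the webui's HTTP API; the endpoint path and payload fields below are assumptions about the api extension's format, so adjust them to whatever your install exposes:

    # minimal sketch of a LangChain LLM that proxies to oobabooga's API
    import requests
    from langchain.llms.base import LLM

    class OobaboogaLLM(LLM):
        endpoint: str = "http://127.0.0.1:5000/api/v1/generate"  # assumed api extension URL

        @property
        def _llm_type(self) -> str:
            return "oobabooga"

        def _call(self, prompt: str, stop=None) -> str:
            resp = requests.post(self.endpoint, json={"prompt": prompt, "max_new_tokens": 200})
            resp.raise_for_status()
            return resp.json()["results"][0]["text"]

With a wrapper like that, the usual `initialize_agent` call with the Wikipedia tool works the same way it would with a hosted model.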
[Chat prompt using vicuna, use the \/do command to fire it](https://preview.redd.it/z31xahnqqaya1.png?width=942&format=png&auto=webp&s=1c26f7bfdf1844acff03232813a4238e79d744af)
​
​
[Console output, noted how it doesn't really follow what is output, which is the reason for my insanity :grimmacing:](https://preview.redd.it/5x0jl51qqaya1.png?width=1863&format=png&auto=webp&s=40051dee47c98faf67dfa97670dfa29fbb5417e4) | 2023-05-06T23:35:35 | https://www.reddit.com/r/LocalLLaMA/comments/13a5zwe/got_a_oobabooga_to_become_an_agent_using_lanchain/ | ChobPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13a5zwe | false | null | t3_13a5zwe | /r/LocalLLaMA/comments/13a5zwe/got_a_oobabooga_to_become_an_agent_using_lanchain/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'lcia11fV0pAzZ1n0nWNFCYV6Kfj9uBvTybJuztmi5-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=108&crop=smart&auto=webp&s=da0a53aae00b281021c655c0720f6c5131ec21c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=216&crop=smart&auto=webp&s=d23eb0252a0dfdfe8d999657c77d0ed1f2535c4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=320&crop=smart&auto=webp&s=ee30e5791ef618d34591749cc29e9385b29af401', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=640&crop=smart&auto=webp&s=b1f0eb42ca557d0ecb41929ec24951c860de7ea1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=960&crop=smart&auto=webp&s=915875a5412179e8e5e7394e3616e3042e9dc778', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?width=1080&crop=smart&auto=webp&s=e5c90523546e0b7e63c6bbf3190dd6785ec16eee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bL_AOHn51cBTEWqr_z2vCpvfyvCQrnsRYO4oKwelxPs.jpg?auto=webp&s=e63b007863ef8976aa6f5784e45921d7027f6264', 'width': 1200}, 'variants': {}}]} |
Can LLaMA be Run Locally | 0 | [removed] | 2023-05-06T23:38:31 | https://www.reddit.com/r/LocalLLaMA/comments/13a62j7/can_llama_be_run_locally/ | Toaster496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13a62j7 | false | null | t3_13a62j7 | /r/LocalLLaMA/comments/13a62j7/can_llama_be_run_locally/ | false | false | default | 0 | null |
2x3060 in SLI ? | 2 | Title pretty much says it all. I can get 2x3060 for 600$ CAD on marketplace, would it work ? I can't find much info on if SLI is supported by local models such as Llama ?
Thank you | 2023-05-06T23:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/13a68z4/2x3060_in_sli/ | SabloPicasso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13a68z4 | false | null | t3_13a68z4 | /r/LocalLLaMA/comments/13a68z4/2x3060_in_sli/ | false | false | self | 2 | null |
People need to stop naming models "GPT4" anything | 194 | Stop it. It's not GPT-4, it's not even close. It's really obnoxious with all the models trying to imply they're comparable in the slightest.
Just some examples:
* GPT4All
* GPT4 x Vicuna
* GPT4 Alpaca
* GPT4 x Alpaca
IT'S TIME TO STOP. No wonder OpenAI is trying to trademark "GPT". | 2023-05-07T01:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/13a8gpi/people_need_to_stop_naming_models_gpt4_anything/ | SmithMano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13a8gpi | false | null | t3_13a8gpi | /r/LocalLLaMA/comments/13a8gpi/people_need_to_stop_naming_models_gpt4_anything/ | false | false | self | 194 | null |
What do the files with "pytorch_model-X-of-Y.bin" mean on a HuggingFace model page? | 10 | I was browsing a Hugging Face model page and noticed that there were several files with names like "pytorch_model-00001-of-00003.bin", "pytorch_model-00002-of-00003.bin", and "pytorch_model-00003-of-00003.bin" (ex: https://huggingface.co/TheBloke/wizard-vicuna-13B-HF/tree/main). I'm not sure what these files are for and whether I need all of them to use the model for inference. Can someone explain what these files are and whether I need all of them to use the model? Thanks in advance! | 2023-05-07T04:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/13acek6/what_do_the_files_with_pytorch_modelxofybin_mean/ | julio_oa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13acek6 | false | null | t3_13acek6 | /r/LocalLLaMA/comments/13acek6/what_do_the_files_with_pytorch_modelxofybin_mean/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'kXgVjj9N1XMooROUe1bx1xLYAVeI7XixjXYENeeRBJM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=108&crop=smart&auto=webp&s=f5fae3349cf04c13c10bb9327318c8da20cfe7d8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=216&crop=smart&auto=webp&s=3541d07e9fe12e2b08e8b38957b8f87af263f57f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=320&crop=smart&auto=webp&s=191734ad5d8e9127c950f13f1e20905629a922e2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=640&crop=smart&auto=webp&s=f981fbcaad786754dcdb2fb8fa30dffa3ae85d8c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=960&crop=smart&auto=webp&s=6e04eaef41971ad82550f9215aefabe5f464d2f8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?width=1080&crop=smart&auto=webp&s=3f2789a9094dbabcbdb4eb0899d1318554a91de5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B6qbo-hdSAEmuFrXaF2vnL94T9jtmm4rja91H0MigYM.jpg?auto=webp&s=48bbb9109346a644914af9b6bc27beba913d4222', 'width': 1200}, 'variants': {}}]} |
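Short answer: they are shards of one and the same checkpoint, split only to keep individual files at an uploadable size. You need all of them, plus the `pytorch_model.bin.index.json` that maps each weight to its shard, and `from_pretrained` downloads and stitches them together automatically, as in this minimal sketch:

    # transformers resolves the shard index and fetches every piece for you
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/wizard-vicuna-13B-HF"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")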
Generative agents with open-sourced large language models! | 59 | Here are some suggestions I got from you guys. I've created an issue for each of them and I really appreciate your help!
>\- try SuperCOT, a LoRA that may help improving the intelligence of the agent
>
>\- try WizardLM
>
>\- optimize LlamaCpp parameters like mirostat
>
>\- put hard limits on the generation
**Check out the issue page for updates:** [https://github.com/UranusSeven/llama\_generative\_agent/issues](https://github.com/UranusSeven/llama_generative_agent/issues)
​
I recently came across a fascinating paper called "Generative Agents," which was a collaboration between Stanford and Google.
It discusses how to use LLM to construct a virtual village, where each NPC is a generative agent. These agents can draw inferences about themselves, others, and their environment, create daily plans based on their experiences and characteristics, react and re-plan when appropriate, and respond to changes in their surroundings or commands given in natural language.
​
[The generative agent villlage](https://preview.redd.it/xcwi8gxqscya1.png?width=1406&format=png&auto=webp&s=30d88b8c6f9459d7e254bba6caaf0215c053e0d9)
I was inspired by this idea and wanted to run my own ultra-fantasy world using OpenAI's API. However, I found that running a small village with this technology can be quite expensive. So, I began exploring the possibility of using open-sourced LLMs on my own computer.
So I spent some time on this and here's my project: [https://github.com/UranusSeven/llama\_generative\_agent](https://github.com/UranusSeven/llama_generative_agent)
I have been able to run a single generative agent with Vicuna-13b on my MacBook Pro M1, and this is a major milestone for me in creating my tiny fantasy world!
However, I have noticed that the inference is quite slow, and sometimes the answers produced might not be reasonable. Therefore, I am seeking advice on how to improve the performance of the generative agent and fine-tune the LLM model.
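For reference, here is a minimal sketch of the LangChain LlamaCpp setup whose parameters the suggestions above refer to; the path and values are illustrative assumptions, not a tested tuning:

    # lower temperature and a hard max_tokens tend to keep plans/reflections shorter and more grounded
    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="./models/ggml-vicuna-13b-q4_0.bin",  # assumed local path
        n_ctx=2048,
        temperature=0.2,
        max_tokens=256,
        repeat_penalty=1.15,
    )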
**Any suggestions would be greatly appreciated!** | 2023-05-07T06:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/13af6yp/generative_agents_with_opensourced_large/ | CORNMONSTER_2022 | self.LocalLLaMA | 2023-05-08T13:49:25 | 0 | {} | 13af6yp | false | null | t3_13af6yp | /r/LocalLLaMA/comments/13af6yp/generative_agents_with_opensourced_large/ | false | false | 59 | {'enabled': False, 'images': [{'id': '0x1A2RhXNAK1WMXkZDUKTVjhemmILh2cP6fTGZR1Rqw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=108&crop=smart&auto=webp&s=1c33ce2d85805e5ee8d9324337d3508e48d05610', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=216&crop=smart&auto=webp&s=35266865e8f826a57610eb4de4ba4472afc0f039', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=320&crop=smart&auto=webp&s=e439f3528e481e1cd7bba66355d86bc61083c7ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=640&crop=smart&auto=webp&s=5ac0059f2c6558f9eb98624dc2fd921b849037c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=960&crop=smart&auto=webp&s=0e1ef32f28d0a5f50d75e7f38617e5142bc326ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?width=1080&crop=smart&auto=webp&s=d83f2d04cd20ac3bc0923e294955010aa1e1272f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mfAEfRj1mArsxFPSEXGEdqMAu7SQ30GeNCwK3kPqjxQ.jpg?auto=webp&s=435cfe2f266965a7fe69e886d7fa775cac44c47a', 'width': 1200}, 'variants': {}}]} |
Bitsandbytes 4-bit, finetuning 30B/65B LLaMa models on a single 24/48 GB GPU | 129 | 2023-05-07T08:52:17 | https://twitter.com/Tim_Dettmers/status/1654917326381228033 | rerri | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13ahz60 | false | {'oembed': {'author_name': 'Tim Dettmers', 'author_url': 'https://twitter.com/Tim_Dettmers', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Super excited to push this even further:<br>- Next week: bitsandbytes 4-bit closed beta that allows you to finetune 30B/65B LLaMA models on a single 24/48 GB GPU (no degradation vs full fine-tuning in 16-bit)<br>- Two weeks: Full release of code, paper, and a collection of 65B models</p>— Tim Dettmers (@Tim_Dettmers) <a href="https://twitter.com/Tim_Dettmers/status/1654917326381228033?ref_src=twsrc%5Etfw">May 6, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Tim_Dettmers/status/1654917326381228033', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13ahz60 | /r/LocalLLaMA/comments/13ahz60/bitsandbytes_4bit_finetuning_30b65b_llama_models/ | false | false | 129 | {'enabled': False, 'images': [{'id': 'EpcCrB5c_ymPagW1k3nntGnbJWvH6gTAHK2mT-AAYhQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?width=108&crop=smart&auto=webp&s=d3a3d4d56e0ee1846e9ed5a45b49815ebf55a2a4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/GmQlDN0h6qYchuR03YMliQv8abrv7qqGPzUlu7RymSU.jpg?auto=webp&s=124d4f0980f44718b0dd1a13fd9677d835a077fb', 'width': 140}, 'variants': {}}]} |
Is running quantized but bigger model worth it? | 8 | I'm currently choosing an LLM for my project (let's just say it's a chatbot) and was looking into running LLaMA. I have 24 GB of VRAM in total, minus additional models, so it's preferable to fit into about 12 GB. My options are running a 16-bit 7B model, an 8-bit 13B, or supposedly something even bigger with heavy quantization. Points:
- I need low latency, preferably 1-2 seconds for one sentence.
- I don't want to bother too much with downloading huge models and quantizing them. I have only 16 gb of regular RAM and it causes issues while converting. Please let me know if it's possible to get a pre-quantized big model.
- I need the model to speak Russian.
Question is, is it worth the hassle to run a bigger model? Does it give noticeably better results? How much does it affect the model's ability to speak other languages?
EDIT: GPU is 3090 ti, CPU is shit | 2023-05-07T09:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/13aidav/is_running_quantized_but_bigger_model_worth_it/ | Maximxls | self.LocalLLaMA | 2023-05-07T09:18:21 | 0 | {} | 13aidav | false | null | t3_13aidav | /r/LocalLLaMA/comments/13aidav/is_running_quantized_but_bigger_model_worth_it/ | false | false | self | 8 | null |
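For reference: pre-quantized 13B/30B files are already published on HF (GGML and GPTQ uploads), so converting locally isn't required, and for HF-format weights bitsandbytes 8-bit loading roughly halves weight memory versus 16-bit. A minimal sketch, with a placeholder model id:

    # load_in_8bit needs bitsandbytes + accelerate; weights are placed on the GPU in int8
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "your-hf-llama-13b-finetune"  # placeholder: any HF-format LLaMA finetune
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, load_in_8bit=True, device_map="auto")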
Speed up reply generation on Apple M2 Pro "Wizard-Vicuna" | 7 | Not sure I'm in the right subreddit, but I'm guessing I'm using a LLaMa language model, plus Google sent me here :)
So, I want to use an LLM on my Apple M2 Pro (16 GB RAM) and followed [this tutorial](https://www.youtube.com/watch?v=8BVMcuIGiAA)
Using "Wizard-Vicuna" and "Oobabooga Text Generation WebUI" I'm able to generate some answers, but they're being generated very slowly. When I check Activity Monitor I see that my CPU is barely being utilized (User: 15%) and neither is my RAM (\~3GB), although I think the CPU is what's doing the actual work here, if I'm not mistaken...
**Quick sidenote:** As you can tell, I am new to this whole space. Although I do have a background in CompSci as a programmer, many of the terms used in the AI world are completely new to me, so I'm clueless as to which values to adjust.
When I start up the WebUI the console does output this:
llama.cpp: loading model from models/TheBloke_wizard-vicuna-13B-GGML/wizard-vicuna-13B.ggml.q5_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 8 (mostly Q5_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 10583.25 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 1600.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
I have these settings that I can change in the "Model" tab, but don't want to blindly tinker with the values, so I thought I'd ask you for advice on how to speed up my workflow. Thanks in advance!
https://preview.redd.it/5wmuqz6x7eya1.png?width=2992&format=png&auto=webp&s=50d5f37f389997539ebbdfc92bae604dce1b6428 | 2023-05-07T11:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/13akxx7/speed_up_reply_generation_on_apple_m2_pro/ | RastaBambi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13akxx7 | false | null | t3_13akxx7 | /r/LocalLLaMA/comments/13akxx7/speed_up_reply_generation_on_apple_m2_pro/ | false | false | 7 | null |
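One thing worth ruling out is the thread count: low CPU utilisation often just means llama.cpp is only using a few threads, which the webui's llama.cpp "threads" setting controls. A quick way to sanity-check raw speed outside the webui is llama-cpp-python with an explicit `n_threads` (8 is an assumption for the M2 Pro's performance cores):

    # time a short completion with the thread count pinned
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="wizard-vicuna-13B.ggml.q5_0.bin", n_ctx=2048, n_threads=8)
    t0 = time.time()
    out = llm("### Human: Say hello in one sentence.\n### Assistant:", max_tokens=64)
    print(out["choices"][0]["text"].strip())
    print("tokens/sec ~", out["usage"]["completion_tokens"] / (time.time() - t0))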
Can I use GPT4All with LangChain and GPTQ models all together? | 4 | Hello,
I just want to use TheBloke/wizard-vicuna-13B-GPTQ with LangChain.
Any help or guidance on how to import the "wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors" file/model would be awesome!
Thanks | 2023-05-07T13:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/13anc9n/can_i_use_gpt4all_with_langchain_and_gptq_models/ | jumperabg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13anc9n | false | null | t3_13anc9n | /r/LocalLLaMA/comments/13anc9n/can_i_use_gpt4all_with_langchain_and_gptq_models/ | false | false | self | 4 | null |
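For reference, one approach that may work is loading the quantized safetensors with AutoGPTQ and wrapping the result in a transformers pipeline, which LangChain's `HuggingFacePipeline` accepts; the `model_basename` and device below are assumptions matching the repo's file name, and the repo's own card is the authoritative source:

    # sketch: GPTQ safetensors -> AutoGPTQ -> transformers pipeline -> LangChain
    from auto_gptq import AutoGPTQForCausalLM
    from transformers import AutoTokenizer, pipeline
    from langchain.llms import HuggingFacePipeline

    repo = "TheBloke/wizard-vicuna-13B-GPTQ"
    tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(
        repo,
        model_basename="wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order",
        use_safetensors=True,
        device="cuda:0",
    )
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
    llm = HuggingFacePipeline(pipeline=pipe)
    print(llm("### Human: Hello!\n### Assistant:"))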
I have a project in my own programming language, abusing both lexical and syntactic macros. I want to do some refactoring tasks on it. I don't have a GPU, but I do have a 14-core CPU. Should I pay for cloud compute, or are there local ways to do such a task on my laptop? Which model is better for programming? | 4 | I have a project in my own programming language, abusing both lexical and syntactic macros. I want to do some refactoring tasks on it. I don't have a GPU, but I do have a 14-core CPU. Should I pay for cloud compute, or are there local ways to do such a task on my laptop? Which model is better for programming? | 2023-05-07T13:19:15 | https://www.reddit.com/r/LocalLLaMA/comments/13ani66/i_have_a_project_in_my_own_programming_language/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ani66 | false | null | t3_13ani66 | /r/LocalLLaMA/comments/13ani66/i_have_a_project_in_my_own_programming_language/ | false | false | self | 4 | null
Cheap options hardware for running LLM ? | 13 | Can these models work on clusters of multi-GPU machines with 2GB GPUs?
For example, a machine with 8 PCIe x8 slots and 8 GPUs with 2GB each, spread across 3 servers under Kubernetes? | 2023-05-07T14:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/13aqcxe/cheap_options_hardware_for_running_llm/ | makakiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13aqcxe | false | null | t3_13aqcxe | /r/LocalLLaMA/comments/13aqcxe/cheap_options_hardware_for_running_llm/ | false | false | self | 13 | null
How does BLOOM compares to LLaMA and forks? | 3 | I want to fine-tune a model on a company’s Confluence, Jira and SharePoint using LoRA or something similar, but given the license, and since there do not appear to be instruction-tuned models larger than 30B, I am looking at using BLOOM. How is its performance compared to the LLaMAs? Do some of you use petals.lm to get around the memory requirements? | 2023-05-07T15:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/13atoy5/how_does_bloom_compares_to_llama_and_forks/ | cgcmake | self.LocalLLaMA | 2023-05-07T16:25:07 | 0 | {} | 13atoy5 | false | null | t3_13atoy5 | /r/LocalLLaMA/comments/13atoy5/how_does_bloom_compares_to_llama_and_forks/ | false | false | self | 3 | null
Nvidia P40, 24GB, are they useable? | 8 | Given some of the processing is limited by vram, is the P40 24GB line still useable? Thats as much vram as the 4090 and 3090 at a fraction of the price. Certainly less powerful, but if vram is the constraint, does it matter? | 2023-05-07T16:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/13av3p0/nvidia_p40_24gb_are_they_useable/ | TheNotSoEvilEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13av3p0 | false | null | t3_13av3p0 | /r/LocalLLaMA/comments/13av3p0/nvidia_p40_24gb_are_they_useable/ | false | false | self | 8 | null |
GPT For All 13B (/GPT4All-13B-snoozy-GPTQ) is Completely Uncensored, a great model | 97 | Got it from here:
[https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GPTQ](https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GPTQ)
I took it for a test run and was impressed. It seems to be on the same level of quality as Vicuna 1.1 13B and is completely uncensored, which is great.
It is able to output detailed descriptions, and knowledge wise also seems to be on the same ballpark as Vicuna. Give it a try! | 2023-05-07T16:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/13avdxb/gpt_for_all_13b_gpt4all13bsnoozygptq_is/ | Ganfatrai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13avdxb | false | null | t3_13avdxb | /r/LocalLLaMA/comments/13avdxb/gpt_for_all_13b_gpt4all13bsnoozygptq_is/ | false | false | self | 97 | {'enabled': False, 'images': [{'id': 'KKGIrjEvU3veb9fSHCVjq5xMDtw5BkFUUY9HajwyILE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=108&crop=smart&auto=webp&s=491ff1a3ebe312ef19467348806d58ea3ba040ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=216&crop=smart&auto=webp&s=7452fab145be13ea15b8efc16abc899ffc35de7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=320&crop=smart&auto=webp&s=99824805bed90a2c870d652c22c699d905097dac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=640&crop=smart&auto=webp&s=6c6fe19ffee36daa3c25b8dd681af10168460da2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=960&crop=smart&auto=webp&s=6de93f6afdd6f58c5d163c692db24e35b73d2581', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?width=1080&crop=smart&auto=webp&s=7dd8cac66fe54f13d28e02943530aba7c18815c9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_oDiySKiuDkZ11XehVwuYn4okTcK23ciJQ_s6iGYB9c.jpg?auto=webp&s=00e382b72c1185a71773105a664ae796d28e6bea', 'width': 1200}, 'variants': {}}]} |
Ho to run .safetensors models with langchain/huggingface pipelines? | 7 | Hi,
​
Please help, as I have stuck with this problem.
I would like to run a .safetensors model (e.g. [https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ/tree/main](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ/tree/main)) with langchain and/or HuggingFacePipeline.
When I run it:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "TheBloke/gpt4-x-vicuna-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
text = "What would be a good company name a company that makes colorful socks?"
print(hf(text))
I am getting this error:
>OSError: TheBloke/gpt4-x-vicuna-13B-GPTQ does not appear to have a file named pytorch\_model.bin, tf\_model.h5, model.ckpt or flax\_model.msgpack.
​
Does anyone know how to fix that?
Thanks a lot in advance! | 2023-05-07T17:02:06 | https://www.reddit.com/r/LocalLLaMA/comments/13aw97e/ho_to_run_safetensors_models_with/ | ljubarskij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13aw97e | false | null | t3_13aw97e | /r/LocalLLaMA/comments/13aw97e/ho_to_run_safetensors_models_with/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'fhvh8cFwhgzPm7e10s_guvFwKblqQzSx384uaAzxfB0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=108&crop=smart&auto=webp&s=740e997e8c34bc0f46484a9c8bf7fdfb25750daa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=216&crop=smart&auto=webp&s=f5922709587b24793d528a2ab88e542a1283f2f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=320&crop=smart&auto=webp&s=f3a3e863e55610284364d64a6fd3ee1fdb8abdca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=640&crop=smart&auto=webp&s=4fe875283256b57b58687bd597b7f7a66af98579', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=960&crop=smart&auto=webp&s=f189d43e5f8415700014a481265aafc53b5c0185', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?width=1080&crop=smart&auto=webp&s=e13f2e72c7ae108486b2f01d89581273896c4c9a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pDEXUOhbGRy8OTPHW2WChCpO8JP1k5EBDHS5EnOF24I.jpg?auto=webp&s=4f0913b1b23608139c800a79396ea031f59053ff', 'width': 1200}, 'variants': {}}]} |
Gpt4all on GPU | 2 | [removed] | 2023-05-07T17:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/13awwja/gpt4all_on_gpu/ | gobiJoe | self.LocalLLaMA | 2023-05-07T22:51:23 | 0 | {} | 13awwja | false | null | t3_13awwja | /r/LocalLLaMA/comments/13awwja/gpt4all_on_gpu/ | false | false | default | 2 | null |
What we really need is a local LLM. | 1 | Whether the LLM is LLaMA-, ChatGPT-, BLOOM-, or FLAN-UL2-based, having one with the quality of ChatGPT 4 that can be used locally is badly needed. At the very least this kind of competition will push OpenAI or MSFT to keep the cost down. Some say that only huge trillion-parameter models can have that kind of quality. They say that only the huge models exhibit emergent intelligence.
Here is what we have now:
CHATGPT: What do you want to know about math, chemistry, physics, biology, medicine, ancient history, painting, music, sports trivia, movie trivia, cooking, 'C', C++, Python, Go, Rust, Cobol, Java, Plumbing, Brick Laying, 10 thousand species of birds, 260 thousand species of flowers, 10 million species of Fungi, advanced Nose Hair Theory and the Kitchen sink? And what language do you want me to provide it in.
This is too wide. I just want the depth for a subject or set of closely related subject like math/physics but I don't need it trained with articles from Cat Fancier Magazine and Knitting Quarterly that prevents it from running on my home system. Of course, a "physics" model would need to know about one famous cat. | 2023-05-07T17:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/13awzg5/what_we_really_need_is_a_local_llm/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13awzg5 | false | null | t3_13awzg5 | /r/LocalLLaMA/comments/13awzg5/what_we_really_need_is_a_local_llm/ | false | false | self | 1 | null |
Reality check on good embedding model (and this idea in general) | 3 | Hi - I'm working on getting up to speed to put together a practical implementation. As a proof-of-concept I'm trying to build a locally-hosted (no external API calls) document query proof-of-concept along the lines of Delphic ( [GitHub - JSv4/Delphic: Starter App to Build Your Own App to Query Doc Collections with Large Language Models (LLMs) using LlamaIndex, Langchain, OpenAI and more (MIT Licensed)](https://github.com/JSv4/Delphic) ) As I type this, I realize it would probably be enough to just demonstrate something working in a Jupyter notebook.
I guess I need to use (at least) a Vector Store Index via llama-index to generate the embeddings.
This brings up 2 questions I haven't been able to sort (yet):
1) Are there any GGML models that could generate embeddings? It would be interesting if I could somehow get an LLM to act like Instructor-XL but using a local model that I can run CPU-only (super-slow but I have to go this way because reasons).
2) Is a vector database (like Milvus) an absolute necessity? Delphic seems to be using Postgres to store everything document-related, including vectors generated when llama-index generates the indices.
Really, any pointers at all will be gratefully digested - I think it would be amazing to learn how to put together a completely on-laptop document query environment. I'm willing to bet someone's already done this (or close to it) and I just haven't dug enough to find it. But - if you can even just show me how to glue some of the puzzle pieces together - I'd be able to get past the RTFM-but-I-don't-know-which-FM-to-R stage and start making real progress.
Thank you for any pointers ! | 2023-05-07T17:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/13axl4g/reality_check_on_good_embedding_model_and_this/ | cap811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13axl4g | false | null | t3_13axl4g | /r/LocalLLaMA/comments/13axl4g/reality_check_on_good_embedding_model_and_this/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qByS8Dq9YaMbQbkXrkjaR46aufkoZbqssgjOQNiJZxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=108&crop=smart&auto=webp&s=b661a442185c4e49ef7ea1ede45b177966463022', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=216&crop=smart&auto=webp&s=50bfdb71328cb57fc12077f6bf19d19d6d8ba81f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=320&crop=smart&auto=webp&s=729aed234840e3774651828f6353795dff2d08c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=640&crop=smart&auto=webp&s=f3c09d240decd86d63d77c4b329eb9a2ffa3cf4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=960&crop=smart&auto=webp&s=b39308330abe355856c80d67850734b2bff8fc6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?width=1080&crop=smart&auto=webp&s=1b8c63e65798d3855f81c11d1ad27688abc685c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YQEFy5uYvVQlfFcZIMYXSVnWu3VnaYd_r12wFJTB420.jpg?auto=webp&s=ed4fd277fbac809edf3c13c1a66a01c9bfe51d3d', 'width': 1200}, 'variants': {}}]} |
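For the embedding questions in the post above, here is a minimal CPU-only sketch (my own illustration, not from the Delphic codebase): a small sentence-transformers model generates the embeddings locally, and plain NumPy cosine similarity stands in for a dedicated vector database at proof-of-concept scale. The model name and sample texts are placeholder choices.

    # CPU-only local embeddings; no external API and no vector DB needed for a small PoC.
    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
    import numpy as np

    docs = ["Delphic stores document vectors in Postgres.",
            "llama-index can build a vector store index over local files."]
    query = "Where are the embeddings stored?"

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs acceptably on CPU
    doc_vecs = model.encode(docs, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]

    scores = doc_vecs @ query_vec  # cosine similarity, since the vectors are normalized
    print(docs[int(np.argmax(scores))])

A dedicated vector store (Milvus, pgvector, etc.) only becomes necessary once the corpus is too large to scan in memory.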
h2oGPT 30B beats OASST 30B | 2 | [removed] | 2023-05-07T18:28:42 | https://www.reddit.com/r/LocalLLaMA/comments/13aykg8/h2ogpt_30b_beats_oasst_30b/ | pseudotensor | self.LocalLLaMA | 2023-05-07T20:35:44 | 0 | {} | 13aykg8 | false | null | t3_13aykg8 | /r/LocalLLaMA/comments/13aykg8/h2ogpt_30b_beats_oasst_30b/ | false | false | default | 2 | null |
So there's this thing, FreedomGPT. | 6 | I don't really trust it. They sent me a link to an exe and a webpage. I had signed up a while back just in case. I ran it in Sandboxie, and it got to the point where it wanted to download one of two 7B local models, but inside the sandbox the buttons didn't work. It's all very strange, and I thought I'd share.
I don't wanna put a link or host the exe anywhere, but I will in the comments if someone asks. I can already run a 7b model locally via MLC LLM. So I doubt I'm missing out on much.
The thing that makes me sad is that there is a crying need for this level of ease of use. You guys not making that a priority really creates an opening for bad actors. This is a general tech-community problem I've watched play out for decades now, and I will never like it.
​
Edit: Apparently it's not a scam, it's just nothing special. [https://github.com/ohmplatform/FreedomGPT](https://github.com/ohmplatform/FreedomGPT)
​
See this comment: [https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/comment/jjc328a/](https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/comment/jjc328a/?utm_source=reddit&utm_medium=web2x&context=3) | 2023-05-07T19:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/13azmd3/so_theres_this_thing_freedomgpt/ | Innomen | self.LocalLLaMA | 2023-05-08T16:05:06 | 0 | {} | 13azmd3 | false | null | t3_13azmd3 | /r/LocalLLaMA/comments/13azmd3/so_theres_this_thing_freedomgpt/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'IPBUkM6bRpqkSbFZylUF9BPUnu02ny0VROHxv1FV7a4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=108&crop=smart&auto=webp&s=998ddb1d1c868851285e4dd1362ce81330204906', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=216&crop=smart&auto=webp&s=e539b698cdf87c326d6373b31be5919168ccecf9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=320&crop=smart&auto=webp&s=a56280891408c41c4e99fbcb75ffeb04e7c73695', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=640&crop=smart&auto=webp&s=aaef23d6e992156934aebffeff652c3de959128d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=960&crop=smart&auto=webp&s=48ec12a12ec90465be8a5b566773e45a849268d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?width=1080&crop=smart&auto=webp&s=2b8fa2d06aa5720f0ed6752764dd3f03cbb29f17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d69ZPbI_Nwl9YhEWWevJ8EA03OAmt1U7w_mMzNDCC8M.jpg?auto=webp&s=b65c4669b8359f3f2b00fec1aab323c8ab7255af', 'width': 1200}, 'variants': {}}]} |
How to run starcoder-GPTQ-4bit-128g? | 12 | I am looking at running this starcoder locally -- someone already made a 4bit/128 version (https://huggingface.co/mayank31398/starcoder-GPTQ-4bit-128g)
How the hell do we use this thing?
It says use https://github.com/mayank31398/GPTQ-for-SantaCoder to run it, but when I follow those instructions, I always get random errors or it just tries to re-download the original model files.
I tried to run GPTQ-for-Llama, and I can get it loaded into ooba text-gen, but then I get some errors; someone also said it doesn't work in ooba, because it uses some custom inference thing.
Anyone have any advice on this? Point me in the right direction? | 2023-05-07T21:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/13b3s4f/how_to_run_starcodergptq4bit128g/ | kc858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b3s4f | false | null | t3_13b3s4f | /r/LocalLLaMA/comments/13b3s4f/how_to_run_starcodergptq4bit128g/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'fg9qOeYrOPWrI8Sr0baIRR_z7q7sym25M66JFFcrTAg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=108&crop=smart&auto=webp&s=b523133e0a3b86ea433e83f4780fd2f724ecbe64', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=216&crop=smart&auto=webp&s=9b476110ef5070e809421db0dd27878de62ddf7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=320&crop=smart&auto=webp&s=84134154d4eab25bc4ad57a478693f8b7edc4f8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=640&crop=smart&auto=webp&s=24384160e741e4711888d7395e7957e4fc5a0abc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=960&crop=smart&auto=webp&s=f060994a6fad64106bbe2ac339db12365720f449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=1080&crop=smart&auto=webp&s=653f2d44897f05ba8e0dc759d2a39f901c1fbf88', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?auto=webp&s=ca2cb5b6a069e64bbd46d3ccad463d1cbfe86411', 'width': 1200}, 'variants': {}}]} |
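For the StarCoder GPTQ checkpoint above, one route that may be worth trying is AutoGPTQ instead of GPTQ-for-LLaMa. This is only a sketch under two assumptions: that your AutoGPTQ version supports the gpt_bigcode architecture, and that the quantized repo ships compatible tokenizer/quantization files; neither is guaranteed.

    # Hedged sketch: loading a pre-quantized GPTQ checkpoint with AutoGPTQ.
    # Assumes gpt_bigcode (StarCoder) support in your AutoGPTQ build.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    repo = "mayank31398/starcoder-GPTQ-4bit-128g"
    tok = AutoTokenizer.from_pretrained("bigcode/starcoder")  # may require accepting the StarCoder license
    model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

    ids = tok("def fibonacci(n):", return_tensors="pt").input_ids.to("cuda:0")
    print(tok.decode(model.generate(input_ids=ids, max_new_tokens=64)[0]))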
CUDA out of memory on RTX 3090? | 5 | New to the whole llama game and trying to wrap my head around how to get it working properly.
**System specs:**
* Ryzen 5800X3D
* 32 GB RAM
* Nvidia RTX 3090 (24G VRAM)
* Windows 10
I used the " **One-click installer** " as described in the wiki and downloaded a 13b 8-bit model as suggested by the wiki (chavinlo/gpt4-x-alpaca).
The web UI is up and running and I can enter prompts; however, the AI seems to crash in the middle of its answers due to an error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 314.00 MiB (GPU 0; 24.00 GiB total capacity; 22.78 GiB already allocated; 0 bytes free; 23.12 GiB reserved in total by PyTorch)
I already tried the flags to split work/memory across the GPU and CPU:
--auto-devices --gpu-memory 23500MiB
but it continues to crash.
It seems like the model does not quite fit into the 24 GB of VRAM when the GPU is also used to host the rest of the system; some memory will always be used up by Windows and its processes. However, I had hoped that the above flags would solve this issue.
​
Any ideas? | 2023-05-07T21:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/13b40dv/cuda_out_of_memory_on_rtx_3090/ | Luxkeiwoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b40dv | false | null | t3_13b40dv | /r/LocalLLaMA/comments/13b40dv/cuda_out_of_memory_on_rtx_3090/ | false | false | self | 5 | null |
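A likely cause in the post above: if the chavinlo/gpt4-x-alpaca checkpoint is loaded in 16-bit, a 13B model already needs roughly 26 GB of weights, and `--gpu-memory 23500MiB` leaves almost no headroom for activations or for Windows. Below is a hedged Python-side sketch of the usual fix: load in 8-bit and cap GPU usage well under 24 GB so the overflow spills to system RAM. The local path and memory caps are illustrative.

    # Hedged sketch: 8-bit loading with an explicit GPU memory cap (needs bitsandbytes + accelerate).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    path = "models/gpt4-x-alpaca"  # illustrative local path to the downloaded weights
    tok = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(
        path,
        load_in_8bit=True,                        # ~13-14 GB for a 13B model instead of ~26 GB in fp16
        device_map="auto",
        max_memory={0: "20GiB", "cpu": "24GiB"},  # leave headroom for activations and for Windows
    )

In the webui itself, the equivalent is to enable 8-bit loading (e.g. `--load-in-8bit`) and to set `--gpu-memory` a few GiB below the card's capacity rather than 23500MiB.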
Is it possible to use other models with TabbyML? How do I know which is compatible? | 3 | [removed] | 2023-05-07T22:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/13b5113/is_it_possible_to_use_other_models_with_tabbyml/ | TiagoTiagoT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b5113 | false | null | t3_13b5113 | /r/LocalLLaMA/comments/13b5113/is_it_possible_to_use_other_models_with_tabbyml/ | false | false | default | 3 | null |
[deleted by user] | 1 | [removed] | 2023-05-08T00:17:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13b7wbk | false | null | t3_13b7wbk | /r/LocalLLaMA/comments/13b7wbk/deleted_by_user/ | false | false | default | 1 | null |
||
Why run LLMs locally? | 51 | I apologize if this is slightly off-topic, but I'm curious about the reasons for running large language models (LLMs) on local hardware instead of relying on cloud services. While I understand the desire to operate your own model, maintaining up-to-date hardware seems costly.
Wouldn't it be more efficient to use cloud-based services and allocate or deallocate resources as needed? Services like Lambda Labs offer better performance at a lower cost compared to purchasing your own hardware, unless you're heavily involved in training or conducting a significant amount of inference.
I'm asking because I'm trying to decide whether to invest in a couple of A100s or to utilize cloud-based solutions for running models. I'm interested in hearing other people's thoughts on this matter. | 2023-05-08T00:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/13b8ij7/why_run_llms_locally/ | jsfour | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b8ij7 | false | null | t3_13b8ij7 | /r/LocalLLaMA/comments/13b8ij7/why_run_llms_locally/ | false | false | self | 51 | null |
Is it possible to remove tokens from a GGML? | 7 | I'm working on getting a model to run in a group chat. At the beginning, it works great. It's capable of keeping track of bits of information about individual users, following multiple conversations, etc.
It seems like at a certain point, however, it figures out it's in a "chat room" and starts plugging in a bunch of random chat garbage it's been trained on: stuff like IRC commands, chat log messages, etc. It's having a really hard time staying in character beyond that point.
I was hoping I could start by stripping anything out of the model that isn't relevant. Based on what I've seen in the PRs for llama.cpp, it should be fairly easy, but I'm not really sure where to start. I'd just like to get rid of crap like
> |Bob> Do you remember what my hobby is Chie?
>
> |Chie> Of course I do Bob-kun! You enjoy fishing. ;) Would you ever consider going on a camping trip with friends or family to explore new places for catches??? :D 🐠
>
> |Alice> Do you remember what my hobby is Chie?
>
> |Chie> Of course I do Alice-chan! You enjoy shopping. ;) Have you found any great deals lately, and if so - can we see photos?? <3
>
> **\\-- end of logs --/**
I figure pulling out any bad tokens would be a good place to start but any other suggestions would also be appreciated | 2023-05-08T00:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/13b8kry/is_it_possible_to_remove_tokens_from_a_ggml/ | mrjackspade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b8kry | false | null | t3_13b8kry | /r/LocalLLaMA/comments/13b8kry/is_it_possible_to_remove_tokens_from_a_ggml/ | false | false | self | 7 | null |
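Stripping tokens out of a GGML file is possible in principle, but a lighter workaround for the chat-log garbage quoted above is to filter at inference time. The sketch below (my own illustration, with a placeholder model path and example stop strings) uses llama-cpp-python stop sequences to cut generation before it drifts into log-style output; a logit bias against the worst tokens is another option.

    # Hedged sketch: suppress chat-log artifacts with stop sequences instead of editing the model.
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path="models/chat-13b.ggml.q4_0.bin", n_ctx=2048)  # placeholder path

    out = llm.create_completion(
        prompt="|Bob> Do you remember what my hobby is Chie?\n|Chie>",
        max_tokens=128,
        stop=["-- end of logs --", "\n|Bob>", "\n|Alice>"],  # cut before log-style junk
    )
    print(out["choices"][0]["text"])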
Bringing back someone from the dead | 7 | Hey there, I've seen a couple videos and websites where people have created an AI version of Socrates or another thinker from the past. This is something I'm interested in and was wondering if you could help.
User Story:
As a content producer, I want to be able to create articles that are artificially generated by Socrates, so that I can understand what Socrates would think about current events.
Now, please note that this is hypothetical.
What I'm wanting to know is:
1. Is there a specific model that would work well for this content production?
2. For datasets, I imagine something like uploading all of the works Socrates ever wrote, and then using a scraper to bring in as much news as possible from a few different news sites. Would this work?
Some user stories to reflect this idea:
As a content producer, I want to be able to use a prompt like "Write an article about how you feel about Russia's invasion of Ukraine", so that I can understand the Russia-Ukraine war from the perspective of Socrates.
As a content producer, I want to be able to ask my chatbot to write a response to a given article explaining why it doesn't make logical sense.
​
Any thoughts appreciated. :) | 2023-05-08T01:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/13b91rj/bringing_back_someone_from_the_dead/ | recentlyquitsmoking2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13b91rj | false | null | t3_13b91rj | /r/LocalLLaMA/comments/13b91rj/bringing_back_someone_from_the_dead/ | false | false | self | 7 | null |
Loading Multiple LoRA bins | 10 | [deleted] | 2023-05-08T01:41:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13b9x4d | false | null | t3_13b9x4d | /r/LocalLLaMA/comments/13b9x4d/loading_multiple_lora_bins/ | false | false | default | 10 | null |
||
When are larger token limits coming? | 1 | [removed] | 2023-05-08T02:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/13bay31/when_are_larger_token_limits_coming/ | Mr_Nice_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13bay31 | false | null | t3_13bay31 | /r/LocalLLaMA/comments/13bay31/when_are_larger_token_limits_coming/ | false | false | default | 1 | null |
Is it possible to use ANE(Apple Neural Engine) to run those models? | 1 | [removed] | 2023-05-08T03:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/13bcrxa/is_it_possible_to_use_aneapple_neural_engine_to/ | Amethyst-W | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13bcrxa | false | null | t3_13bcrxa | /r/LocalLLaMA/comments/13bcrxa/is_it_possible_to_use_aneapple_neural_engine_to/ | false | false | default | 1 | null |
What do you guys think about a version of SmartGPT for LLaMA? | 30 | 2023-05-08T05:32:55 | https://www.youtube.com/watch?v=wVzuvf9D9BU | jd_3d | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 13bf57x | false | {'oembed': {'author_name': 'AI Explained', 'author_url': 'https://www.youtube.com/@ai-explained-', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wVzuvf9D9BU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="GPT 4 is Smarter than You Think: Introducing SmartGPT"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/wVzuvf9D9BU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'GPT 4 is Smarter than You Think: Introducing SmartGPT', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_13bf57x | /r/LocalLLaMA/comments/13bf57x/what_do_you_guys_think_about_a_version_of/ | false | false | 30 | {'enabled': False, 'images': [{'id': '_5CrDzFuldJXgy2Mc92cu_BCoyFDoVBCjZsTfnpy5LA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=108&crop=smart&auto=webp&s=51fc3c8f0e37ad7e9d420f95be641ac857c6c6ef', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=216&crop=smart&auto=webp&s=ad06fbc41170aa0e38a8a4180b7b807e07d9a869', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?width=320&crop=smart&auto=webp&s=cf66270a8e39efc27985bfe30f5202a23dda386b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ITirhfxJU0-4mNOlvXJ7Xiy0787-a0YrkaX-ggFRgZk.jpg?auto=webp&s=f567e07412a84afb4528d45cf5535b5d093192b0', 'width': 480}, 'variants': {}}]} |
||
Jeopardy Bot sft on LLaMA 7B | 14 | [https://huggingface.co/openaccess-ai-collective/jeopardy-bot](https://huggingface.co/openaccess-ai-collective/jeopardy-bot)
Jeopardy Bot is a reasonably good and fast bot at answering Jeopardy questions. Jeopardy is a great format for language models because the query is typically very short and the answer is typically even shorter.
Trained in 4 hours on 4xA100 80GB.
Samples from recent Jeopardy episodes:
Below is a Jeopardy clue paired with input providing the category of the clue. Write a concise response that best answers the clue given the category.
### Instruction:
Our evaluation of this intelligence data is that Red October is attempting to defect to the United States
### Input:
SAID THIS LITERARY CHARACTER
### Response:
what is Jack Ryan
​
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
The worker bee is a symbol of this industrial city in northern England & represents unity since a 2017 bombing there. The Category is WORLD CITIES
### Response:
what is Manchester
​ | 2023-05-08T07:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/13bhmol/jeopardy_bot_sft_on_llama_7b/ | winglian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13bhmol | false | null | t3_13bhmol | /r/LocalLLaMA/comments/13bhmol/jeopardy_bot_sft_on_llama_7b/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'GsKgjeRfykwUzUT27A6dipin8cTerD-Bt0xPgkjKrfw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=108&crop=smart&auto=webp&s=c5a00837f3b6be4213f71985177f80e3f7ebdcdd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=216&crop=smart&auto=webp&s=918eda364ab7e86f011c305863ab1223cba55a66', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=320&crop=smart&auto=webp&s=937b845a253db94c9d7b8c501d592235e79f4760', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=640&crop=smart&auto=webp&s=1c22d549d6f1c0ca229c6ff23dcdb629ed94f37c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=960&crop=smart&auto=webp&s=5911f6423360e597962015276aad95bc539b5195', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?width=1080&crop=smart&auto=webp&s=154703fcbeba64b1f3894ffc54c9bf07a20ecde2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2a7lBrmBhLpgLom70SPuvZw3sURK87-SgUjrSqUinRM.jpg?auto=webp&s=ade9b9f234e4e3eb087a800b89f6e38e632a404b', 'width': 1200}, 'variants': {}}]} |
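A minimal way to try the checkpoint above is plain transformers generation with the same prompt template shown in the post; the clue below is a made-up example and the decoding settings are illustrative, not the author's.

    # Hedged sketch: query jeopardy-bot with the prompt template from the post.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "openaccess-ai-collective/jeopardy-bot"
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    prompt = (
        "Below is a Jeopardy clue paired with input providing the category of the clue. "
        "Write a concise response that best answers the clue given the category.\n"
        "### Instruction:\nThis city is home to the Eiffel Tower\n"
        "### Input:\nWORLD CITIES\n"
        "### Response:\n"
    )
    ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=16, do_sample=False)
    print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))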
Personal Medical Doctor AI : using Oobabooga's character chat UI with med-alpaca LLM as a personal doctor | 1 | [removed] | 2023-05-08T07:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/13bhnhy/personal_medical_doctor_ai_using_oobaboogas/ | No_Marionberry312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13bhnhy | false | null | t3_13bhnhy | /r/LocalLLaMA/comments/13bhnhy/personal_medical_doctor_ai_using_oobaboogas/ | false | false | default | 1 | null |
Fine tuning on code, any help? | 3 | Hi all,
I've been looking for some time now, but most resources require a lot of work to understand. I'm getting there, but I was wondering if anyone has any good links for understanding how to fine-tune a model on a specific codebase. I'm interested in both the data construction aspect and the retraining procedure.
Thank you in advance! | 2023-05-08T08:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/13bib4z/fine_tuning_on_code_any_help/ | Purple_Individual947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13bib4z | false | null | t3_13bib4z | /r/LocalLLaMA/comments/13bib4z/fine_tuning_on_code_any_help/ | false | false | self | 3 | null |
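For the question above, a heavily simplified sketch of the common LoRA route with peft + transformers follows; the base checkpoint, data file, target modules, and hyperparameters are placeholders rather than recommendations, and the data-construction step is reduced to one JSON record per code snippet.

    # Hedged sketch: LoRA fine-tuning a causal LM on a code dataset (all names/values are placeholders).
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "huggyllama/llama-7b"  # placeholder base checkpoint
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)
    model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                                             target_modules=["q_proj", "v_proj"]))

    # Data construction: one JSONL record per function/file, e.g. {"text": "<docstring + code>"}.
    ds = load_dataset("json", data_files="my_codebase.jsonl")["train"]
    ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=1024),
                remove_columns=ds.column_names)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-code", per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, num_train_epochs=1, fp16=True),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()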