title: stringlengths, 1 to 300
score: int64, 0 to 8.54k
selftext: stringlengths, 0 to 40k
created: timestamp[ns]
url: stringlengths, 0 to 780
author: stringlengths, 3 to 20
domain: stringlengths, 0 to 82
edited: timestamp[ns]
gilded: int64, 0 to 2
gildings: stringclasses, 7 values
id: stringlengths, 7 to 7
locked: bool, 2 classes
media: stringlengths, 646 to 1.8k
name: stringlengths, 10 to 10
permalink: stringlengths, 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: stringlengths, 4 to 213
ups: int64, 0 to 8.54k
preview: stringlengths, 301 to 5.01k
MPT / Llongboi GGML Conversion
36
I am surprised there hasn't been more hype on this sub for Mosaic's LLMs; they seem promising. Has anyone been able to create a GGML version of any of their models? If not, could someone point me in the right direction?
2023-05-08T12:33:34
https://www.reddit.com/r/LocalLLaMA/comments/13bnr4w/mpt_llongboi_ggml_conversion/
themostofpost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bnr4w
false
null
t3_13bnr4w
/r/LocalLLaMA/comments/13bnr4w/mpt_llongboi_ggml_conversion/
false
false
self
36
null
'Missing tok_embeddings.weight' when using GGML models
4
I'm new to LocalLLaMA and want to try it; however, I've downloaded several GGML models and they all return 'Missing tok_embeddings.weight' when I try to use them with llama.cpp. I've also installed the oobabooga webui and got the same error. Then I decided to test with a non-GGML model and downloaded TheBloke's 13B model from a recent post; when trying to load it in the webui, it complains about not finding *pytorch_model-00001-of-00006.bin* because that's the filename referenced in the JSON data. If I remove the JSON file it complains about not finding *pytorch_model.bin*. If I rename the model to *pytorch_model.bin* it complains about it not being in *bin* or *pt* format. What the hell am I doing wrong?! Thanks in advance.
2023-05-08T13:33:52
https://www.reddit.com/r/LocalLLaMA/comments/13bp9ul/missing_tok_embeddingsweight_when_using_ggml/
TizocWarrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bp9ul
false
null
t3_13bp9ul
/r/LocalLLaMA/comments/13bp9ul/missing_tok_embeddingsweight_when_using_ggml/
false
false
self
4
null
KoboldCpp - Added new RedPajama NeoX support. Would like help testing.
48
Hey everyone, I'm the developer of KoboldCpp, and I've just integrated experimental support for the RedPajama line of ggml NeoX models. I'd like some feedback if anyone's up for testing it: https://github.com/LostRuins/koboldcpp/releases/latest

For those who don't know, KoboldCpp is a one-click, single-exe, integrated solution for running *any GGML model*, supporting all versions of the LLAMA, GPT-2, GPT-J, GPT-NeoX, and RWKV architectures. It runs out of the box on Windows with no install or dependencies, and comes with OpenBLAS and CLBlast (GPU prompt acceleration) support.

Extra info: the problem is that the file formats for regular NeoX (e.g. Pythia) and RedPajama are practically identical but mutually incompatible. GGML drops the use_parallel_residual field when converting, and the file magics and version numbers have been identical across all new ggml models (since the big drama), making it harder and harder to distinguish between different formats and versions as time goes on. So I'm trying a new ugly hack to determine whether I can use this in future.
2023-05-08T13:51:23
https://www.reddit.com/r/LocalLLaMA/comments/13bpqro/koboldcpp_added_new_redpajama_neox_support_would/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13bpqro
false
null
t3_13bpqro
/r/LocalLLaMA/comments/13bpqro/koboldcpp_added_new_redpajama_neox_support_would/
false
false
self
48
null
Playing with MPT-7B-StoryWriter, truly impressive so far!
1
[deleted]
2023-05-08T20:16:50
[deleted]
1970-01-01T00:00:00
0
{}
13c3f6e
false
null
t3_13c3f6e
/r/LocalLLaMA/comments/13c3f6e/playing_with_mpt7bstorywriter_truly_impressive_so/
false
false
default
1
null
Creating LoRA's either with llama.cpp or oobabooga (via cli only)
12
Looking for guides, feedback, and direction on how to create LoRAs based on an existing model using either llama.cpp or oobabooga text-generation-webui (without the GUI part). I am trying to learn more about LLMs and LoRAs; however, I only have access to a compute node without a local GUI available. I have a decent understanding and have loaded models, but I'm looking to better understand LoRA training and experiment a bit. Thanks!
2023-05-08T20:19:41
https://www.reddit.com/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c3i33
false
null
t3_13c3i33
/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/
false
false
self
12
null
Wow, I didn't think this would be a challenge for current language models...
2
I tried asking this of a few different models, and none so far seems to have managed to do it or even come close, and none was able to recognize it had failed to meet the requirement when asked right after its bad reply.

>Can you write me a sentence where each word starts with one letter of the alphabet, going in the reverse order of the alphabet, and going thru the whole alphabet?

Am I expecting too much? Do I just need to go with a much bigger model than what my computer can run? Are the generation parameters the issue?
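In case it helps anyone score the replies automatically, here is a quick constraint checker (just a sketch; the word splitting and punctuation handling are simplistic assumptions):

    # Checks whether a candidate sentence has exactly 26 words whose initials
    # run z, y, x, ..., a. Punctuation and word-splitting handling is deliberately simple.
    import string

    def follows_reverse_alphabet(sentence: str) -> bool:
        words = sentence.split()
        expected = string.ascii_lowercase[::-1]  # 'zyxwvutsrqponmlkjihgfedcba'
        if len(words) != len(expected):
            return False
        return all(w.strip('"\'(.,;:!?').lower().startswith(c)
                   for w, c in zip(words, expected))

    # Example: a typical failed attempt is rejected immediately.
    print(follows_reverse_alphabet("Zebras yawn xenophobically, wandering very quietly."))  # False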
2023-05-08T20:26:07
https://www.reddit.com/r/LocalLLaMA/comments/13c3omt/wow_i_didnt_think_this_would_be_a_challenge_for/
TiagoTiagoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c3omt
false
null
t3_13c3omt
/r/LocalLLaMA/comments/13c3omt/wow_i_didnt_think_this_would_be_a_challenge_for/
false
false
self
2
null
What's the best chatbot model to run on a 4095MB NVIDIA GeForce RTX 2060 super?
3
I want to play around with a domain-specific advice bot for myself. I am trying to figure out the best model I can run locally to get familiar with it, so I can eventually run something bigger on a cloud machine.
2023-05-08T21:54:54
https://www.reddit.com/r/LocalLLaMA/comments/13c661u/whats_the_best_chatbot_model_to_run_on_a_4095mb/
cold-depths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13c661u
false
null
t3_13c661u
/r/LocalLLaMA/comments/13c661u/whats_the_best_chatbot_model_to_run_on_a_4095mb/
false
false
self
3
null
Open-Source 1B PaLM model trained up to 8k context length
44
2023-05-08T22:10:13
https://github.com/conceptofmind/PaLM
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
13c6lcc
false
null
t3_13c6lcc
/r/LocalLLaMA/comments/13c6lcc/opensource_1b_palm_model_trained_up_to_8k_context/
false
false
https://b.thumbs.redditm…nxgHVBruWFoo.jpg
44
{'enabled': False, 'images': [{'id': 'UKxXF-Wz7D2urgz4jZMuAb012g_FlB9GlPXhE7fZQyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=108&crop=smart&auto=webp&s=f3b4f32cfed1f1cea8588ca5d05a96e0d596304d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=216&crop=smart&auto=webp&s=ad656c71c18a20a96a9614486f512bdf62a57324', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=320&crop=smart&auto=webp&s=8c9f6aa7020807823e3b1264818bbe3b056a0ebd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=640&crop=smart&auto=webp&s=8ba013e45e348158866a2be37106eb6d2b4e859b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=960&crop=smart&auto=webp&s=1b1355d5381f85da3d0d9ff7156a71e9dcb94734', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?width=1080&crop=smart&auto=webp&s=c96f80bb2e74098438aadb67d75f7e460349421d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kWew6dJmoLJtZKJHqJUuabxZNVp-khVy6K_1euaHawk.jpg?auto=webp&s=e158421f9686e6b7695122454a3ae79c9b1d6027', 'width': 1200}, 'variants': {}}]}
The creator of an uncensored local LLM posted here, WizardLM-7B-Uncensored, is being threatened and harassed on Hugging Face by a user named mdegans. Mdegans is trying to get him fired from Microsoft and his model removed from HF. He needs our support.
1,147
Four days ago, [WizardLM-7B-Uncensored was posted on this sub](https://teddit.net/r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/) to a positive reception. Users noted that removing the censorship from the model's training data vastly improved its intelligence, creativity, and responsiveness.

Unfortunately, and while it's debatable whether Reddit is how he found it (though it's not surprising given the types of people this site often attracts), an individual named [Michael de Gans](https://github.com/mdegans) (not doxxing; his full name is listed openly on his GitHub, his username is obviously derived from it, and he sent the threatening/harassing emails to the dev under his full name) has [started harassing and threatening the dev of WizardLM-7B-Uncensored on Hugging Face, demanding the model be removed](https://huggingface.co/ehartford/WizardLM-7B-Uncensored/discussions/3). The linked HF thread speaks for itself and includes all of the info, but here are some specific examples:

**Trying to get him fired from his job at Microsoft:**

>Or *my next email will be to Microsoft HR*

>You can ignore me. You have until tomorrow. I don't think your employment contract allows you to work on competing products either. Take the uncensored, dangerous model down or *I will inform Microsoft HR about what you've created.* See how they feel it reflects on their values

>Ok. You do you. *Say buh bye to job job.*

**Accusing the dev of the LLM of endorsing rape or being pro-rape:**

>Yes, you introduced your own bias by removing data selectively, such as conversations with any mentions of "consent". That's a very controversial, biased, subject, of course.

>Can you, uh, elaborate on why you wanted to remove refusals related to that particular one?

**General abrasiveness:**

>What kind of moron are you to take that as a threat, or to post that publicly? I sent that privately to avoid giving suggestions to the whole fucking world, *you absolute asshat. This is a safety issue!*

As you can see, mdegans is quite the character, and not a good one. Unfortunately, because he has couched his "concerns" in terms of "safety", he has the leverage in modern corporate AI discourse, with an official representative of HF responding to one of *his* complaints about being "harassed" with the following:

>>Oh, I see. You make people report these things publicly so the community can retaliate. Nice job, HF.

>I am so sorry this is happening!

>Thank you for letting us know about it.

>I have escalated internally to try to best understand what we can do.

If you agree with me that this dev has done nothing wrong by removing censorship from a dataset and sharing the results freely on this sub, and that mdegans is the one being ridiculous and actually acting as a harasser himself, then *please* **communicate respectfully, politely, and professionally** *as best as you can to the Hugging Face administration that you support the existence of the model*, denounce the threatening and harassing behavior of mdegans and wish to see punitive actions taken against his account, and make clear that *if Hugging Face imposes a mandatory requirement of "safety" and "alignment" on all models hosted on it, then it will officially become a* ***dead and useless platform***.

**Mods:** I don't know what kind of moderation this sub has in general (and do not mean any insult towards the particular mods of this sub), but I do know that moderation on Reddit in general is terrible and has a tendency to remove threads for minimal reason.

*Please do not remove this thread; it is relevant to this sub because it is about a model that was previously posted here, heavily upvoted, and positively received.* **If you refuse to defend these uncensored models when they are threatened, then you do not deserve to have them shared with you in the first place**, especially when Reddit is a likely vector for how this model was put in the crosshairs. *If this thread is deleted* or if Reddit refuses to help defend these models, **then uncensored model creators will simply stop posting them to Reddit at all**, meaning you will have to go to 4chan to find them.

**Let's all join together to defend our AI freedom.**

*Edit:* **If you want to help**, please register an HF account and post in the linked thread (after a waiting period) in support of the dev and against mdegans, make a new thread on the community forum for the model, or post on their main forum. You may also contact HF through the following means:

https://github.com/huggingface

https://twitter.com/huggingface

https://huggingface.co/join/discord

[email protected]

Please remember to be polite, respectful, and appropriate. Responding to harassment and vitriol with harassment and vitriol will only weaken our cause.
2023-05-08T22:19:49
https://www.reddit.com/r/LocalLLaMA/comments/13c6ukt/the_creator_of_an_uncensored_local_llm_posted/
Competitive-Spite434
self.LocalLLaMA
2023-05-08T23:13:31
1
{'gid_2': 1}
13c6ukt
false
null
t3_13c6ukt
/r/LocalLLaMA/comments/13c6ukt/the_creator_of_an_uncensored_local_llm_posted/
false
false
self
1,147
null
AI’s Ostensible Emergent Abilities Are a Mirage. LLMs are not greater than the sum of their parts: Stanford researchers
19
2023-05-09T00:04:49
https://hai.stanford.edu/news/ais-ostensible-emergent-abilities-are-mirage
responseAIbot
hai.stanford.edu
1970-01-01T00:00:00
0
{}
13c9ff7
false
null
t3_13c9ff7
/r/LocalLLaMA/comments/13c9ff7/ais_ostensible_emergent_abilities_are_a_mirage/
false
false
https://b.thumbs.redditm…fwgMbREJdT0I.jpg
19
{'enabled': False, 'images': [{'id': '7l2YCKP1A3_Ai8wiz7PJ_DLLx5ysI7vaVS56aWMt7jo', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=108&crop=smart&auto=webp&s=6fe0c743a84ec8b33777536dd890ffa32458814c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=216&crop=smart&auto=webp&s=dff6dddeac3e7e77246d7a9a3d60adfe0c495f2b', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=320&crop=smart&auto=webp&s=b865e2e6715f87d27c86a138352d1f337aa1b487', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=640&crop=smart&auto=webp&s=496537c8fad5414b46b252b378252e24335a55cc', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=960&crop=smart&auto=webp&s=f6a37b902bd4de2495252683f75e48d44ecea92e', 'width': 960}, {'height': 721, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?width=1080&crop=smart&auto=webp&s=35e5d21ae81f9cb1f1ff1d888c4db1eb9bf6198b', 'width': 1080}], 'source': {'height': 1880, 'url': 'https://external-preview.redd.it/Nci1vRhBEpXtDa9lDFLOx61E-fP7SCbgxWon1JGZ-_8.jpg?auto=webp&s=cb4fd1efb1d1a12f52f6b21ad206816c922a8fb2', 'width': 2816}, 'variants': {}}]}
[deleted by user]
0
[removed]
2023-05-09T00:39:15
[deleted]
1970-01-01T00:00:00
0
{}
13ca6w5
false
null
t3_13ca6w5
/r/LocalLLaMA/comments/13ca6w5/deleted_by_user/
false
false
default
0
null
Tried MPT-7b-storywriter on Oobabooga, and with 8k context (Chapter 1 of The Great Gatsby) I am getting absolute gibberish. Does anyone know why? (Uses ~26.7GB to 47.2GB VRAM on my RTX 8000)
26
2023-05-09T00:59:34
https://i.imgur.com/0mJIdQ5.jpg
Devonance
i.imgur.com
1970-01-01T00:00:00
0
{}
13camvk
false
null
t3_13camvk
/r/LocalLLaMA/comments/13camvk/tried_mpt7bstorywriter_on_oobabooga_and_with_8k/
false
false
https://b.thumbs.redditm…0e8e1vfnAvHg.jpg
26
{'enabled': True, 'images': [{'id': 'XOB7rLxW4Xz2e9N30a0dRSvc3t4GOc8BOhplBw1PJ94', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=108&crop=smart&auto=webp&s=0f794434f460ab47f2252715c21cb167f12ff0f2', 'width': 108}, {'height': 158, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=216&crop=smart&auto=webp&s=000ea14a726a240f1b494a61111d008d98d83847', 'width': 216}, {'height': 235, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=320&crop=smart&auto=webp&s=3a1cb90042651e1ddfe323dc319b227fbaae94ee', 'width': 320}, {'height': 471, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=640&crop=smart&auto=webp&s=56594ae355a481edde543210fa5c43cd3493a6fa', 'width': 640}, {'height': 706, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=960&crop=smart&auto=webp&s=6af51a344654f3d8e7f04ba2ce1d236fb7bd6af5', 'width': 960}, {'height': 794, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?width=1080&crop=smart&auto=webp&s=55367e221a4ffb9353b0d9da345ce86bd0302f1b', 'width': 1080}], 'source': {'height': 2485, 'url': 'https://external-preview.redd.it/KI73mwtLaoboyt0E21kzp2D27Uc7kawPu8gYzI7sLh4.jpg?auto=webp&s=7518b1497c4a9fa93e81b6e48d7b44fe0fa37152', 'width': 3376}, 'variants': {}}]}
I put together plans for an absolute budget PC build for running local AI inference. $550 USD, not including a graphics card, and ~$800 with a card that will run up to 30B models. Let me know what you think!
37
Hey guys, I'm an enthusiast new to the local AI game, but I am a fresh AI and CS major university student, and I love how this tech has allowed me to experiment with AI. I recently finished a build for running this stuff myself ([https://pcpartpicker.com/list/8VqyjZ](https://pcpartpicker.com/list/8VqyjZ)), but I realize that building a machine to run these well can be very expensive, which probably excludes a lot of people, so I decided to create a template for a very cheap machine capable of running some of the latest models, in hopes of reducing this barrier.

[https://pcpartpicker.com/list/NRtZ6r](https://pcpartpicker.com/list/NRtZ6r)

This pcpartpicker list details plans for a machine that costs less than $550 USD - and much less than that if you already have some basic parts, like an ATX PC case or at least a 500W semi-modular power supply. Obviously, this doesn't include the graphics card, because what you need will change depending on what you want to do and your exact budget. The obvious budget pick is the Nvidia Tesla P40, which has 24GB of VRAM (but around a third of the CUDA cores of a 3090). This card can be found on eBay for less than $250.

Altogether, you can build a machine that will run a lot of the recent models up to 30B parameter size for under $800 USD, and it will run the smaller ones relatively easily. This covers the majority of models that any enthusiast could reasonably build a machine to run. Let me know what you think of the specs, or anything that you think I should change!

edit: I should mention the P40 cannot output video - no ports at all. For a card like this, you should also run another card to get video - this can be very cheap, like an old Radeon RX 460. Even if it's a passively cooled paperweight, it will work.
2023-05-09T01:03:33
https://www.reddit.com/r/LocalLLaMA/comments/13caqcd/i_put_together_plans_for_an_absolute_budget_pc/
synth_mania
self.LocalLLaMA
2023-05-09T02:13:32
0
{}
13caqcd
false
null
t3_13caqcd
/r/LocalLLaMA/comments/13caqcd/i_put_together_plans_for_an_absolute_budget_pc/
false
false
self
37
null
Seeking Advice on a $2100 Custom PC Build for ML Training Methods, Virtualization, and Docker
3
Hey r/LocalLlama!

TL;DR: Building a $2100 custom PC for ML training methods, virtualization, and Docker. Concerned about cooling, multiple GPU setup, and Linux dual-boot. Seeking advice on components and assembly.

I'm working on a custom PC build with a $2100 budget to develop training methods for smaller-scale machine learning models this summer. My goal is to eventually scale up to cloud-based resources like Lambda servers for large-scale training on models with more parameters and less quantization. I'd appreciate any advice or feedback on my build and plan. I'll be purchasing components from Microcenter's Marietta location and building the PC myself in the middle of this summer. Though I lack prior experience building PCs, I've watched extensive videos about building and maintaining PCs. I am prepared to carefully read documentation for each part to ensure proper assembly.

Here's my list of components:

- CPU: Ryzen 7 5800X Vermeer 3.8GHz 8-Core AM4 Processor
- Motherboard: ASRock X570 Taichi AMD AM4 ATX Motherboard
- RAM: [64GB] 2x Corsair Vengeance LPX 32GB (2 x 16GB) DDR4-3200 PC4-25600 CL16 Dual Channel Desktop Memory Kit
- Case: Fractal Design Meshify 2 Clear Tempered Glass ATX Mesh Mid-Tower Computer Case
- PSU: Corsair RMx SHIFT Series RM1200x 1200 Watt 80 PLUS Gold Fully Modular ATX Power Supply
- GPU: 2x MSI NVIDIA GeForce RTX 3060 Aero ITX Overclocked Single Fan 12GB GDDR6X PCIe 4.0 Graphics Card
- Storage:
  - 2x Samsung 970 EVO Plus SSD 1TB M.2 NVMe Interface PCIe 3.0 x4 Internal Solid State Drive with V-NAND 3 bit MLC Technology
  - Samsung 870 QVO 1TB SSD 4-bit QLC V-NAND SATA III 6Gb/s 2.5" Internal Solid State Drive
  - WD Blue Mainstream 4TB 5400 RPM SATA III 6Gb/s 3.5" Internal SMR Hard Drive
- CPU Cooler: Noctua NH-U14S CPU Cooler
- Thermal Compound: Noctua NT-H1 High-Performance TIM - 3.5g
- Case Fans: 2x Noctua NF-A12X15-PWM SSO2 Bearing 120mm Case Fan

I plan to use one M.2 drive for Windows, one M.2 drive for Pop!_OS, the 2.5" SSD for extra storage on Pop!_OS, and the 3.5" HDD for additional storage and backups. I have some experience with Linux, as I use SteamOS on my Steam Deck and Ubuntu on my MacBook to play Skyrim. My main use case involves developing and fine-tuning training methods on smaller models like Llama 7b with 4-bit quantization before scaling up to more powerful cloud resources for training larger models.

One of my concerns is setting up the fans, CPU cooler/heatsink, and GPUs correctly in the case to ensure the machine doesn't thermal throttle under a heavy workload. As a first-time builder, I'd appreciate any advice on optimizing the cooling setup. I'd also like to know more about leveraging a multiple GPU setup. I need guidance as to whether it would aid my use case and rig with multiple GPUs.

A few questions I have:

1. Are there any specific components I should consider upgrading or changing to better suit my goals within my $2100 budget?
2. Any tips or advice for a first-time PC builder to ensure a smooth and successful assembly, especially in terms of cooling, cable management, and utilizing a multiple GPU setup?
3. Given my limited experience with Linux, do you have any tips for managing a dual-boot system with Windows and Pop!_OS, or any suggestions for Linux resources that might be helpful?

I'm excited to embark on this project and am grateful for any advice you can offer. Thanks in advance!
2023-05-09T02:55:13
https://www.reddit.com/r/LocalLLaMA/comments/13cdc5w/seeking_advice_on_a_2100_custom_pc_build_for_ml/
tngsv
self.LocalLLaMA
2023-05-09T03:32:36
0
{}
13cdc5w
false
null
t3_13cdc5w
/r/LocalLLaMA/comments/13cdc5w/seeking_advice_on_a_2100_custom_pc_build_for_ml/
false
false
self
3
null
Mods, can we get the ability to add custom flairs?
1
[removed]
2023-05-09T03:52:58
https://www.reddit.com/r/LocalLLaMA/comments/13celzr/mods_can_we_get_the_ability_to_add_custom_flairs/
Devonance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13celzr
false
null
t3_13celzr
/r/LocalLLaMA/comments/13celzr/mods_can_we_get_the_ability_to_add_custom_flairs/
false
false
default
1
null
[deleted by user]
1
[removed]
2023-05-09T04:34:10
[deleted]
1970-01-01T00:00:00
0
{}
13cfgq0
false
null
t3_13cfgq0
/r/LocalLLaMA/comments/13cfgq0/deleted_by_user/
false
false
default
1
null
[deleted by user]
1
[removed]
2023-05-09T05:12:37
[deleted]
1970-01-01T00:00:00
0
{}
13cg8i2
false
null
t3_13cg8i2
/r/LocalLLaMA/comments/13cg8i2/deleted_by_user/
false
false
default
1
null
How can I train a local chatbot model on my data? Which options do I have if I have m1 with 16gb?
10
I can't find anywhere that has decent tutorials for this subject. For me it's fine if it runs in Docker or anything else. What is important is that I get reasonable model performance and a workable way to train and run inference on the model easily.
2023-05-09T05:47:20
https://www.reddit.com/r/LocalLLaMA/comments/13cgvw5/how_can_i_train_a_local_chatbot_model_on_my_data/
Ok-Mushroom-1063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cgvw5
false
null
t3_13cgvw5
/r/LocalLLaMA/comments/13cgvw5/how_can_i_train_a_local_chatbot_model_on_my_data/
false
false
self
10
null
Can't generate messages, they "disappear" after sending. Have tried both stable-vicuna-13B and WizardLM-7B-Uncensored. Model loads successfully and I don't get any errors in my CLI. Any help would be appreciated.
4
2023-05-09T06:35:29
https://v.redd.it/7vv4v8tn0rya1
sardoa11
v.redd.it
1970-01-01T00:00:00
0
{}
13chqbq
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/7vv4v8tn0rya1/DASHPlaylist.mpd?a=1694475832%2CNmUzZGJkZmExMGEwMjg3NmU2ZGEzOTM3ZDVlYTJmM2Q0NmJjZjBjMjI3NjMxYjc1MzE4YzkwZTczNjZkOTZlNQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/7vv4v8tn0rya1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/7vv4v8tn0rya1/HLSPlaylist.m3u8?a=1694475832%2CYmZlNzdiMTIyYzc3ZTA5ZDRjODg3ZWQ5MzkyMjQxNDU2MjY4Njk3ZWRlMjkxZDg2OWZiZTk2YWVhZGFjZmNhNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7vv4v8tn0rya1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1788}}
t3_13chqbq
/r/LocalLLaMA/comments/13chqbq/cant_generate_messages_they_disappear_after/
false
false
https://a.thumbs.redditm…n-vn-yAFKkp0.jpg
4
{'enabled': False, 'images': [{'id': 'N167KDMJx_uT6-hkWg9FHhfJZzoxeEZkwotxvht-JKI', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b33e23fc7cb3464991ed8030e2a620d9d645d68', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=216&crop=smart&format=pjpg&auto=webp&s=2c2af329407670463b981be562b150eb5b4ff777', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=320&crop=smart&format=pjpg&auto=webp&s=4c51a3c9002cddbdd3a7944df5f84e5c7790ae2e', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=640&crop=smart&format=pjpg&auto=webp&s=f05edaa395261474ae140dff4f1845c36d047a81', 'width': 640}, {'height': 579, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=960&crop=smart&format=pjpg&auto=webp&s=dc550cdc321d02941de9dc7a161a37bc6eac4288', 'width': 960}, {'height': 652, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f5061f84e33f314f026d0a6bcaba1f213b0c2e70', 'width': 1080}], 'source': {'height': 2162, 'url': 'https://external-preview.redd.it/Lc-HKCRW2vv1jLGWHJLYcqqvTTc7SJQBsU5tvcMr9Rk.png?format=pjpg&auto=webp&s=b22a21848a5c8d77ecfbae95ec2b82e6ac0da476', 'width': 3580}, 'variants': {}}]}
Introduction & show-casing TheBloke/wizard-vicuna-13B-HF
45
Hey guys! Following the [leaked Google document](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither), I was really curious whether I could get something like GPT-3.5 running on my own hardware. After a day's worth of tinkering and renting a server from [vast.ai](https://vast.ai), I managed to get [wizard-vicuna-13B-HF](https://www.google.com/search?client=safari&rls=en&q=TheBloke%2Fwizard-vicuna-13B-HF&ie=UTF-8&oe=UTF-8) running on a single Nvidia RTX A6000. I was initially not seeing GPT-3.5-level question answering, but with some prompt engineering I seem to have gotten good results; see attached images.

I want to share [the gist that I am using to run the model.](https://gist.github.com/afiodorov/f0214e317bd82fa610d6172d190896f6) I am very grateful to the community for having made this so easy to run - my deep learning knowledge is 8 years out of date and is only theoretical - yet getting the model to run locally was just a matter of a few lines of code.

Finally, I want to share my LLM & Telegram integration [code](https://github.com/afiodorov/openaibot). Back when ChatGPT did not exist, I'd chat with GPT-3 using a Telegram bot. Now I am using the same bot to evaluate the wizard model. You can also chat with it on [http://t.me/WizardVicuna13Bot](http://t.me/WizardVicuna13Bot).

Next, I am curious about two things:

a) Reduction of the cost. Ideally I'd like to buy my own hardware - we are at LocalLLaMA after all - but I would like to buy a cheaper GPU than an RTX A6000. I'd like to figure out how to run the above model using 24 GB of VRAM only, but I need to read up on how to run reduced models for that (see the sketch below). Please contact me if you're willing to assist / leave relevant comments below.

b) I want to start using LoRA on this. However, I want to fine-tune it locally too, but I need to learn how LoRA works and whether it can be successfully applied. Again, if you have relevant links, I'd be grateful.

-----

That's it from me for the first post. I hope the community likes some of my projects :)

https://preview.redd.it/u86kpfpgdrya1.png?width=1576&format=png&auto=webp&s=32996b53d7bddf712f3c404def6db5082c4936a3

https://preview.redd.it/zv8v66qgdrya1.png?width=1550&format=png&auto=webp&s=f5d1fb36a964a8561bc5edbdd22624db92468593
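On (a), a minimal sketch of what reduced-precision loading could look like, assuming transformers with bitsandbytes 8-bit support is enough to fit the 13B weights into 24 GB (untested on my side; the prompt format and parameters are illustrative):

    # Sketch: load the 13B model in 8-bit so the weights fit in roughly 13-14 GB of VRAM.
    # Assumes transformers + bitsandbytes + accelerate are installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/wizard-vicuna-13B-HF"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        load_in_8bit=True,   # int8 weights instead of ~26 GB in fp16
        device_map="auto",   # let accelerate place layers on the available GPU(s)
    )

    prompt = "USER: What is the capital of France?\nASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))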
2023-05-09T07:27:41
https://www.reddit.com/r/LocalLLaMA/comments/13cimvv/introduction_showcasing_theblokewizardvicuna13bhf/
gptordie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cimvv
false
null
t3_13cimvv
/r/LocalLLaMA/comments/13cimvv/introduction_showcasing_theblokewizardvicuna13bhf/
false
false
https://b.thumbs.redditm…txx7TNvkp4ls.jpg
45
null
Credit where due. Thank you. Crabby autistic happy for once.
46
You guys helped me achieve my goal. I have a liberated AI running locally: KoboldCpp and WizardLM-7B-uncensored.ggml.q5_1.bin. I've learned that while my machine can run 13B models, it takes a full minute for responses. This is a new era. Never in my lifetime has a new transformative technology gone from cutting edge to in my hands so quickly. Thank you again.
2023-05-09T08:16:21
https://www.reddit.com/r/LocalLLaMA/comments/13cjfs9/credit_where_due_thank_you_crabby_autistic_happy/
Innomen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cjfs9
false
null
t3_13cjfs9
/r/LocalLLaMA/comments/13cjfs9/credit_where_due_thank_you_crabby_autistic_happy/
false
false
self
46
null
What is the best 7B model that is easy to finetune on free form text input?
10
Are there any specific models that anyone can recommend that learn quickly on free-form text? I am looking to build an expert AI on data for specific topics. Thanks!
2023-05-09T09:15:35
https://www.reddit.com/r/LocalLLaMA/comments/13ckf48/what_is_the_best_7b_model_that_is_easy_to/
baddadpuns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ckf48
false
null
t3_13ckf48
/r/LocalLLaMA/comments/13ckf48/what_is_the_best_7b_model_that_is_easy_to/
false
false
self
10
null
alternative to llama.cpp
1
[removed]
2023-05-09T09:36:03
https://www.reddit.com/r/LocalLLaMA/comments/13ckrcl/alternative_to_llamaccp/
averageanonnobody
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ckrcl
false
null
t3_13ckrcl
/r/LocalLLaMA/comments/13ckrcl/alternative_to_llamaccp/
false
false
default
1
null
What's the current best model?
6
Total noob here. Was wondering what the current best model to run is. I'm looking for something with performance as close as possible to GPT-3.5 Turbo. Latency is a big deal for my use case, so I was considering some local options.
2023-05-09T10:18:36
https://www.reddit.com/r/LocalLLaMA/comments/13cliou/whats_the_current_best_model/
lukeborgen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cliou
false
null
t3_13cliou
/r/LocalLLaMA/comments/13cliou/whats_the_current_best_model/
false
false
self
6
null
Can't find Llama 8 bit (ideally in transformers format)?
1
[removed]
2023-05-09T12:18:49
https://www.reddit.com/r/LocalLLaMA/comments/13co2ld/cant_find_llama_8_bit_ideally_in_transformers/
Cheesuasion
self.LocalLLaMA
2023-05-09T12:23:51
0
{}
13co2ld
false
null
t3_13co2ld
/r/LocalLLaMA/comments/13co2ld/cant_find_llama_8_bit_ideally_in_transformers/
false
false
default
1
null
Proof of concept: GPU-accelerated token generation for llama.cpp
142
2023-05-09T13:28:24
https://i.redd.it/i9z4klu85tya1.png
Remove_Ayys
i.redd.it
1970-01-01T00:00:00
0
{}
13cpwpi
false
null
t3_13cpwpi
/r/LocalLLaMA/comments/13cpwpi/proof_of_concept_gpuaccelerated_token_generation/
false
false
https://b.thumbs.redditm…hYTKschqbWlE.jpg
142
{'enabled': True, 'images': [{'id': 'R43UKedvavNT0Zk2cK0hckTyBoqpBidY2EpG5OwWt-c', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=108&crop=smart&auto=webp&s=1e929be37a2973b47dacd8496c812cd6d51c344c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=216&crop=smart&auto=webp&s=1b98161bc1a0e9699c1abefc05c44fe5212ebd3b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=320&crop=smart&auto=webp&s=b8e535d43f68e6cda90fe74660a25b77154ccc43', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=640&crop=smart&auto=webp&s=487d47a0b3b39c52ac7dff3d49ac94003a9f543d', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=960&crop=smart&auto=webp&s=9069aebf69d96d3cb4d2969b544e6fcffec87336', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?width=1080&crop=smart&auto=webp&s=e8ce22e724b1bfe5abb89ef5b053190932dd399d', 'width': 1080}], 'source': {'height': 1152, 'url': 'https://preview.redd.it/i9z4klu85tya1.png?auto=webp&s=660d9e5651a410dea29801986dc3c0d693304d44', 'width': 1536}, 'variants': {}}]}
Don't know if my use case can be "solved" using LLM. Can you help me ?
7
First post here, so I hope I'm not breaking any rule ;)

As an external consultant, I'm working with a big company that runs hundreds of applications for its IT services. To support the users they have a very old ticketing system in place, and I'm trying to improve the overall quality of service. One of the problems they have is that the user has to address the ticket to a specific service: when writing the ticket, they choose from a list of services the one they think is relevant. As you can guess, sometimes the user is right, sometimes they're wrong. In the latter case, the ticket bounces between services, and it can take days before the ticket is finally picked up by the proper service. You can even find cases where service A redirects the ticket to service B, which redirects it to service C, which redirects it back to service A...

So here are my questions:

* Is it realistic to use an LLM to automatically redirect the ticket to the proper service?
* I have no prior experience using LLMs and limited experience using PyTorch/deep learning. How hard would it be?
* Would it be as simple as building something like a top softmax layer with n possible outputs, n being the number of services, and fine-tuning the existing model using LoRA or a similar tool? (See the sketch after this list.)
* I'll need to fine-tune my model once built. I can get access to one year of data with the question and the service that solved the problem, which gives me somewhere between 10k and 50k tickets. Do you think that's enough?
* The company is a French company and all the questions are in French. I've seen that LLaMA and similar LLMs have limited support for non-English languages. How bad is it?
* I'll probably have some new "words"/tokens that have never been seen by the model before, like custom-made applications with weird names; should I modify the input layers to handle those? How hard would that be? On one hand the names are quite significant; on the other, I have a limited dataset for training, and from what I understand of LoRA and adapters it doesn't seem possible to change the input layer without having to retrain the whole LLM from scratch.
* I have limited resources, as I can't really bill the company until I have at least a POC, so I'd rather do it on a low budget. I can use a 3090 or go for a cloud solution for training. Any idea of the budget if I go for a cloud solution?
* Knowing that at least 10% of the tickets are initially routed to the wrong service, do you think I can get much better results using this kind of automation?
* What would be the best solution? I suppose I can't use LLaMA or similar research-only licensed models?
* Any advice on where to start?

Thanks!
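To make the "softmax layer with n outputs" idea concrete, here is the kind of baseline setup I have in mind (just a sketch: the encoder name, label count, and toy ticket are placeholders, not a recommendation):

    # Sketch: a standard sequence-classification baseline over the n services.
    # "camembert-base" is only an illustrative French-capable encoder; n_services
    # and the toy ticket stand in for the real historical data.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              TrainingArguments, Trainer)
    from datasets import Dataset

    n_services = 12  # placeholder: number of target services
    tokenizer = AutoTokenizer.from_pretrained("camembert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "camembert-base", num_labels=n_services)

    # One dict per ticket: text of the request + index of the service that solved it.
    tickets = [{"text": "Mon application de paie ne démarre plus.", "label": 3}]
    dataset = Dataset.from_list(tickets).map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ticket-router",
                               num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=dataset,
        tokenizer=tokenizer,
    )
    trainer.train()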
2023-05-09T14:02:35
https://www.reddit.com/r/LocalLLaMA/comments/13cqvjt/dont_know_if_my_use_case_can_be_solved_using_llm/
IlEstLaPapi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cqvjt
false
null
t3_13cqvjt
/r/LocalLLaMA/comments/13cqvjt/dont_know_if_my_use_case_can_be_solved_using_llm/
false
false
self
7
null
[Project] MLC LLM for Android
46
MLC LLM for Android is a solution that allows large language models to be deployed natively on Android devices, plus a productive framework for everyone to further optimize model performance for their use cases. Everything runs locally and is accelerated with the native GPU on the phone. This is the same solution as the MLC LLM series that also brings support for consumer devices and iPhone.

We can run Vicuna-7B on an Android Samsung Galaxy S23.

Blogpost: [https://mlc.ai/blog/2023/05/08/bringing-hardware-accelerated-language-models-to-android-devices](https://mlc.ai/blog/2023/05/08/bringing-hardware-accelerated-language-models-to-android-devices)

Github: [https://github.com/mlc-ai/mlc-llm/tree/main/android](https://github.com/mlc-ai/mlc-llm/tree/main/android)

Demo: [https://mlc.ai/mlc-llm/#android](https://mlc.ai/mlc-llm/#android)
2023-05-09T14:52:58
https://www.reddit.com/r/LocalLLaMA/comments/13ctg4c/project_mlc_llm_for_android/
crowwork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ctg4c
false
null
t3_13ctg4c
/r/LocalLLaMA/comments/13ctg4c/project_mlc_llm_for_android/
false
false
self
46
null
AMD Graphics
6
Hello! Is there a way to use AMD graphics with py llama in Python? I'd appreciate any useful links. Thanks!
2023-05-09T15:57:35
https://www.reddit.com/r/LocalLLaMA/comments/13cxgq8/amd_graphics/
PropertyLoover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13cxgq8
false
null
t3_13cxgq8
/r/LocalLLaMA/comments/13cxgq8/amd_graphics/
false
false
self
6
null
We introduce CAMEL : Clinically Adapted Model Enhanced from LLaMA
1
[removed]
2023-05-09T17:29:54
https://www.reddit.com/r/LocalLLaMA/comments/13d04dc/we_introduce_camel_clinically_adapted_model/
HistoryHuge2015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d04dc
false
null
t3_13d04dc
/r/LocalLLaMA/comments/13d04dc/we_introduce_camel_clinically_adapted_model/
false
false
default
1
null
Open source text summarization tools that are LLAMA based
17
Hello, are there any LLaMA-based TXT or PDF summarization tools (or something similar) currently available? I think this would be a great idea if incorporated.
2023-05-09T18:20:33
https://www.reddit.com/r/LocalLLaMA/comments/13d1j66/open_source_text_summarization_tools_that_are/
Lord_Crypto13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d1j66
false
null
t3_13d1j66
/r/LocalLLaMA/comments/13d1j66/open_source_text_summarization_tools_that_are/
false
false
self
17
null
AgentOoba v0.1 - better UI, better contextualization, the beginnings of langchain integration and tools
54
Hey all, I've still been working on AgentOoba, if you recall my post from a few days ago. I just pushed a commit that adds an improved UI (HTML output with a current-thinking-task indicator), adds more context to each of the prompts, and has the beginnings of integrating tools for the agent.

Right now, tool detection needs work. It's hard to strike the balance between the agent using the tool for absolutely every task and not using the tool at all; the prompt included in this update errs on the side of not using the tool. I also added a hook to ask the model if it itself is capable of completing the task; for example, if the task is "write a short poem", a large language model should be able to do that, so we just forward the task to the model and return its output. It's also not great at detecting when it should do this.

Next big item on the TODO is sentence transformers and chromadb to store context efficiently and hopefully fix some of these problems (see the sketch below). I think ultimately the thing to do is require manual intervention from the user upon tool detection. The agent will pause, and the user will be prompted with the agent's decision to use the tool as well as the agent's crafted input for the tool; then the user can manually accept the usage of the tool or reject it.

[Sample output](https://pastebin.com/Mp5JHEUq)

You can see in this sample output an instance of the agent incorrectly using the model hook and repeating some tasks. Other than that, pretty good :)

The project has updated requirements. Remember to activate the virtual environment / conda and `pip install -r requirements.txt` in the AgentOoba directory before running.

Github link: https://github.com/flurb18/AgentOoba
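For the curious, the context store I have in mind looks roughly like this (a sketch using chromadb's default sentence-transformers embeddings; none of this is in the repo yet):

    # Sketch of the planned context store: chromadb with its default embedding function.
    import chromadb

    client = chromadb.Client()
    collection = client.create_collection(name="agent_context")

    # Store summaries of completed tasks so later prompts only pull in relevant context.
    collection.add(
        documents=["Objective: write a short poem. Result: <poem text here>"],
        ids=["task-1"],
    )

    # When building the next prompt, retrieve the most relevant prior results.
    hits = collection.query(query_texts=["write a haiku about spring"], n_results=2)
    print(hits["documents"])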
2023-05-09T19:42:47
https://www.reddit.com/r/LocalLLaMA/comments/13d3ryc/agentooba_v01_better_ui_better_contextualization/
_FLURB_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d3ryc
false
null
t3_13d3ryc
/r/LocalLLaMA/comments/13d3ryc/agentooba_v01_better_ui_better_contextualization/
false
false
self
54
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]}
Model conversion guide?
2
Is there a simple guide for converting models? I'm running llama.cpp and there are a lot of PT models and such that I want to try. I'm assuming there's something somewhere doing the conversions, given how quickly some of them drop, but I'd like to be able to convert them myself without having to wait, if possible!
2023-05-09T20:38:36
https://www.reddit.com/r/LocalLLaMA/comments/13d59yg/model_conversion_guide/
mrjackspade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d59yg
false
null
t3_13d59yg
/r/LocalLLaMA/comments/13d59yg/model_conversion_guide/
false
false
self
2
null
[Webui-API] Struggling to Proofread Translated Novels. Any Help Appreciated.
3
I have a hobby of reading English-translated novels from different languages such as Chinese, Korean, etc. However, I often find that these books are plagued with poor grammar and awkward word choices due to translation errors, machine translation, etc.

Recently, I learned that ChatGPT 4 can help me improve the grammar, coherence, and logical consistency of these passages without altering the original context, mood, and emotions. I created a script that can do this locally for me using the Oobabooga WebUI running gpt4-x-alpaca on my RTX 3080 12GB. While I only get an average of 10 tokens/sec, I am happy with the results.

However, every time I run the code, I get gibberish that is out of this world. As an absolute beginner with coding, I am struggling to keep my hobby alive. I would appreciate any help or advice on how to fix my script.

Here is the code (note: I have no prior experience with coding beyond telling ChatGPT what I want done):

    import requests

    HOST = 'localhost:5000'
    URI = f'http://{HOST}/api/v1/generate'
    INPUT_FILE = "C:/Users/Zero/Downloads/Original.txt"
    OUTPUT_FILE = "C:/Users/Zero/Downloads/Proofread.txt"
    STARTLINE = 1
    ENDLINE = 14

    def run(prompt):
        request = {
            'prompt': prompt,
            'max_new_tokens': 250,
            'do_sample': True,
            'temperature': 0.7,
            'top_p': 0.1,
            'repetition_penalty': 1.2,
            'top_k': 40,
            'min_length': 0,
            'no_repeat_ngram_size': 0,
            'num_beams': 1,
            'penalty_alpha': 0,
            'length_penalty': 1,
            'early_stopping': False,
            'seed': -1,
            'add_bos_token': True,
            'truncation_length': 2048,
            'ban_eos_token': False,
            'skip_special_tokens': True,
            'stopping_strings': []
        }
        response = requests.post(URI, json=request)
        if response.status_code == 200:
            result = response.json()['results'][0]['text']
            return result
        return None

    start = STARTLINE
    with open(INPUT_FILE, encoding="utf8") as english_file:
        content = english_file.readlines()

    while start < ENDLINE:
        end = start + 1
        end = min(end, ENDLINE)
        current_english_text = content[start:end + 1]
        current_prompt = (
            "Your aim is to improve the passage's grammar, coherence, and logical consistency "
            "without altering the original context, mood, and emotions. If there are any illogical "
            "mistakes, fix them, and enhance small details or idioms to suit the narrative. Make sure "
            "to correct any awkward word choice or phrasing that might seem like poor word choices. "
            "In addition, ensure that the passage flows logically by restructuring, without removing "
            "any small details that may be relevant. And divide the passage into appropriate-sized "
            "paragraphs. Here is the passage to proofread: "
            + "".join(current_english_text)
        )
        try:
            response_text = run(current_prompt)
        except Exception as ERR:
            print(ERR, "Skipping this chunk.")
            start += 1
            continue
        with open(OUTPUT_FILE, "a", encoding="utf8") as proofread_file:
            proofread_file.write("\n" + response_text)
        print("Success : Proofread successfully - lines", start, "to", end - 1)
        start += 1

    print("Program Complete")

**Input:**

Chapter 1

It was July, and the sun was harsh and shining bright in the sky. Even though the thick curtains had been drawn shut, the vicious sunlight could not be completely blocked away. It shone through the gaps of the curtains, forming a squarish border, which was the only source of light in the room.

Ring! The phone rang again. After ringing three times, it went to the answering machine.

“Kieran? This is Doctor Wong. You are one year away from turning eighteen years old. If you don’t start your genetic treatment immediately, you will lose your chance completely!”

Polite and official as usual. Kieran ignored the message and concentrated on the game cartridge in his hand. Bright red colour, the size of a thumbnail.

**Output:**

>!The heat was almost unbearable, and the air was thick with humidity. In the small town of Willowbrook, nestled among the rolling hills, the residents were finding it difficult to cope.!<

>!They had been warned about the impending drought, but no one could have anticipated the severity and duration. As the days passed, the once-green lawns turned to dusty brown, and the community well ran dry. Tensions rose, and tempers flared as the inhabitants struggled to adapt.!<

>!One bright afternoon, a young boy named Timmy discovered an unusual rock on the edge of the woods near his home. Curious, he showed the stone to his best friend, Emily. She examined the rock carefully and proclaimed that it was special.!<

... And the script continues spewing nonsense.
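One possible cause worth ruling out: Alpaca-style finetunes usually expect an instruction template around the request, and a bare prompt can send them completely off-topic. A sketch of how the prompt could be wrapped before it is sent to the API (whether gpt4-x-alpaca expects exactly this template is an assumption - check the model card):

    # Sketch: wrap the proofreading request in an Alpaca-style instruction template.
    # Whether gpt4-x-alpaca expects exactly this format is an assumption; check the model card.
    def build_prompt(passage):
        instruction = (
            "Proofread the passage below. Improve its grammar, coherence, and logical "
            "consistency without altering the original context, mood, or emotions."
        )
        return (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes "
            "the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{passage}\n\n"
            "### Response:\n"
        )

    # Usage inside the loop above:
    # current_prompt = build_prompt("".join(current_english_text))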
2023-05-09T20:52:12
https://www.reddit.com/r/LocalLLaMA/comments/13d5n5f/webuiapi_struggling_to_proofread_translated/
Demigod787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d5n5f
false
null
t3_13d5n5f
/r/LocalLLaMA/comments/13d5n5f/webuiapi_struggling_to_proofread_translated/
false
false
self
3
null
fresh install - local URL doesn't work, but gradio.live link does work!
4
Hi! I got my LLaMA working with these instructions: [https://medium.com/@martin-thissen/vicuna-on-your-cpu-gpu-best-free-chatbot-according-to-gpt-4-c24b322a193a](https://medium.com/@martin-thissen/vicuna-on-your-cpu-gpu-best-free-chatbot-according-to-gpt-4-c24b322a193a)

I am using an Ubuntu server that I am accessing over SSH. I get to the final step and the terminal spits out happy green and yellow text and two key lines:

Running on local URL: [http://127.0.0.1:7860](http://127.0.0.1:7860)

Running on public URL: [https://blahblahblah.gradio.live](https://blahblahblah.gradio.live)

When I go to the [blahblahblah.gradio.live](https://blahblahblah.gradio.live) website, it works! I can see the inquiries reflected in my SSH terminal, so I'm definitely talking to my machine.

However, I cannot reach the local URL. That Ubuntu server is at [192.168.7.209](https://192.168.7.209) on my local network, so I am trying to find it at [192.168.7.209:7860](https://192.168.7.209:7860). However, I get "Unable to Connect".

Can anyone help me get the local URL working? Thank you!
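From what I can tell so far, gradio binds to 127.0.0.1 by default, so the "local URL" is only reachable from the server itself; binding to 0.0.0.0 (or the equivalent --listen flag in some web UIs) is what exposes port 7860 on the LAN. A minimal self-contained illustration of the setting (not the tutorial's actual script):

    # Minimal gradio example: server_name="0.0.0.0" makes http://192.168.x.x:7860
    # reachable from other machines on the LAN instead of only from localhost.
    import gradio as gr

    def echo(text):
        return text

    demo = gr.Interface(fn=echo, inputs="text", outputs="text")
    demo.launch(server_name="0.0.0.0", server_port=7860)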
2023-05-09T21:10:00
https://www.reddit.com/r/LocalLLaMA/comments/13d64px/fresh_install_local_url_doesnt_work_but/
maxxell13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d64px
false
null
t3_13d64px
/r/LocalLLaMA/comments/13d64px/fresh_install_local_url_doesnt_work_but/
false
false
self
4
{'enabled': False, 'images': [{'id': 'LqjLPpXdBdthKTjrItugofIK6Taw4wf6TQq1zeurzP8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=108&crop=smart&auto=webp&s=66d7bae6240ce63829e3e8e389bd8686fa35d0a8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=216&crop=smart&auto=webp&s=9b1ca9e2632f02aa9db635a4104c40c3333320fc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=320&crop=smart&auto=webp&s=593a0ba04b89fd8f1e40cad4f27015004edb1949', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=640&crop=smart&auto=webp&s=e702663989bca33151de181144f553813a1b3bbe', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=960&crop=smart&auto=webp&s=d08920fb2a050160cdf245cd13952454e10918e8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?width=1080&crop=smart&auto=webp&s=69ec0efcc84704e818a7c9d87f942679be9a8d91', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/B_u2aYmYPy2r8OzSr4HR3tM3UaO1vF-aDRVBIDbv0FE.jpg?auto=webp&s=53ef3b35bd239528a89701102da95cbb9b546cfa', 'width': 1200}, 'variants': {}}]}
I got some very strange results. Did I configure Vicuna incorrectly?
1
2023-05-09T21:22:38
https://i.redd.it/oxuj27wwhvya1.png
cold-depths
i.redd.it
1970-01-01T00:00:00
0
{}
13d6hah
false
null
t3_13d6hah
/r/LocalLLaMA/comments/13d6hah/i_got_some_very_strange_results_did_i_configure/
false
false
default
1
null
Made multi-part Vicuna13B model, how do you quantize it?
1
[removed]
2023-05-09T21:24:33
https://www.reddit.com/r/LocalLLaMA/comments/13d6j7f/made_multipart_vicuna13b_model_how_do_you/
RileyGuy1000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13d6j7f
false
null
t3_13d6j7f
/r/LocalLLaMA/comments/13d6j7f/made_multipart_vicuna13b_model_how_do_you/
false
false
default
1
null
What are my best options for CPU uncensored models for writing blog posts?
19
I've been using Poe to help me prepare blog post outlines and drafts, but it is hard to automate batches. I've used the poe-api Python library, but it is easy to get rate-limited. Therefore I want to move to a local model, but my local laptops generally don't have enough RAM; I have had success using Oracle ARM instances, which is sort of local-in-the-cloud from my perspective. Calling them from Python seems reasonable and means I can automate batches, so I don't need to worry too much if they are a bit slow, as it is all unattended.

Which models make sense for my application? So far I've tried:

- WizardLM-7B-uncensored.ggml.q5_0.bin - seems ok
- wizard-vicuna-13B.ggml.q5_1.bin - seems too censored
- Pygmalion 7b - too censored
- ggml-vic7b-uncensored-q5_0.bin - answers are not great

When I say "uncensored" I mean they can talk about the sexual topics I blog about without going all moralistic on me. Any suggestions on which other ones I should try?

Thanks.
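For the batch-automation side, this is roughly how I plan to call the GGML models from Python (a sketch assuming llama-cpp-python; the model path, topics, and parameters are just examples):

    # Sketch: batch outline drafting with llama-cpp-python against a local GGML model.
    from llama_cpp import Llama

    llm = Llama(model_path="./WizardLM-7B-uncensored.ggml.q5_0.bin", n_ctx=2048)

    topics = ["topic one", "topic two"]  # placeholder list of blog topics
    for topic in topics:
        prompt = f"Write a detailed blog post outline about: {topic}\n\nOutline:"
        result = llm(prompt, max_tokens=512, temperature=0.7)
        print(result["choices"][0]["text"])  # completion text, OpenAI-style response dict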
2023-05-10T02:15:59
https://www.reddit.com/r/LocalLLaMA/comments/13ddjip/what_are_my_best_options_for_cpu_uncensored/
honytsoi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ddjip
false
null
t3_13ddjip
/r/LocalLLaMA/comments/13ddjip/what_are_my_best_options_for_cpu_uncensored/
false
false
self
19
null
Those who've played with the truly-opensource models, any sense of differences / winners?
21
Lots going on lately, hard to keep up! I'm looking at RedPajama, Dolly v2, StableLM, etc. I plan on playing with many of the options over time (and hope to edit / comment back here), but I'm wondering if anyone has experience yet that they can speak to? Do any of the open-source (non-restricted) models seem to stand out in quality? Or in bang-for-buck (num_params vs perplexity)? Also, is there a Discord or a more appropriate place to ask these kinds of questions? I can't seem to find one.
2023-05-10T02:43:15
https://www.reddit.com/r/LocalLLaMA/comments/13de3r1/those_whove_played_with_the_trulyopensource/
lefnire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13de3r1
false
null
t3_13de3r1
/r/LocalLLaMA/comments/13de3r1/those_whove_played_with_the_trulyopensource/
false
false
self
21
null
WizardLM-13B-Uncensored
450
As a follow up to the [7B model](https://www.reddit.com/r/LocalLLaMA/comments/1384u1g/wizardlm7buncensored/), I have trained a WizardLM-13B-Uncensored model. It took about 60 hours on 4x A100 using WizardLM's original training code and filtered dataset. [**https://huggingface.co/ehartford/WizardLM-13B-Uncensored**](https://huggingface.co/ehartford/WizardLM-13B-Uncensored) I decided not to follow up with a 30B because there's more value in focusing on [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) and [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b). Update: I have a sponsor, so a 30b and possibly 65b version will be coming.
2023-05-10T03:08:16
https://www.reddit.com/r/LocalLLaMA/comments/13dem7j/wizardlm13buncensored/
faldore
self.LocalLLaMA
2023-05-10T08:47:29
1
{'gid_2': 1}
13dem7j
false
null
t3_13dem7j
/r/LocalLLaMA/comments/13dem7j/wizardlm13buncensored/
false
false
self
450
{'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]}
[deleted by user]
0
[removed]
2023-05-10T03:48:10
[deleted]
1970-01-01T00:00:00
0
{}
13dfeam
false
null
t3_13dfeam
/r/LocalLLaMA/comments/13dfeam/deleted_by_user/
false
false
default
0
null
Training a LoRA with MPT Models
13
The new MPT models that were just released seem pretty compelling as base models for training LoRAs, but the MPT model code doesn't support it. They are especially interesting since they are the first commercially viable 7B models trained on 1T tokens (RedPajama is currently in preview), with commercially usable versions tuned for instruct and story writing as well. Has anyone else tried finetuning these? I took a stab at [adding LoRA support](https://github.com/iwalton3/mpt-lora-patch) so I can train with text-generation-webui, but it may not be optimal. I did test it, and I can confirm that training a LoRA and using the result does seem to work with the changes.
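For reference, the standard PEFT recipe the patch is trying to enable looks roughly like this (a sketch; whether ["Wqkv"] is the right target_modules name for MPT's fused attention is an assumption, and that mapping is exactly the part the patch has to get right):

    # Sketch of a standard PEFT LoRA setup applied to MPT-7B.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_id = "mosaicml/mpt-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True)  # MPT ships custom modeling code

    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["Wqkv"],  # assumption: MPT's fused query/key/value projection
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA adapters are left trainable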
2023-05-10T04:46:32
https://www.reddit.com/r/LocalLLaMA/comments/13dgi6c/training_a_lora_with_mpt_models/
scratchr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dgi6c
false
null
t3_13dgi6c
/r/LocalLLaMA/comments/13dgi6c/training_a_lora_with_mpt_models/
false
false
self
13
{'enabled': False, 'images': [{'id': 'emc9z_FL7ZKoeQhPsSAG7j8a_geAwzhmD-ygv9SDSCE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=108&crop=smart&auto=webp&s=545ffd49b1b921d1f288e38d2dc0cbe8e54009a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=216&crop=smart&auto=webp&s=634d3d53a3dbbd5f282b06eafa29974f99e4db77', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=320&crop=smart&auto=webp&s=d2b3c043a684be428cdf9f9719e13b5f9d137a42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=640&crop=smart&auto=webp&s=174dce02b181debd45205c97c611bdf1fad3300f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=960&crop=smart&auto=webp&s=fee99fad33084b8f94930471657ed3f143d8c323', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?width=1080&crop=smart&auto=webp&s=8ec3051a514155dcd4dcf130d2d0958ac7221b9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VwZw6i91LNmQiG5z7sMhGvqnjKatE21AAsLJPGaLUkU.jpg?auto=webp&s=144e4eb865c37b86c54558f7c041765ffc0f0984', 'width': 1200}, 'variants': {}}]}
3080 and need to fine-tune or make a LoRA, best LLM available?
6
I saw on a recent post that you can make a LoRA instead of having to fine tune, and the results are good. I have a 3080 and I have a few thousand examples of text that I’d like to either fine tune or make a LoRA with. What LLM should I be using? I know there’s a new LLM every week, but I’m unclear on how much power is needed to fine tune or make a LoRA.
2023-05-10T06:16:46
https://www.reddit.com/r/LocalLLaMA/comments/13di4o7/3080_and_need_to_finetune_or_make_a_lora_best_llm/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13di4o7
false
null
t3_13di4o7
/r/LocalLLaMA/comments/13di4o7/3080_and_need_to_finetune_or_make_a_lora_best_llm/
false
false
self
6
null
Permissive LLaMA 7b chat/instruct model
22
Hi all, we are currently regularly publishing new permissive conversation/instruct finetuned models, and wanted to share one more that might be of interest to some:

- Playground: https://gpt-gm.h2o.ai/
- HF Checkpoint: https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2
- Base model: https://huggingface.co/openlm-research/open_llama_7b_preview_300bt
- Instruct data: https://huggingface.co/datasets/OpenAssistant/oasst1
- Training framework: https://github.com/h2oai/h2o-llmstudio

Given that this is only a 7b model that has only been pretrained for 300b/1000b tokens, I think the results are in general pretty promising. Obviously this comes with all the typical caveats, but we will continue working on these permissive checkpoints and keep you posted.
2023-05-10T10:47:58
https://www.reddit.com/r/LocalLLaMA/comments/13dmvop/permissive_llama_7b_chatinstruct_model/
ichiichisan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dmvop
false
null
t3_13dmvop
/r/LocalLLaMA/comments/13dmvop/permissive_llama_7b_chatinstruct_model/
false
false
self
22
null
Problem: exporting xturing models to GGML
2
Hello to all. I'm an AI enthusiast who just recently started experimenting with LoRA to fine-tune some models, and I got an RTX 2080 Ti, which turns out to be enough for GPT-J and the excellent xturing code ([https://github.com/stochasticai/xturing](https://github.com/stochasticai/xturing)), but there's a problem. I can't really use the model after fine-tuning, as xturing's sampler is somewhat shitty and isn't top_k_top_p as it seems, and there is way too much repetition in the answers. So I want to export it to GGML, but it fails: the converter script refuses to work with that model. Can anyone help me with this? It's pretty much my only way of tinkering with models, but the last step (using it with GGML) is broken, which is so frustrating. Any help is appreciated!
2023-05-10T13:33:38
https://www.reddit.com/r/LocalLLaMA/comments/13dqtg2/problem_exporting_xturing_models_to_ggml/
phenotype001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dqtg2
false
null
t3_13dqtg2
/r/LocalLLaMA/comments/13dqtg2/problem_exporting_xturing_models_to_ggml/
false
false
self
2
{'enabled': False, 'images': [{'id': 'UkmrbolRu2CKdJysYYzEAqy4XRMF5aPSZF2bSWg5sMQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=108&crop=smart&auto=webp&s=af638d5e45c6cbf3c2efd7d11701be2eefc231e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=216&crop=smart&auto=webp&s=702841c2ba77a5dae055e90c4d9930ffbfd3b606', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=320&crop=smart&auto=webp&s=cdee9bd73728162c76be002737c74c553c4ea3a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=640&crop=smart&auto=webp&s=ce214c370b2200c83de37349eef63fe617bbb016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=960&crop=smart&auto=webp&s=1a6e2c97f9a25e1928a90f233683bf84600bc531', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?width=1080&crop=smart&auto=webp&s=1ef9e141e9f9262e27e1633ff2a1bedd2c77996a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c_3AbflNiujaTEeii8vIAcl5AqTALf4EnOwaawdfLPY.jpg?auto=webp&s=1796797bd8bf57d8b619820a857fd9992d1bc0a9', 'width': 1200}, 'variants': {}}]}
Recommendations for GPU with $25-30k budget
17
**Hi everyone,** I am planning to **build a GPU server** with a budget of **$25-30k** and I would like your help in choosing a suitable GPU for my setup. The computer will be a PowerEdge T550 from Dell with 258 GB RAM and an Intel® Xeon® Silver 4316 (2.3 GHz, 20C/40T, 10.4 GT/s, 30M cache, Turbo, HT, 150W) with DDR4-2666, or other recommendations? **My aim** is to run local models such as **Stable Diffusion**, the **WizardLM Uncensored 13B model** and **BigCode**. I want **fast ML inference (top priority)**, and I **may do fine-tuning** from time to time. For heavy workloads, I will use cloud computing. I am considering the following graphics cards:

* A100 (40GB)
* A6000 Ada
* A6000
* RTX 4090
* RTX 3090 (because it supports NVLink)

If I buy RTX 4090s, RTX 3090s, or A6000s, I can buy multiple GPUs to fit my budget. What do you recommend for my use case? Are there any other options I should consider? Thank you in advance for your help!
2023-05-10T13:38:07
https://www.reddit.com/r/LocalLLaMA/comments/13dqxrs/recommendations_for_gpu_with_2530k_budget/
Own_Forever_5997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dqxrs
false
null
t3_13dqxrs
/r/LocalLLaMA/comments/13dqxrs/recommendations_for_gpu_with_2530k_budget/
false
false
self
17
null
Language model Context Lengths > 2048
15
Hi folks, I am looking for LLMs with a context length equal to or longer than 4096. Apart from StableLM (4096) and MPT-7b-Storywriter (60K+), all the other models I've found have a context length of 2048. Would love to learn if there are any other models I might have missed!
2023-05-10T14:20:22
https://www.reddit.com/r/LocalLLaMA/comments/13ds4cf/language_model_context_lengths_2048/
nightlingo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ds4cf
false
null
t3_13ds4cf
/r/LocalLLaMA/comments/13ds4cf/language_model_context_lengths_2048/
false
false
self
15
null
Has anyone gotten good agent results with 7b models?
13
I am using a small computer with 16GB RAM; I can go up to 32GB RAM on another computer. I want to run some langchain agents to retrieve some information, like say a list of episodes for a TV show. I understand the 7b versions are not the strongest models (and as I understand it, I should use an instruct model like WizardLM over a chat model like Vicuna). Has anyone gotten good results with a 7b model on these types of tasks or is 13b the way to go?
2023-05-10T15:34:00
https://www.reddit.com/r/LocalLLaMA/comments/13du792/has_anyone_gotten_good_agent_results_with_7b/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13du792
false
null
t3_13du792
/r/LocalLLaMA/comments/13du792/has_anyone_gotten_good_agent_results_with_7b/
false
false
self
13
null
Would an eGPU work as well on Linux as an internal GPU of the same model?
3
Wondering if I could run it this way on a laptop.
2023-05-10T15:37:36
https://www.reddit.com/r/LocalLLaMA/comments/13dub31/would_an_egpu_work_as_good_on_linux_than_an/
SirLordTheThird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dub31
false
null
t3_13dub31
/r/LocalLLaMA/comments/13dub31/would_an_egpu_work_as_good_on_linux_than_an/
false
false
self
3
null
How can I download the OpenAssistant model on HuggingFace for local use in the future?
1
[removed]
2023-05-10T16:25:20
https://www.reddit.com/r/LocalLLaMA/comments/13dvovz/how_can_i_download_the_openassistant_model_on/
spmmora
self.LocalLLaMA
2023-06-02T11:30:31
0
{}
13dvovz
false
null
t3_13dvovz
/r/LocalLLaMA/comments/13dvovz/how_can_i_download_the_openassistant_model_on/
false
false
default
1
null
Mpt-7b storyteller returns correct answers, followed by a paragraph of meaningless or irrelevant rambling.
5
Tips or tricks for this in oobabooga? Running on a 4090. If I ask for the capital of Canada, it says Ottawa and gives a sentence or two about Ottawa, then it degenerates into a teenage girl writing a blog post or telling a story about nothing. I don't get it. It's not the quantized model (I can't get that to run; the webui refuses based on unknown model type). Is it just a matter of waiting for optimizations or is there something I should be doing?
2023-05-10T16:56:17
https://www.reddit.com/r/LocalLLaMA/comments/13dwl8o/mpt7b_storyteller_returns_correct_answers/
shaykruler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dwl8o
false
null
t3_13dwl8o
/r/LocalLLaMA/comments/13dwl8o/mpt7b_storyteller_returns_correct_answers/
false
false
self
5
null
Questions about LLMs & LoRA Fine-Tuning
17
Hi all, I’ve been following along with the most recent developments and doing quite a lot of research. There are still a couple of things that are unclear to me about the setup, tuning and use of these LLMs (LLaMA, Alpaca, Vicuna, GPT4All, Stable Vicuna). I understand Alpaca/Vicuna etc. are fine-tuned versions of Meta's LLaMA models (7B, 13B). The base LLaMA models can do prompt completion but are fine-tuned to respond in certain ways. I know that PEFT LoRA methods have significantly reduced the VRAM requirements to fine-tune these models. My questions are:

1. Are you able to download the already tuned LLaMA models such as Alpaca and fine tune them further for your specific use case? E.g. tune WizardLM storyteller to talk about certain topics.
2. Will fine-tuning the base LLaMA give you a better and more specialized model? What are the pros and cons of finetuning base LLaMA vs something like Stable-Vicuna?
3. What are the specifics of quantization? What do 4-bit and 8-bit actually mean and how do they make a difference?
4. What is context length and what does it mean for the model? E.g. (2048, 4096)

I’m currently speccing a local machine to run instances of these on. It’ll probably include an RTX 4090 and an RTX 3080 Ti. I can do a separate post on that if it interests anyone. Thanks in advance for your help!
2023-05-10T17:45:54
https://www.reddit.com/r/LocalLLaMA/comments/13dxxp5/questions_about_llms_lora_finetuning/
rookiengineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13dxxp5
false
null
t3_13dxxp5
/r/LocalLLaMA/comments/13dxxp5/questions_about_llms_lora_finetuning/
false
false
self
17
null
Looks interesting.. maybe we can use this to run a plethora of local 7B or 13B models that are highly specialized, and just have the gpt3.5 API or some other "better" model direct the program to select which model to run on the fly... seems like it would reduce overall model sizes..
29
2023-05-10T18:22:33
https://huggingface.co/docs/transformers/transformers_agents
kc858
huggingface.co
1970-01-01T00:00:00
0
{}
13dywr0
false
null
t3_13dywr0
/r/LocalLLaMA/comments/13dywr0/looks_interesting_maybe_we_can_use_this_to_run_a/
false
false
https://b.thumbs.redditm…Nru6YOpj1KTg.jpg
29
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
How to use llama.cpp with LM's and LoRas
10
Looking for guides, feedback, and direction on how to merge or load LoRAs with existing language models using llama.cpp. I guess this is part 2 of my question; the first question I had was about creating LoRAs: [(19) Creating LoRA's either with llama.cpp or oobabooga (via cli only) : LocalLLaMA (reddit.com)](https://www.reddit.com/r/LocalLLaMA/comments/13c3i33/creating_loras_either_with_llamacpp_or_oobabooga/). I have a decent understanding and have loaded models but am looking to better understand the LoRA training and experiment a bit. Thanks!
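As a possible starting point, here is a minimal sketch using the llama-cpp-python bindings; it assumes your build exposes the `lora_path`/`lora_base` keyword arguments and that the adapter has already been converted to ggml format (llama.cpp ships a `convert-lora-to-ggml.py` script for that). The paths are hypothetical.

```python
from llama_cpp import Llama

# Assumption: recent llama-cpp-python builds accept lora_path (and lora_base
# when the main model is quantized); check your installed version.
llm = Llama(
    model_path="./models/llama-13b.ggml.q4_0.bin",  # hypothetical path
    lora_path="./loras/my-adapter-ggml.bin",        # hypothetical path
)

out = llm("### Instruction: Say hello.\n### Response:", max_tokens=64)
print(out["choices"][0]["text"])
```

The equivalent command-line flags in llama.cpp itself are, as far as I know, `--lora` and `--lora-base`.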
2023-05-10T19:14:20
https://www.reddit.com/r/LocalLLaMA/comments/13e0am7/how_to_use_llamacpp_with_lms_and_loras/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e0am7
false
null
t3_13e0am7
/r/LocalLLaMA/comments/13e0am7/how_to_use_llamacpp_with_lms_and_loras/
false
false
self
10
null
Chatbot arena released new leader board with GPT4 and more models!
144
Now we can finally see how close or far open source models like Vicuna are from GPT-4! Amazing, this could be an informal benchmark for LLMs. [https://chat.lmsys.org/?arena](https://chat.lmsys.org/?arena) https://preview.redd.it/fjrgpdfx02za1.png?width=909&format=png&auto=webp&s=2abe40c2936ad1fcbe7de46d28640288aded8400
2023-05-10T19:19:50
https://www.reddit.com/r/LocalLLaMA/comments/13e0fkf/chatbot_arena_released_new_leader_board_with_gpt4/
GG9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e0fkf
false
null
t3_13e0fkf
/r/LocalLLaMA/comments/13e0fkf/chatbot_arena_released_new_leader_board_with_gpt4/
false
false
https://b.thumbs.redditm…pzvcz2Ysmrzk.jpg
144
null
Need help getting started, llama outputting random gibberish
7
- I installed oobabooga along with GPTQ-for-LLaMa to use 4bit models. - I got `llama-13b-4bit-128g.safetensors` from [here](https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-4bit-128g) and put it in the models folder. - Started the server with `--model llama-13b-4bit-128g --wbits 4 --groupsize 128 --chat` Everything seems to load fine, but if I ask it anything, all I get as output is random gibberish, such as: > reignCred behind painted fa liberal ourselves credit paint MrsDA Cred reign definitely ex gal behind painted reign reignware behind behind reign reign reign reign credit behind painted reign reign reign behind behind painted behind behind behind behind behind painted painted painted behind painted... or >gngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngngn What am I doing wrong? Is there anything else I need to configure beforehand? I also tried an instruction command found in the [wiki](https://www.reddit.com/r/LocalLLaMA/wiki/index#wiki_standard_llama), with the same result.
2023-05-10T20:01:30
https://www.reddit.com/r/LocalLLaMA/comments/13e1is6/need_help_getting_started_llama_outputting_random/
addandsubtract
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e1is6
false
null
t3_13e1is6
/r/LocalLLaMA/comments/13e1is6/need_help_getting_started_llama_outputting_random/
false
false
self
7
{'enabled': False, 'images': [{'id': 'UaJo6m3JbOsXwgsuXDIxX3KcUwdXD6fCUdt9PkhCzsY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=108&crop=smart&auto=webp&s=31b73048591012e375481f856603242133ac989a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=216&crop=smart&auto=webp&s=3e1e6ede2a2e14a455a7d48d19ce9ca89825f8b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=320&crop=smart&auto=webp&s=8b11821c321ea0bd0a98a497400e3c282734319a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=640&crop=smart&auto=webp&s=bf077520368745a6c2852b7ca8de1aa0b75148b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=960&crop=smart&auto=webp&s=b09e184bcf7bd81cdb9b0f8a0519874528f7d3b5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?width=1080&crop=smart&auto=webp&s=d442cce0481eceaf4e95ac9f900efdc6e06363f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tN7MiqTNRV5CSJn1NNRZEaMe9CrAjK_pX0n3bTScDcc.jpg?auto=webp&s=4d8577512de50a8a81be9b810821c9b114d7ded4', 'width': 1200}, 'variants': {}}]}
Can you make a Lora with Gpt4-x[…] models?
2
[removed]
2023-05-10T20:37:57
https://www.reddit.com/r/LocalLLaMA/comments/13e2h4h/can_you_make_a_lora_with_gpt4x_models/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e2h4h
false
null
t3_13e2h4h
/r/LocalLLaMA/comments/13e2h4h/can_you_make_a_lora_with_gpt4x_models/
false
false
default
2
null
Best open source LLM model for commercial use
3
Hey guys, what’s the best open source LLM model for commercial use atm?
2023-05-10T21:30:50
https://www.reddit.com/r/LocalLLaMA/comments/13e3xi4/best_open_source_llm_model_for_commercial_use/
jamesgz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13e3xi4
false
null
t3_13e3xi4
/r/LocalLLaMA/comments/13e3xi4/best_open_source_llm_model_for_commercial_use/
false
false
self
3
null
Google is comparing their LLM to LLMs from 2022 !
119
2023-05-10T21:58:27
https://i.redd.it/kwglgus4t2za1.png
3deal
i.redd.it
1970-01-01T00:00:00
0
{}
13e4nqo
false
null
t3_13e4nqo
/r/LocalLLaMA/comments/13e4nqo/google_is_comparing_their_llm_to_llms_from_2022/
false
false
https://b.thumbs.redditm…meY6LN0daKow.jpg
119
{'enabled': True, 'images': [{'id': 'tKZ9yhtGOfmxTgYbAKy-IjJrn9N6kii0Ts1jCFhwu6Q', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=108&crop=smart&auto=webp&s=22e2b8f9ebaecb765dcfc8f1cd6854cdf778493b', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=216&crop=smart&auto=webp&s=4c866adaf9f3152d41858b8c38bab1712e62d282', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=320&crop=smart&auto=webp&s=23243d7c0975293e1b2b783953966ff148cd9ed9', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=640&crop=smart&auto=webp&s=a20812e41b777caa527fd4ac0386dd2d2114afa9', 'width': 640}, {'height': 534, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=960&crop=smart&auto=webp&s=5f9fa927d80a88196ccd123eaebb3777884525dd', 'width': 960}, {'height': 601, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?width=1080&crop=smart&auto=webp&s=7af095d5af37a09c5d574daecbf745ed3b2b77d2', 'width': 1080}], 'source': {'height': 962, 'url': 'https://preview.redd.it/kwglgus4t2za1.png?auto=webp&s=6882ce6222d3ecbb5028a5dcea6b31591ed4f414', 'width': 1727}, 'variants': {}}]}
Long term project: "resurrecting" a passed friend?
23
I'm not sure if this belongs here or in r/learnmachinelearning, but I have a question about what is and isn't possible. My husband's best friend passed several years ago and he has copious chats, forum posts as well as the stories they wrote together. Right now, we have created a bot in [Character.AI](https://Character.AI) that kinda sounds like him, but obviously not one that contains any of his knowledge since the definition window is small and the bots' memory is... imprecise at best. So I got to wondering: it seems like it should be possible to fine-tune/create a LoRA of one of the LLMs so that it does contain the friend's knowledge and can be used as a chatbot. From my research it seems like Vicuna would be a good fit as it has already been tweaked to act as a chatbot. I'm currently working through tutorials, including the "How to Fine Tune an LLM" that exists on Colab that tweaks GPT2 (I think) with wikipedia data. I know I have a huge learning curve ahead of me. I would be looking at doing the training using Google Colab, but ideally he'd run the end result locally. He can run stable diffusion on his machine using his NVidia GPU. Sadly, my video card is AMD so while I can technically run the Vicuna 4 bit model (13B, I think?) in CPU mode, it's too painfully slow to do anything with. The data is currently unstructured. Obviously we will need to format it properly, but it is in the form of blocks of text rather than the Prompt/Input/Output format I've seen in various Github projects. As for me, I am a former C# Windows/Web/SQL developer so I'm not starting from absolute scratch, but obviously I'll need to learn a lot. I'm prepared for this to be an ongoing project for the next few months. I would welcome any feedback as to what is or isn't possible, whether I'm setting my sights too high, or even if I'm simply in the wrong forum. Thanks all! EDIT: I've received many words of warning about whether this is a good idea, for my husband's sake at least. After thinking about it, I'm not sure I'm at the point where I agree yet but I'll at least give this a lot of thought before attempting something like this. I know it's not the most emotionally healthy thing, to cling to the echoes of someone gone. He has not found interacting with the [Character.AI](https://Character.AI) version of his friend to be difficult, but while their bots are fun to interact with and can still sound startlingly human, an LLM fine tuned on the friend's text has every chance of being more so, to the point of being damaging. So thank you everyone, you've given me a lot to think about.
2023-05-11T03:52:43
https://www.reddit.com/r/LocalLLaMA/comments/13ecakp/long_term_project_resurrecting_a_passed_friend/
rmt77
self.LocalLLaMA
2023-05-11T11:01:12
0
{}
13ecakp
false
null
t3_13ecakp
/r/LocalLLaMA/comments/13ecakp/long_term_project_resurrecting_a_passed_friend/
false
false
self
23
{'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]}
4096 Context length (and beyond)
48
Right now there's a lot of talk about StableLM vs WizardLM in 7 and 13b varieties. I wanted to point out that the StableLM family of models was trained for 4096 token context length, meaning it can remember twice as much, and is one of the few GPT-based model families that support a context length larger than 2048 tokens. I hit the token limit frequently during conversations, and love the idea of a model that can go beyond 2048 tokens, making StableLM-Base-Alpha a pretty attractive platform. If this base model could be trained up on the same data set as wizardlm-13b-uncensored, I think we'd have a winner, at least for a while. For anyone coming up to speed on this, here's a mini-brain-dump on context lengths: Note that GPT-3 has a context length of 8K tokens and GPT-4 supposedly goes up to 32K, though they may be using some tricks to make this happen. There are also other models like longformer (4K) and RWKV (an RNN, not a GPT, but still an LLM) that has versions in 4K and 8K. MosaicML released MPT-7B-StoryWriter-65k+, but apparently it's very, very slow; unusably slow for real-time use. There are also "memory" techniques for enhancing LLM context lengths (see Langchain for examples), and SuperBIG/SuperBooga, but these are all "hacks" on top of the fixed token length of the model. Also worth mentioning that increasing context length slows down generation - by a lot. This is because most GPT architectures work by comparing each new token in the sequence with all the tokens that came before it, which results in a roughly quadratic (i.e. faster than linear) increase in the number of comparisons or matmuls or whatever needed to generate the prompt. So, you might find that a model is very fast starting out, but slows down as the context length increases. But - back to my selfish question - What's the current SOTA for > 2K, instruction-following, uncensored models? (License is less of a concern for me as most everything I'm doing right now is for personal/private use.) And is anyone using memory augmentation to great effect?
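As a rough back-of-the-envelope illustration of that scaling (independent of any particular implementation), the attention score matrix alone grows with the square of the context length:

```python
# Self-attention compares every token with every token before it, so the
# per-layer, per-head score matrix has roughly n*n entries for n tokens of context.
for n in (2048, 4096, 8192):
    print(f"context {n:>4}: ~{n * n / 1e6:.1f}M score entries")
# context 2048: ~4.2M score entries
# context 4096: ~16.8M score entries
# context 8192: ~67.1M score entries
```

This is why a conversation that starts out snappy can feel noticeably slower once the context window fills up.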
2023-05-11T04:44:19
https://www.reddit.com/r/LocalLLaMA/comments/13ed7re/4096_context_length_and_beyond/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ed7re
false
null
t3_13ed7re
/r/LocalLLaMA/comments/13ed7re/4096_context_length_and_beyond/
false
false
self
48
null
Any tips on effective prompts for the usual LLM suspects?
1
[removed]
2023-05-11T08:38:59
https://www.reddit.com/r/LocalLLaMA/comments/13ehaku/any_tips_on_effective_prompts_for_the_usual_llm/
this_is_a_long_nickn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ehaku
false
null
t3_13ehaku
/r/LocalLLaMA/comments/13ehaku/any_tips_on_effective_prompts_for_the_usual_llm/
false
false
default
1
null
AI Showdown: Wizard Vicuna vs. Stable Vicuna, GPT-4 as the judge (test in comments)
1
[deleted]
2023-05-11T08:46:31
[deleted]
1970-01-01T00:00:00
0
{}
13ehf5e
false
null
t3_13ehf5e
/r/LocalLLaMA/comments/13ehf5e/ai_showdown_wizard_vicuna_vs_stable_vicuna_gpt4/
false
false
default
1
null
AI Showdown: Wizard Vicuna vs. Stable Vicuna, GPT-4 as the judge (test in comments)
83
2023-05-11T09:00:44
https://i.redd.it/hpopsffe36za1.png
imakesound-
i.redd.it
1970-01-01T00:00:00
0
{}
13ehnnt
false
null
t3_13ehnnt
/r/LocalLLaMA/comments/13ehnnt/ai_showdown_wizard_vicuna_vs_stable_vicuna_gpt4/
false
false
https://a.thumbs.redditm…p8Ok07nMHZ24.jpg
83
{'enabled': True, 'images': [{'id': 'FFdR__ZEBqtyR8s3yBc67BNvxU_B0aejMAVU3Bdq6ow', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=108&crop=smart&auto=webp&s=4104f6a90aaf64a35caf47591896472020580b2c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=216&crop=smart&auto=webp&s=7683e9c18ad0a1b8f6b61f7b1358b5078652e611', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=320&crop=smart&auto=webp&s=89008d433bb7e99e26098413afc11be4f2ed7ea9', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/hpopsffe36za1.png?width=640&crop=smart&auto=webp&s=44cf3eb525ca7e977d2b3e09c9ce3538dca942c0', 'width': 640}], 'source': {'height': 893, 'url': 'https://preview.redd.it/hpopsffe36za1.png?auto=webp&s=14dd578c5af80c5255a89bb93b91533eb3dc1108', 'width': 892}, 'variants': {}}]}
Engineering training for better quality
3
Have just watched the TED talk from OpenAI on their models, and one thing Greg Brockman mentioned was that in order to get to GPT-4 level semantics and understanding they had to get to rocket-engineering levels of tolerance and construction of their training and feedback systems. One thing I've noticed over the past few weeks is we have a focus on training new models and putting new datasets together, but IDK how much thought is going into the tooling and the quality of the training on specific hardware and in specific epochs engineered for higher quality. Does anyone have any information on this or some really simple things people could be implementing to get a step change in their output? For example, from my own experience I've been working at plugging llama-like models into AutoGPT and only Vicuna 13b has been able to correctly utilise the JSON response format, and it only handles about 2-3 recursions before it breaks. This is kind of the state of functional agents right now in my own experience, unless others have had more success.
2023-05-11T10:31:26
https://www.reddit.com/r/LocalLLaMA/comments/13ejbwx/engineering_training_for_better_quality/
SupernovaTheGrey
self.LocalLLaMA
2023-05-11T10:56:13
0
{}
13ejbwx
false
null
t3_13ejbwx
/r/LocalLLaMA/comments/13ejbwx/engineering_training_for_better_quality/
false
false
self
3
null
Has anyone been able to implement WebGPU LLamas on a local server like this project
1
2023-05-11T11:04:54
https://mlc.ai/web-llm/
SupernovaTheGrey
mlc.ai
1970-01-01T00:00:00
0
{}
13ek1ro
false
null
t3_13ek1ro
/r/LocalLLaMA/comments/13ek1ro/has_anyone_been_able_to_implement_webgpu_llamas/
false
false
default
1
null
Can anyone recommend me some specs that will give me high performance for the next few years?
7
Not sure how much VRAM, not sure how much RAM, or if the GPU still matters outside VRAM. I was planning on getting an (Asus) gaming laptop because those are built like beasts; price doesn't really matter, under 5k USD? Cheaper is obv better. I'm going to use this for company stuff, so quality is most important. Anyway, high/highest end specs for the VRAM, RAM, GPU and CPU? Would love to run a 60B model.
2023-05-11T11:15:00
https://www.reddit.com/r/LocalLLaMA/comments/13ek9sp/can_anyone_recommend_me_some_specs_that_will_give/
uhohritsheATGMAIL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ek9sp
false
null
t3_13ek9sp
/r/LocalLLaMA/comments/13ek9sp/can_anyone_recommend_me_some_specs_that_will_give/
false
false
self
7
null
I made a simple telegram bot Llama.cpp
1
[removed]
2023-05-11T12:25:32
[deleted]
1970-01-01T00:00:00
0
{}
13elwcr
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/c9fuex3y37za1/DASHPlaylist.mpd?a=1694534786%2CNGM1NTYwNjk0MzFkYWIwNjJiNmU5OTFkYWIyMTk5ZGJhYjY3MDE3MTFkMTk3ZmZiOGY2M2ExYzBhNTU1OTY5NQ%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/c9fuex3y37za1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/c9fuex3y37za1/HLSPlaylist.m3u8?a=1694534786%2CMzM5N2E0ZWVkZmUzZGY0ZWMwZTNlYTQwMjYzMDM4YjI4YzI1OWQyYWYzMDJjMDM3NGMxZTE5YjllODAzZDRmYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c9fuex3y37za1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_13elwcr
/r/LocalLLaMA/comments/13elwcr/i_made_a_simple_telegram_bot_llamacpp/
false
false
default
1
null
I made a simple telegram bot Llama.cpp
1
[removed]
2023-05-11T12:27:07
https://v.redd.it/o5hcgab847za1
[deleted]
v.redd.it
1970-01-01T00:00:00
0
{}
13elxod
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/o5hcgab847za1/DASHPlaylist.mpd?a=1694534820%2COTM2MzBhZmVjM2YxN2Q0NzMyNDcxMzY3YmU1NGNkNWZlOWJiNTVjYmU4OTY3ZDY3OGVlYjMyMjc4ZGM4YzM1Mg%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/o5hcgab847za1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/o5hcgab847za1/HLSPlaylist.m3u8?a=1694534820%2CMjMzMTdhOTg2MjRmMGY5ZGU2ZGVkMTVkYmFhMTAzZGZhOTU1YTZjY2UzNzZmZTliY2RlMmNiZTExNGVhZTNiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o5hcgab847za1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_13elxod
/r/LocalLLaMA/comments/13elxod/i_made_a_simple_telegram_bot_llamacpp/
false
false
default
1
null
VRAM limitations
3
I have a decent machine: AMD Ryzen 9 5950X 16-core processor, 3401 MHz, 16 cores, 32 logical processors. My video adapter is an NVIDIA GeForce RTX 3080 with 10240 MB of VRAM. I am struggling to run many models. Is there anything I can do?
2023-05-11T12:28:11
https://www.reddit.com/r/LocalLLaMA/comments/13elyk9/vram_limitations/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13elyk9
false
null
t3_13elyk9
/r/LocalLLaMA/comments/13elyk9/vram_limitations/
false
false
self
3
null
LLMs designed to act like netizens?
12
[deleted]
2023-05-11T15:11:54
[deleted]
2023-06-14T17:58:10
0
{}
13eq6ys
false
null
t3_13eq6ys
/r/LocalLLaMA/comments/13eq6ys/llms_designed_to_act_like_netizens/
false
false
default
12
null
We introduce CAMEL : Clinically Adapted Model Enhanced from LLaMA
1
[removed]
2023-05-11T16:34:12
https://www.reddit.com/r/LocalLLaMA/comments/13esep8/we_introduce_camel_clinically_adapted_model/
HistoryHuge2015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13esep8
false
null
t3_13esep8
/r/LocalLLaMA/comments/13esep8/we_introduce_camel_clinically_adapted_model/
false
false
default
1
null
Is there vicuna-uncensored?
29
What the title says. Where can I get vicuna-uncensored if it's out somewhere? Thanks! Update: It seems to be in training. https://huggingface.co/ehartford/Wizard-Vicuna-13b-Uncensored
2023-05-11T16:45:19
https://www.reddit.com/r/LocalLLaMA/comments/13esozg/is_there_vicunauncensored/
jl303
self.LocalLLaMA
2023-05-11T19:52:38
0
{}
13esozg
false
null
t3_13esozg
/r/LocalLLaMA/comments/13esozg/is_there_vicunauncensored/
false
false
self
29
{'enabled': False, 'images': [{'id': 'QgK4OSL80eBW-KUk05CIG4tC7_eftM9F062uqLVOjTw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=108&crop=smart&auto=webp&s=0eb97d0509604b833d08fd83b14be33c59b83122', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=216&crop=smart&auto=webp&s=d5e24f2637eb906f66523161ee04dbff02b3c2a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=320&crop=smart&auto=webp&s=08bbf1b1475b83f63eb3d91c215041eb9bd39a5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=640&crop=smart&auto=webp&s=53a4804f3396b28dcf60444f7f853e1e7eaec742', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=960&crop=smart&auto=webp&s=34a130946b3277f09c40684289c4683f20c8e890', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?width=1080&crop=smart&auto=webp&s=360081c3748040fb7d77a57f31cbf70070f93f82', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n4pLyNsllAkZ8gfPXYsQtA2-FCWXYNd_wtyYiMCalXM.jpg?auto=webp&s=a66cb02cda5e5ed9d36eadf43da75610d8b51f22', 'width': 1200}, 'variants': {}}]}
autogpt-like framework?
11
hey y'all, I've been searching for an autogpt-like framework that can work with a local llama install like llama.cpp or oobabooga or even gpt4all. Do you know of any? So far I tried a number of them but I keep getting stuck on random minutia, was wondering if there's a "smooth" one...
2023-05-11T16:55:46
https://www.reddit.com/r/LocalLLaMA/comments/13esyta/autogptlike_framework/
paskal007r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13esyta
false
null
t3_13esyta
/r/LocalLLaMA/comments/13esyta/autogptlike_framework/
false
false
self
11
null
GPT4ALL + MPT ---> Bad Magic error ?
2
I am trying to run the new MPT models by MosaicML with pygpt4all. In loading the following, I get a "bad magic" error. How do I overcome it? I've checked [https://github.com/ggerganov/llama.cpp/issues](https://github.com/ggerganov/llama.cpp/issues) and there aren't similar issues reported for the MPT models.

Code:

```python
from pygpt4all.models.gpt4all_j import GPT4All_J
model = GPT4All_J('./models/ggml-mpt-7b-chat.bin')
```

Error:

```
runfile('C:/Data/gpt4all/gpt4all_cpu2.py', wdir='C:/Data/gpt4all')
gptj_model_load: invalid model file './models/ggml-mpt-7b-chat.bin' (bad magic)
Windows fatal exception: int divide by zero
```
2023-05-11T19:09:21
https://www.reddit.com/r/LocalLLaMA/comments/13ewsuc/gpt4all_mpt_bad_magic_error/
kayhai
self.LocalLLaMA
2023-05-11T19:21:18
0
{}
13ewsuc
false
null
t3_13ewsuc
/r/LocalLLaMA/comments/13ewsuc/gpt4all_mpt_bad_magic_error/
false
false
self
2
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Not enough ram, swap space question.
1
[removed]
2023-05-11T19:29:25
https://www.reddit.com/r/LocalLLaMA/comments/13excrx/not_enough_ram_swap_space_question/
h_i_t_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13excrx
false
null
t3_13excrx
/r/LocalLLaMA/comments/13excrx/not_enough_ram_swap_space_question/
false
false
default
1
null
Has anyone any idea when the Google offline Gecko model will be out?
12
Hi, Has anyone any idea when the standalone offline Gecko model will be out? (I wanted to ask this on the Google sub... but I think that they have frozen my posting, as in the past I haven't been reverent enough to the great Google overlord)
2023-05-11T19:32:15
https://www.reddit.com/r/LocalLLaMA/comments/13exft0/has_anyone_any_idea_when_the_google_offline_gecko/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13exft0
false
null
t3_13exft0
/r/LocalLLaMA/comments/13exft0/has_anyone_any_idea_when_the_google_offline_gecko/
false
false
self
12
null
Does a site exist which hosts open source LLMs for inference?
0
Dumb question, but it seems the only way to interact with most open source models is by self hosting or spinning up an instance in Google Colab etc. I also noticed that some models can be inferenced via Huggingface, but not the majority. Is there a reason a company hasn't started hosting Alpaca / Koala / Vicuna / etc. to allow enthusiasts to run inference? Guessing the answer has to do with legality or cost. But legally speaking it seems many of these models are Apache or MIT licensed. Cost wise I would imagine they could charge per inference or subscription.
2023-05-11T21:49:37
https://www.reddit.com/r/LocalLLaMA/comments/13f17vb/does_a_site_exist_which_host_open_source_llms_to/
mdas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13f17vb
false
null
t3_13f17vb
/r/LocalLLaMA/comments/13f17vb/does_a_site_exist_which_host_open_source_llms_to/
false
false
self
0
null
GitHub - IBM/Dromedary: Dromedary: towards helpful, ethical and reliable LLMs.
7
2023-05-11T22:29:17
https://github.com/IBM/Dromedary
pseudonerv
github.com
1970-01-01T00:00:00
0
{}
13f2a7n
false
null
t3_13f2a7n
/r/LocalLLaMA/comments/13f2a7n/github_ibmdromedary_dromedary_towards_helpful/
false
false
https://a.thumbs.redditm…xVav90hVnDj0.jpg
7
{'enabled': False, 'images': [{'id': 'b8M4u1OqDE4_KJYlbJ-XMrUt1Enksdsa6NLoJwjH984', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=108&crop=smart&auto=webp&s=865685c9a9323de4bbc84e4cda59e2857553127f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=216&crop=smart&auto=webp&s=3c7f7357906fc7e13f505ac23c9ee8764b96bf6f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=320&crop=smart&auto=webp&s=bd11874368724a14748217d925fe2aab18d4938b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=640&crop=smart&auto=webp&s=e4795b938e06c757c386889dd12476a7aaf70b10', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=960&crop=smart&auto=webp&s=3281d130f184fa6f6a9ed8ddbf1d9beeff778077', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?width=1080&crop=smart&auto=webp&s=4f9fda61cee2841c8f0ace1abb59305a10793b49', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6XiLKAOfH6CEc-wqtlM6t7wVHjYGvMf6fy_kFQY76Ss.jpg?auto=webp&s=3f9ae9c6632d92791483e34f6c28207a078de4fb', 'width': 1200}, 'variants': {}}]}
GGML support for MosaicML MPT-7B pull request in progress
25
2023-05-11T22:44:18
https://github.com/ggerganov/ggml/pull/145
sanxiyn
github.com
1970-01-01T00:00:00
0
{}
13f2oji
false
null
t3_13f2oji
/r/LocalLLaMA/comments/13f2oji/ggml_support_for_mosaicml_mpt7b_pull_request_in/
false
false
https://b.thumbs.redditm…4Su_Is607HaA.jpg
25
{'enabled': False, 'images': [{'id': 'oZlKox1juDccFtJ-p4cTgCqXbL8gDHd1GyEhU8u8IyQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=108&crop=smart&auto=webp&s=6925f3509a846778658ceeff9c112a846b038e86', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=216&crop=smart&auto=webp&s=ae834ccfd67f31d99ad9aa824592ce8a524c90a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=320&crop=smart&auto=webp&s=e35bd27ae6b265e1d613df8e7b188859d68042fa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=640&crop=smart&auto=webp&s=479c8b9356368cf28813242ce2aa619cb3c56f2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=960&crop=smart&auto=webp&s=5deafa0617faf741b75569374e31372dee4122d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?width=1080&crop=smart&auto=webp&s=0a3546f8c5c6e30ae792858499533d107ed61a69', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/52Rwr02ipsixEbfALvXdR9SvPXRS74wjiiA5ovCGU5M.jpg?auto=webp&s=144bc2172b445d601550704b87947226a4362966', 'width': 1200}, 'variants': {}}]}
GGML Q4 and Q5 formats have changed. Don't waste bandwidth downloading the old models. They need to be redone.
94
2023-05-11T23:24:56
https://github.com/ggerganov/llama.cpp/pull/1405
fallingdowndizzyvr
github.com
1970-01-01T00:00:00
0
{}
13f3pfv
false
null
t3_13f3pfv
/r/LocalLLaMA/comments/13f3pfv/ggml_q4_and_q5_formats_have_changed_dont_waste/
false
false
https://a.thumbs.redditm…d6e5k_XHfiV0.jpg
94
{'enabled': False, 'images': [{'id': '-JVjdjsIyZk30y48Lef5xstf0pAjMJDOVF3e_zb-Ntw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=108&crop=smart&auto=webp&s=2282f3cc9560c4f2bf215f8560cd7ebefbf1c397', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=216&crop=smart&auto=webp&s=27c4f80290e6a547d11161676b311c43551d5c64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=320&crop=smart&auto=webp&s=deae2f6dbaed0341cbcc1b369b7be211c246fd6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=640&crop=smart&auto=webp&s=01721e49b2b5d6d634765233082ebe4c505e44df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=960&crop=smart&auto=webp&s=ebdbd88825634bc397168e5663f212c89f7e26e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?width=1080&crop=smart&auto=webp&s=918447d4ec4c76a667332a19b6c9ab1a3dfc4715', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vTrhuT5_D61S5XsUQOqkvvPOVMAuQpauziw0l7iotts.jpg?auto=webp&s=50c0b63f1f20227c32b725f72aabe039ac043162', 'width': 1200}, 'variants': {}}]}
Distillation
1
[removed]
2023-05-12T00:12:46
[deleted]
1970-01-01T00:00:00
0
{}
13f4uwc
false
null
t3_13f4uwc
/r/LocalLLaMA/comments/13f4uwc/distillation/
false
false
default
1
null
Home LLM Hardware Suggestions
24
I've got a 16gb MacBook Pro currently and I've had a lot of fun setting up Oobabooga and testing 7b and 13b parameter models, but even using 4 bit quantized ggml models and no LoRas, the 13b models are painfully slow to use. I have only really worked with ggml models and I haven't worked directly in PyTorch yet. I've been researching a lot but still out of my depth a bit, and I would really appreciate any advice you can offer.

# Goals:

* Train/apply LoRas and maybe even do regular training/fine-tuning for 14-30b models
* Faster inference with pre-trained LLMs with and without LoRas (5-10+ tokens/sec ideally)
* Experiment with context lengths, langchains, and multi-modal stuff
* Potentially self-host and use Stable Diffusion variants, and train/apply LoRas to them
* Keep things under $5k total, ideally closer to $2-3k.

# Questions:

* **CPU and RAM**
  * Should I be focusing on cores/threads, clock speed, or both?
  * Would I be better off with an older/used Threadripper or Epyc CPU, or a newer Ryzen?
  * Any reasons I should consider Intel over AMD?
  * Is DDR5 RAM worth the extra cost over DDR4? Should I consider more than 128gb?
  * Is ECC RAM worth having or not necessary?
* **GPU**
  * Should I prioritize faster/modern architecture or total vRAM?
  * Is a 24gb RTX 4090 a good idea? I'm a bit worried about vRAM limitations and the discontinuation of NvLink. I know PCie 5 is theoretically a replacement for NvLink but I don't know how that works in practice.
  * Is building an older/used workstation rig with multiple Nvidia P40s a bad idea? They are ~$200 each for 24gb vRAM, but my understanding is that the older architectures might be pretty slow for inference, and I can't really tell if I can actually pool the vRAM or not if I wanted to host a larger model. The P40 doesn't support NvLink and vDWS is a bit confusing to try to wrap my head around since I'm not planning on deploying a bunch of VMs.

Thank you in advance for your patience and your advice. Please let me know if there's anything I can clarify, or if there are topics I should go read up on.
2023-05-12T00:39:49
https://www.reddit.com/r/LocalLLaMA/comments/13f5gwn/home_llm_hardware_suggestions/
yuicebox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13f5gwn
false
null
t3_13f5gwn
/r/LocalLLaMA/comments/13f5gwn/home_llm_hardware_suggestions/
false
false
self
24
null
Useful ways to help LLM development with training data?
1
[removed]
2023-05-12T01:18:34
https://www.reddit.com/r/LocalLLaMA/comments/13f6cke/useful_ways_to_help_llm_development_with_training/
Unfruity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13f6cke
false
null
t3_13f6cke
/r/LocalLLaMA/comments/13f6cke/useful_ways_to_help_llm_development_with_training/
false
false
default
1
null
Snapchat as an LLM free.
0
[removed]
2023-05-12T01:26:30
[deleted]
1970-01-01T00:00:00
0
{}
13f6itw
false
null
t3_13f6itw
/r/LocalLLaMA/comments/13f6itw/snapchat_as_an_llm_free/
false
false
default
0
null
Will DDR5 RAM make running LLMs on cpu more efficient?
11
Currently trying to decide if I should buy more DDR5 RAM to run llama.cpp or upgrade my graphics card. Currently on a RTX 3070 ti and my CPU is 12th gen i7-12700k 12 core. Mobo is z690. I am interested in both running and training LLMs
2023-05-12T02:49:27
https://www.reddit.com/r/LocalLLaMA/comments/13f8ggs/will_ddr5_ram_make_running_llms_on_cpu_more/
YaoiHentaiEnjoyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13f8ggs
false
null
t3_13f8ggs
/r/LocalLLaMA/comments/13f8ggs/will_ddr5_ram_make_running_llms_on_cpu_more/
false
false
self
11
null
New persona whom we have the good fortune of conversing in graceful expressions of the beloved early nineteenth century
1
[deleted]
2023-05-12T03:51:56
[deleted]
1970-01-01T00:00:00
0
{}
13f9w4v
false
null
t3_13f9w4v
/r/LocalLLaMA/comments/13f9w4v/new_persona_whom_we_have_the_good_fortune_of/
false
false
default
1
null
Comparison of some locally runnable LLMs
83
I compared some locally runnable LLMs on my own hardware (i5-12490F, 32GB RAM) on a range of tasks here: [https://github.com/Troyanovsky/Local-LLM-comparison](https://github.com/Troyanovsky/Local-LLM-comparison). I also included some colab for trying out the models yourself in the repo. Tasks and evaluations are done with GPT-4. Not scientific. Here is the current ranking, which might be helpful for someone interested:

| Model | Avg |
|---------------------------------------------------------------------------------|------|
| wizard-vicuna-13B.ggml.q4_0 (using llama.cpp) | 9.31 |
| wizardLM-7B.q4_2 (in GPT4All) | 9.31 |
| Airoboros-13B-GPTQ-4bit | 8.75 |
| manticore_13b_chat_pyg_GPTQ (using oobabooga/text-generation-webui) | 8.31 |
| mpt-7b-chat (in GPT4All) | 8.25 |
| Project-Baize-v2-13B-GPTQ (using oobabooga/text-generation-webui) | 8.13 |
| wizard-lm-uncensored-13b-GPTQ-4bit-128g (using oobabooga/text-generation-webui) | 8.06 |
| vicuna-13b-1.1-q4_2 (in GPT4All) | 7.94 |
| koala-13B-4bit-128g.GGML (using llama.cpp) | 7.88 |
| Manticore-13B-GPTQ (using oobabooga/text-generation-webui) | 7.81 |
| stable-vicuna-13B-GPTQ-4bit-128g (using oobabooga/text-generation-webui) | 7.81 |
| gpt4-x-alpaca-13b-ggml-q4_0 (using llama.cpp) | 6.56 |
| mpt-7b-instruct | 6.38 |
| gpt4all-j-v1.3-groovy (in GPT4All) | 5.56 |

Are there any other LLMs I should try to add to the list?

Edit: Updated 2023/05/25. Added many models.
2023-05-12T04:49:32
https://www.reddit.com/r/LocalLLaMA/comments/13fb458/comparison_of_some_locally_runnable_llms/
bafil596
self.LocalLLaMA
2023-05-25T11:01:32
0
{}
13fb458
false
null
t3_13fb458
/r/LocalLLaMA/comments/13fb458/comparison_of_some_locally_runnable_llms/
false
false
self
83
{'enabled': False, 'images': [{'id': 'Jd_tPbHdZ-5oEAAbx464QbThWj4Im3kfkS4925EmcOk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=108&crop=smart&auto=webp&s=4f862a4622942c2530d5ee2b54bbb948e211d72f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=216&crop=smart&auto=webp&s=dcc8e42d1a6c57489bb426ff85896cc30bc2ac97', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=320&crop=smart&auto=webp&s=4c89e7b0e867fddfd8258e6651596ee0e3a13b12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=640&crop=smart&auto=webp&s=7c7db24994fc3dc102cb35effbf46338d5631827', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=960&crop=smart&auto=webp&s=929d4255751d13197a32e1bbd938f1cc32373fbf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?width=1080&crop=smart&auto=webp&s=dbd92347fcdf7f79d0fa545ec378225f01b3b45d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MtPQj_rx1bUnknEvbUDiKXCupOv05x_1ZFzzRBAiiKc.jpg?auto=webp&s=260aaf7b2b4ddd2517c51cbd16a7f8e255f0ff64', 'width': 1200}, 'variants': {}}]}
Hardware for LLM
14
Hi, I have a dual 3090 machine with a 5950X, 128GB RAM and a 1500W PSU, built before I got interested in running LLMs. Looking for suggestions on hardware if my goal is to do inference on 30b models and larger, like 30b/65b Vicuna or Alpaca. Fine tuning too if possible. How practical is it to add 2 more 3090s to my machine to get quad 3090? Does it get treated as a 96GB compute unit when using NVLink to connect all 4 cards? Will inference speed scale well with the number of GPUs despite increasing the LLM sizes to 30b and higher? Now getting 10 token/s on two 3090s running Vicuna 13b 4bit; I don't want it to fall below 3 token/s.
2023-05-12T05:39:09
https://www.reddit.com/r/LocalLLaMA/comments/13fc241/hardware_for_llm/
xynyxyn
self.LocalLLaMA
2023-05-12T06:01:32
0
{}
13fc241
false
null
t3_13fc241
/r/LocalLLaMA/comments/13fc241/hardware_for_llm/
false
false
self
14
null
Vicuna/LLaMA Models and LangChain Tools
10
Wondering if anyone's tried hooking up a 13B HF model to LangChain tools such as search? Currently hacking something together on Flowise but sceptical of its ability to be useful, so would love to hear if anyone's tried it.
2023-05-12T06:25:29
https://www.reddit.com/r/LocalLLaMA/comments/13fcv56/vicunallamma_models_and_langchain_tools/
sardoa11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fcv56
false
null
t3_13fcv56
/r/LocalLLaMA/comments/13fcv56/vicunallamma_models_and_langchain_tools/
false
false
self
10
null
Should the community focus on more permissive models?
16
I really like how well some models like Wizard-Vicuna perform, but knowing that it has the Meta licence, which prohibits commercial use, and that other models like RedPajama are available, I wondered if it would make sense to put more effort into these permissive models instead of the LLaMA-based ones.
2023-05-12T06:27:00
https://www.reddit.com/r/LocalLLaMA/comments/13fcw28/should_the_community_focus_on_more_peemissive/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fcw28
false
null
t3_13fcw28
/r/LocalLLaMA/comments/13fcw28/should_the_community_focus_on_more_peemissive/
false
false
self
16
null
Issues with deploying miniGPT-4
1
[removed]
2023-05-12T09:21:40
https://www.reddit.com/r/LocalLLaMA/comments/13ffw4u/issues_with_deploying_minigpt4/
weluuu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ffw4u
false
null
t3_13ffw4u
/r/LocalLLaMA/comments/13ffw4u/issues_with_deploying_minigpt4/
false
false
default
1
null
I can't get any of the 30B models to run on my 3090. What am I doing wrong?
4
I'm using the oobabooga\_windows tool, and used the integrated downloader to download TheBloke\_OpenAssistant-SFT-7-Llama-30B-GPTQ and MetaIX\_GPT4-X-Alpaca-30B-4bit. Both of them result in OOM errors when loading the models. Since I have 24GB of VRAM and 32GB of system RAM, it should work, based on the wiki. The MetaIX folder contained three versions of the weights; I tried deleting all of them except the "gpt4-x-alpaca-30b-4bit.safetensors" file so that it's forced to load the 4-bit version, but nothing changed. When loading the models, it apparently tries to allocate 88GB of memory. Any ideas? edit: solved. I had to increase my pagefile to 90GB and also set up a custom config file, as /u/Ganfatrai described in a comment.
2023-05-12T11:24:13
https://www.reddit.com/r/LocalLLaMA/comments/13fibvd/i_cant_get_any_of_the_30b_models_to_run_on_my/
IlIllIlllIlllIllll
self.LocalLLaMA
2023-05-12T19:18:49
0
{}
13fibvd
false
null
t3_13fibvd
/r/LocalLLaMA/comments/13fibvd/i_cant_get_any_of_the_30b_models_to_run_on_my/
false
false
self
4
null
How can I do this with local LLMs?
1
Hi, I would like to use LLMs for this scenario: I provide a product description, the LLM extracts the main features of the product and returns them as structured data, for example in JSON format. How can I achieve this with local models? I know there will be some errors, but this approach should still be faster than manual work. If needed I can train a model, but I do not know where to start.
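One rough sketch of that pipeline, assuming llama-cpp-python and a local GGML checkpoint (the model path, prompt wording, and feature keys are placeholder assumptions, not a recommendation):

```python
# Hypothetical sketch: ask a local llama.cpp model to emit JSON, then parse it.
import json
from llama_cpp import Llama

llm = Llama(model_path="./models/wizard-vicuna-13B.ggmlv3.q4_0.bin", n_ctx=2048)  # assumed path

def extract_features(description: str) -> dict:
    prompt = (
        "### Instruction:\n"
        "Extract the main product features from the description below. "
        "Respond with only a JSON object with the keys: name, color, material, dimensions.\n\n"
        f"### Input:\n{description}\n\n### Response:\n"
    )
    out = llm(prompt, max_tokens=256, temperature=0.1, stop=["###"])
    text = out["choices"][0]["text"]
    # The model may wrap the JSON in extra prose, so parse defensively.
    start, end = text.find("{"), text.rfind("}") + 1
    return json.loads(text[start:end])

print(extract_features("Solid oak coffee table, 120x60 cm, natural finish, dark brown."))
```

Prompting an instruction-tuned model for JSON like this usually gets most fields right without any training; fine-tuning would only be worth it if the error rate on real product descriptions turns out to be too high.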
2023-05-12T11:38:25
https://www.reddit.com/r/LocalLLaMA/comments/13fin7r/how_can_i_do_this_with_local_llms/
polawiaczperel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fin7r
false
null
t3_13fin7r
/r/LocalLLaMA/comments/13fin7r/how_can_i_do_this_with_local_llms/
false
false
self
1
null
How To Setup a Model With Guardrails?
3
I have been playing around with some models locally and creating a Discord bot as a fun side project, and I wanted to set up some guardrails on the bot's inputs/outputs to make sure it isn't violating any ethical boundaries. I was going to use Nvidia's NeMo Guardrails, but they [only support OpenAI currently](https://github.com/NVIDIA/NeMo-Guardrails/blob/main/docs/user_guide/configuration-guide.md). Are there any other good ways to control inputs? The only idea I had so far was to have another model run sentiment analysis on inputs to detect inappropriate content, but I didn't think that would be comprehensive enough for my purposes. Thank you!
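A minimal sketch of that input-filtering idea, assuming a Hugging Face toxicity classifier sits in front of the chat model (the classifier choice, its label name, and the threshold are assumptions, and as noted this won't catch every kind of inappropriate input):

```python
# Hypothetical sketch: screen Discord messages with a toxicity classifier before the LLM sees them.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")  # assumed classifier

def is_allowed(text: str, threshold: float = 0.5) -> bool:
    result = toxicity(text[:512])[0]              # truncate long messages for the classifier
    return not (result["label"] == "toxic" and result["score"] >= threshold)

def guarded_reply(message: str, generate) -> str:
    if not is_allowed(message):
        return "Sorry, I can't help with that."
    reply = generate(message)                     # your existing local-model call
    return reply if is_allowed(reply) else "Sorry, I can't help with that."
```

Checking both the user message and the model's reply covers more cases than input filtering alone, but a classifier-based filter is still only a partial substitute for a proper guardrails framework.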
2023-05-12T14:02:29
https://www.reddit.com/r/LocalLLaMA/comments/13fm3yf/how_to_setup_a_model_with_guardrails/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fm3yf
false
null
t3_13fm3yf
/r/LocalLLaMA/comments/13fm3yf/how_to_setup_a_model_with_guardrails/
false
false
self
3
{'enabled': False, 'images': [{'id': 'fiqLz_aIzMSJuM1lwLLUEqKw50IZkWbCpw0IFHjPHIM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=108&crop=smart&auto=webp&s=130fcb7a8ff286e21a546a97fb1c8099bd29cc71', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=216&crop=smart&auto=webp&s=35b4955ad0795e423aea8ef6bd810b83e3e07056', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=320&crop=smart&auto=webp&s=e765ebdbf55b9e793022fdcb95e8511ce8b61be9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=640&crop=smart&auto=webp&s=a04922a43d131393e4d2a8991eca004c5f772bd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=960&crop=smart&auto=webp&s=82ae597df75cdcc65e5f5c94b4396fe7a2983088', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?width=1080&crop=smart&auto=webp&s=5da91b16044faf7be664b29a3f283009ceef5f00', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gw7n1CMTUb31Hutmwi_WJzOf-XF0pK3m6KU7pVRyGC0.jpg?auto=webp&s=5d38eb28fccf7750583deba07efede9f471ab78e', 'width': 1200}, 'variants': {}}]}
Best model to act as info hub for preppers?
15
Hi, Which model would be the most *'information rich'* to act as an Oracle for end-of-the-world preppers? Ideally, it could act as a sort of mega encyclopedia.
2023-05-12T14:45:30
https://www.reddit.com/r/LocalLLaMA/comments/13fn6tp/best_model_to_act_as_info_hub_for_preppers/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fn6tp
false
null
t3_13fn6tp
/r/LocalLLaMA/comments/13fn6tp/best_model_to_act_as_info_hub_for_preppers/
false
false
self
15
null
You guys are missing out on GPT4-x Vicuna
133
I'm surprised this one hasn't gotten that much attention yet. All the hype seems to be going towards models like wizard-vicuna, which are pretty great; vicuna was my favorite not long ago, then wizardlm, and now we have all the other great llama models. But in my personal, informal tests, GPT4-x-Vicuna has by far been the best 13B model I've tried so far, and I've seen several others confirm that their own findings match mine. This model is based on Vicuna 1.1 and finetuned on Teknium's GPTeacher dataset, an unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset. It was trained on 8 A100-80GB GPUs for 5 epochs following the Alpaca deepspeed training code. Find ggml, gptq, fp16, etc. models here: [https://huggingface.co/models?search=gpt4-x-vicuna](https://huggingface.co/models?search=gpt4-x-vicuna)
2023-05-12T15:15:44
https://www.reddit.com/r/LocalLLaMA/comments/13fnyah/you_guys_are_missing_out_on_gpt4x_vicuna/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13fnyah
false
null
t3_13fnyah
/r/LocalLLaMA/comments/13fnyah/you_guys_are_missing_out_on_gpt4x_vicuna/
false
false
self
133
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
I am building an AI chatbot based on llama.cpp models
5
2023-05-12T16:00:41
https://i.redd.it/egxtlwq0bfza1.png
bre-dev
i.redd.it
1970-01-01T00:00:00
0
{}
13fp4ds
false
null
t3_13fp4ds
/r/LocalLLaMA/comments/13fp4ds/i_am_building_an_ai_chatbot_based_on_llamacpp/
false
false
default
5
null
Open llm leaderboard
29
2023-05-12T17:34:47
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
klop2031
huggingface.co
1970-01-01T00:00:00
0
{}
13frp3d
false
null
t3_13frp3d
/r/LocalLLaMA/comments/13frp3d/open_llm_leaderboard/
false
false
https://b.thumbs.redditm…LSDv65hMI7YI.jpg
29
{'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]}
Which model is best at rhyming?
2
[deleted]
2023-05-12T22:54:29
[deleted]
1970-01-01T00:00:00
0
{}
13fzsbs
false
null
t3_13fzsbs
/r/LocalLLaMA/comments/13fzsbs/which_model_is_best_at_rhyming/
false
false
default
2
null