Dataset schema (column: type, observed range):
title: string (length 1-300)
score: int64 (0-8.54k)
selftext: string (length 0-40k)
created: timestamp[ns]
url: string (length 0-780)
author: string (length 3-20)
domain: string (length 0-82)
edited: timestamp[ns]
gilded: int64 (0-2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646-1.8k)
name: string (length 10)
permalink: string (length 33-82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4-213)
ups: int64 (0-8.54k)
preview: string (length 301-5.01k)
Passing embeddings to llama with ctransformers for long term memory
2
Hey guys, sorry if this might sound stupid, but I have been thinking of building an app with long-term memory by caching requests and responses and saving them to a file, with vectors continuously generated on them. What I don't know is how to pass those embeddings to the model using ctransformers.
2023-06-12T23:50:03
https://www.reddit.com/r/LocalLLaMA/comments/148177l/passing_embeddings_to_llama_with_ctransformers/
GOD_HIMSELVES
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148177l
false
null
t3_148177l
/r/LocalLLaMA/comments/148177l/passing_embeddings_to_llama_with_ctransformers/
false
false
self
2
null
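ctransformers only accepts text prompts, so the usual workaround for the post above is to embed the cached exchanges separately, retrieve the most relevant ones, and paste them back into the prompt rather than passing raw vectors to the model. A minimal sketch of that retrieval loop, assuming sentence-transformers for the embeddings; the model path and prompt format are placeholders:

```python
import numpy as np
from ctransformers import AutoModelForCausalLM
from sentence_transformers import SentenceTransformer

llm = AutoModelForCausalLM.from_pretrained("models/llama-7b.ggmlv3.q4_0.bin",  # placeholder path
                                           model_type="llama")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

memory = []          # past "User: ... / Assistant: ..." strings, loaded from your cache file
memory_vecs = None   # their embeddings, kept in the same order

def remember(text):
    global memory_vecs
    memory.append(text)
    vec = embedder.encode([text])
    memory_vecs = vec if memory_vecs is None else np.vstack([memory_vecs, vec])

def ask(question, k=3):
    # Retrieve the k most similar cached exchanges and stuff them into the prompt.
    context = ""
    if memory:
        q = embedder.encode([question])[0]
        sims = memory_vecs @ q / (np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(q))
        context = "\n".join(memory[i] for i in np.argsort(sims)[::-1][:k])
    reply = llm(f"Relevant past conversation:\n{context}\n\nUser: {question}\nAssistant:")
    remember(f"User: {question}\nAssistant: {reply}")
    return reply
```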
Best UI starting out?
1
[removed]
2023-06-13T00:11:39
https://www.reddit.com/r/LocalLLaMA/comments/1481mrp/best_ui_starting_out/
themushroommage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1481mrp
false
null
t3_1481mrp
/r/LocalLLaMA/comments/1481mrp/best_ui_starting_out/
false
false
default
1
null
Best machine setup to last the next couple years?
17
If I wanted to get a rig that would be good for the next couple of years as far as "keeping up" with things like LLaMA, what would you guys suggest as far as specs?
2023-06-13T00:54:31
https://www.reddit.com/r/LocalLLaMA/comments/1482gd5/best_machine_setup_to_last_the_next_couple_years/
ricketpipe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1482gd5
false
null
t3_1482gd5
/r/LocalLLaMA/comments/1482gd5/best_machine_setup_to_last_the_next_couple_years/
false
false
self
17
null
Let's create a 65B benchmark in this thread
52
Please, everyone in this thread, post the specifications of your machine, including the software you are using (e.g. llama.cpp), the format of 65B you are working with (e.g. Q4_K_M), and most importantly the speed (tokens/s preferably). If you are using an Apple device, please also mention how many GPU cores you have. If you are on a PC, please specify your GPU model, CPU model, and memory speed.
2023-06-13T01:09:59
https://www.reddit.com/r/LocalLLaMA/comments/1482r1r/lets_create_a_65b_benchmark_in_this_thread/
Big_Communication353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1482r1r
false
null
t3_1482r1r
/r/LocalLLaMA/comments/1482r1r/lets_create_a_65b_benchmark_in_this_thread/
false
false
self
52
null
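For anyone posting numbers in that thread, tokens/s can be measured by timing a single generation; a tiny sketch with llama-cpp-python, where the model path, thread count, and prompt are placeholders (llama.cpp's own timing printout at the end of a run reports the same figures):

```python
import time
from llama_cpp import Llama

llm = Llama(model_path="models/65b.ggmlv3.q4_K_M.bin", n_threads=8)  # placeholder path/threads

start = time.time()
out = llm("Write a short story about a llama.", max_tokens=256)
elapsed = time.time() - start

n_tokens = out["usage"]["completion_tokens"]  # tokens actually generated
print(f"{n_tokens} tokens in {elapsed:.1f}s = {n_tokens / elapsed:.2f} tokens/s")
```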
Manticore-Falcon-Wizard-Orca-LLaMA
103
2023-06-13T01:10:27
https://i.redd.it/esr1kju1ro5b1.png
PatientWizardTaken
i.redd.it
1970-01-01T00:00:00
0
{}
1482rff
false
null
t3_1482rff
/r/LocalLLaMA/comments/1482rff/manticorefalconwizardorcallama/
false
false
https://b.thumbs.redditm…WfRR9XbnZqzc.jpg
103
{'enabled': True, 'images': [{'id': '2md2kR7hO82mvqFd7sg4Hwm_CcLKUa-5uU3qNEB4Kz0', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=108&crop=smart&auto=webp&s=cd82e66bb6f61e47772928c72f52c18668f4fe1e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=216&crop=smart&auto=webp&s=f1e7a69d1064c4fb2a128786136ddf9a06e8b11b', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?width=320&crop=smart&auto=webp&s=9a1002cc33ea29a179099024f51bba1b1fb95220', 'width': 320}], 'source': {'height': 512, 'url': 'https://preview.redd.it/esr1kju1ro5b1.png?auto=webp&s=6a556f9366c73426c9fea0184347074d9efbc5d0', 'width': 512}, 'variants': {}}]}
If you value the open-source movement, donate to Common Crawl
38
I'm not affiliated at all, I just see the importance. This is one of the most important resources we need to maintain if we're going to keep the open-source AI movement alive. [https://commoncrawl.org/](https://commoncrawl.org/)
2023-06-13T01:49:18
https://www.reddit.com/r/LocalLLaMA/comments/1483hlr/if_you_value_the_opensource_movement_donate_to/
Careful-Temporary388
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1483hlr
false
null
t3_1483hlr
/r/LocalLLaMA/comments/1483hlr/if_you_value_the_opensource_movement_donate_to/
false
false
self
38
null
Is there a way to *partially* stream a prompt into Llama-cpp-Python?
6
I’ve noticed that a lot of the really powerful prompts have an enormous amount of boilerplate text in them that never changes. For example, prompts that focus on in-context learning will give a series of examples that remain static during every evaluation run. Because of that, I’m wondering if there is a way to partially ingest a prompt into Llama-cpp-Python, then wait for further input, and only ingest the last little bit of the prompt once the user submits it? As I understand it, that would really save on the amount of time you must wait before the model starts to output tokens, but I might be wrong.
2023-06-13T02:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1483vrp/is_there_a_way_to_partially_stream_a_prompt_into/
E_Snap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1483vrp
false
null
t3_1483vrp
/r/LocalLLaMA/comments/1483vrp/is_there_a_way_to_partially_stream_a_prompt_into/
false
false
self
6
null
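One way to approximate what the post above asks for with llama-cpp-python is to evaluate the static boilerplate once, snapshot the KV-cache state, and restore it before each new suffix, so only the last little bit of the prompt is ingested per request. A rough sketch under those assumptions; the model path and prompt text are placeholders, and the exact prefix-reuse behavior can vary between versions:

```python
from llama_cpp import Llama

llm = Llama(model_path="models/13b.ggmlv3.q4_0.bin", n_ctx=2048)  # placeholder path

STATIC = "You are a helpful assistant.\n### Examples\n...long, unchanging few-shot examples...\n"

# Ingest the boilerplate once and snapshot the resulting KV cache.
llm.eval(llm.tokenize(STATIC.encode("utf-8")))
snapshot = llm.save_state()

def complete(user_part: str) -> str:
    # Restore the already-ingested prefix; the shared prefix is reused, so only
    # the user's suffix still needs to be evaluated before generation starts.
    llm.load_state(snapshot)
    out = llm(STATIC + user_part, max_tokens=256)
    return out["choices"][0]["text"]
```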
Llama.cpp GPU Offloading Not Working for me with Oobabooga Webui - Need Assistance
8
Hello, I've been trying to offload transformer layers to my GPU using the llama.cpp Python binding, but it seems like the model isn't being offloaded to the GPU. I've installed the latest version of llama.cpp and followed the instructions on GitHub to enable GPU acceleration, but I'm still facing this issue. Here's a brief description of what I've done:

1. I've installed llama.cpp and the llama-cpp-python package, making sure to compile with CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1.
2. I've added --n-gpu-layers to the CMD_FLAGS variable in webui.py.
3. I've verified that my GPU environment is correctly set up and that the GPU is properly recognized by my system. The nvidia-smi command shows the expected output, and a simple PyTorch test shows that GPU computation is working correctly.

I have an Nvidia RTX 3060 Ti with 8 GB VRAM, and I am trying to load a 13B model and offload some of it onto the GPU. Right now I have it loaded and working on CPU/RAM. I was able to load the models using GGML directly into RAM, but when I try to offload some of it into VRAM to see if it speeds things up, I'm not seeing any GPU VRAM being used or taken up.

Thanks!!
2023-06-13T03:39:15
https://www.reddit.com/r/LocalLLaMA/comments/1485ir1/llamacpp_gpu_offloading_not_working_for_me_with/
medtech04
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1485ir1
false
null
t3_1485ir1
/r/LocalLLaMA/comments/1485ir1/llamacpp_gpu_offloading_not_working_for_me_with/
false
false
self
8
null
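Loading the same model directly through llama-cpp-python with verbose logging shows whether layers are actually offloaded; a quick sanity check related to the post above, with the path and layer count as placeholders. A wheel built without cuBLAS will silently run everything on the CPU:

```python
from llama_cpp import Llama

# With a cuBLAS-enabled build, the startup log should report "BLAS = 1" and
# "offloaded N/M layers to GPU". If it doesn't, reinstall the wheel with:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python
llm = Llama(
    model_path="models/wizardlm-13b.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=24,   # start low on an 8 GB card and watch VRAM in nvidia-smi
    verbose=True,
)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```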
Oobabooga Webui : Connection Reset?
2
[deleted]
2023-06-13T04:49:55
[deleted]
1970-01-01T00:00:00
0
{}
1486rq2
false
null
t3_1486rq2
/r/LocalLLaMA/comments/1486rq2/oobabooga_webui_connection_reset/
false
false
default
2
null
How to reduce Latency when using Falcon 7B?
4
I want to create a chatbot on my personal documents. When I use Falcon 7B with the RetrievalQA chain, the output takes at least 15 minutes, while it takes around 30-45 seconds with LLMChain. How do I speed this up? Also, are there any other alternatives that are very fast (open source)? PS: I'm using Google Colab Free for this.
2023-06-13T05:46:21
https://www.reddit.com/r/LocalLLaMA/comments/1487pl7/how_to_reduce_latency_when_using_falcon_7b/
Alive_Effective9516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1487pl7
false
null
t3_1487pl7
/r/LocalLLaMA/comments/1487pl7/how_to_reduce_latency_when_using_falcon_7b/
false
false
self
4
null
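Part of the gap described above is usually that RetrievalQA stuffs several retrieved chunks into the prompt, so the model processes far more tokens than with a bare LLMChain. A small, hedged sketch of capping that in LangChain; `llm` and `vectorstore` are assumed to already exist (however Falcon-7B and the index were built) and are placeholders here:

```python
from langchain.chains import RetrievalQA

# Fewer retrieved chunks -> shorter prompt -> much less time spent in the model.
qa = RetrievalQA.from_chain_type(
    llm=llm,                    # placeholder: your existing Falcon-7B LLM wrapper
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),  # placeholder store
)
print(qa.run("What does the document say about X?"))
```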
The Artificial Intelligence Act
15
2023-06-13T06:46:08
https://artificialintelligenceact.eu/
fallingdowndizzyvr
artificialintelligenceact.eu
1970-01-01T00:00:00
0
{}
1488oqj
false
null
t3_1488oqj
/r/LocalLLaMA/comments/1488oqj/the_artificial_intelligence_act/
false
false
https://b.thumbs.redditm…CKIplvlGyUak.jpg
15
{'enabled': False, 'images': [{'id': 'A83_FVtL_M8DC_8TOXZ0adGfrLJusi1N5EjkBgIRMb8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=108&crop=smart&auto=webp&s=624e2983ca7fbffec152c53c4ea84d41ef0890e9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=216&crop=smart&auto=webp&s=a8dce1becececb96f7d05933498b8cfc30e7474e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=320&crop=smart&auto=webp&s=cd17d8eb07cbe1f8aac9bb4eac70eeae9b898dfe', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?width=640&crop=smart&auto=webp&s=9f995169eaa92868b595de83bcfff7509286dfa5', 'width': 640}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/XrnsTXBAzXjU3J_xRg28Cakfohdga0BLYWD0MOzo9OI.jpg?auto=webp&s=a8b5c1be0d32aa6dde7a4634d3b056cfd640e66c', 'width': 700}, 'variants': {}}]}
How to compile models for MlC-LLM
18
The brilliant folks at MLC-LLM posted a tutorial on adding models to their client for running LLMs. I found it while scouring their social media. If you don't know, MLC-LLM is a client for running LLMs, like llama.cpp, but on any device and at speed. It works on Android, Apple, Nvidia, and **AMD** GPUs. They look like they are preparing to create a fair number of tutorials on their project, but this one stood out, since the inability to add models to the client is why it hasn't gotten as much attention as other methods of inference. [GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.](https://github.com/mlc-ai/mlc-llm) [How to Compile Models — mlc-llm 0.1.0 documentation](https://mlc.ai/mlc-llm/docs/tutorials/compile-models.html)
2023-06-13T07:23:37
https://www.reddit.com/r/LocalLLaMA/comments/1489ami/how_to_compile_models_for_mlcllm/
jetro30087
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1489ami
false
null
t3_1489ami
/r/LocalLLaMA/comments/1489ami/how_to_compile_models_for_mlcllm/
false
false
self
18
{'enabled': False, 'images': [{'id': 'v5fQwUsLUQibT4Hl66p9ydJOiRfoOIf-hrnthRvkmmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=108&crop=smart&auto=webp&s=3f3d89ea66851d9a189f4745e4944103738df827', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=216&crop=smart&auto=webp&s=f5fbd1846487914e0705ccfcc2c9c330884c9071', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=320&crop=smart&auto=webp&s=a33d130736bced65969c9e80515d43deb23cd7ff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=640&crop=smart&auto=webp&s=50ce3d0b0131ff6371e1f912e245d82950e31a3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=960&crop=smart&auto=webp&s=2c9cb5fdadb9746ed089adb3a143c64e4e3a055e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?width=1080&crop=smart&auto=webp&s=864151b4d3982a31f6c0aab309d713aa62a94abd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9t54Cy0M-qlwCi9P9gMhhITkX6RVPKWsWN3lOLaSjH8.jpg?auto=webp&s=00ab7b16a6258eed941903a04c51a14f9af2e89d', 'width': 1200}, 'variants': {}}]}
Honkware/Manticore-13b-Landmark · Hugging Face
31
2023-06-13T07:38:39
https://huggingface.co/Honkware/Manticore-13b-Landmark
glowsticklover
huggingface.co
1970-01-01T00:00:00
0
{}
1489iuy
false
null
t3_1489iuy
/r/LocalLLaMA/comments/1489iuy/honkwaremanticore13blandmark_hugging_face/
false
false
https://b.thumbs.redditm…knv1yed_UyuI.jpg
31
{'enabled': False, 'images': [{'id': 'pfcHCM428xuIGlgLdfF-ND8u6vKuCegaJQjrX38aAV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=108&crop=smart&auto=webp&s=b5c4c42225aa114a8022cdc3658c96eda104a631', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=216&crop=smart&auto=webp&s=68cd9627f3659463f4112635a7c59df6bfb34f80', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=320&crop=smart&auto=webp&s=71aba67e90923e95880f6ffe365c7db80f3b0f27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=640&crop=smart&auto=webp&s=2f9232d1ef45097476bdd37b697e49dc80edf644', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=960&crop=smart&auto=webp&s=17c316ccffc3573a248f2f9d39d498134de16b8e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?width=1080&crop=smart&auto=webp&s=a651059917f4c0682dc9f64e6dba4c5aad56dd61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aPrNwuepj83wrqjfrVGWYsQ7iRD5ICmEJTfUpJ-mrlM.jpg?auto=webp&s=1980facb1e1a69e324df535b85e46975d768eca5', 'width': 1200}, 'variants': {}}]}
Hot take 🔥: Lots of buzz these days about new foundation open-source models but what if I told you there have been no real advance since 2019's T5 models 😀 - Yi Tay, ex GoogleBrain senior research scientist
55
2023-06-13T08:01:14
https://twitter.com/YiTayML/status/1668302949276356609
saintshing
twitter.com
1970-01-01T00:00:00
0
{}
1489v57
false
{'oembed': {'author_name': 'Yi Tay', 'author_url': 'https://twitter.com/YiTayML', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Hot take 🔥: Lots of buzz these days about new foundation open-source models but what if I told you there have been no real advance since 2019&#39;s T5 models 😀<br><br>Take a look at this table from this new InstructEval paper: <a href="https://t.co/2lKNFCX7Ke">https://t.co/2lKNFCX7Ke</a>. Some thoughts/observations:<br><br>1.… <a href="https://t.co/qwZELqWkaK">pic.twitter.com/qwZELqWkaK</a></p>&mdash; Yi Tay (@YiTayML) <a href="https://twitter.com/YiTayML/status/1668302949276356609?ref_src=twsrc%5Etfw">June 12, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/YiTayML/status/1668302949276356609', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1489v57
/r/LocalLLaMA/comments/1489v57/hot_take_lots_of_buzz_these_days_about_new/
false
false
https://b.thumbs.redditm…SWrBrCvNdPXE.jpg
55
{'enabled': False, 'images': [{'id': '_JpfQjkvlhmKt-sCIwWth7_NTjjcXLVN7moNjzP77s8', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/Q2IFmd-zFy6nTUs9kTN-OgoOODxUTDYZuyfdbqH-x9c.jpg?width=108&crop=smart&auto=webp&s=4a94d704cfb55adc89bfa6e1769b1c70cadf763a', 'width': 108}], 'source': {'height': 75, 'url': 'https://external-preview.redd.it/Q2IFmd-zFy6nTUs9kTN-OgoOODxUTDYZuyfdbqH-x9c.jpg?auto=webp&s=9615717b08258d2a1d5221d1028977ae0b6601bd', 'width': 140}, 'variants': {}}]}
Gradient Ascent Post-training Enhances Language Model Generalization
48
Paper: [\[2306.07052\] Gradient Ascent Post-training Enhances Language Model Generalization (arxiv.org)](https://arxiv.org/abs/2306.07052) Abstract: >In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances its zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2-3x times larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning. This seems like an interesting method to try on open source models. Any chance we could achieve some similar results with LORAs or QLORAs? What do you think?
2023-06-13T08:38:21
https://www.reddit.com/r/LocalLLaMA/comments/148afhx/gradient_ascent_posttraining_enhances_language/
ptxtra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148afhx
false
null
t3_148afhx
/r/LocalLLaMA/comments/148afhx/gradient_ascent_posttraining_enhances_language/
false
false
self
48
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
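The core of the GAP method described above is just a few optimizer steps in the direction that *increases* the LM loss on unlabeled text. A toy sketch of that ascent step with transformers; facebook/opt-350m is only an example of the 350M size mentioned in the abstract, and the paper's exact recipe (corpus, step count, learning rate) will differ:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-350m"  # example 350M model, not necessarily the paper's checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
opt = torch.optim.SGD(model.parameters(), lr=1e-5)

batch = tok("some random, unlabeled text drawn from an out-of-distribution corpus",
            return_tensors="pt")
for _ in range(3):  # "just a few steps"
    loss = model(**batch, labels=batch["input_ids"]).loss
    (-loss).backward()   # gradient *ascent*: step uphill on the LM loss
    opt.step()
    opt.zero_grad()
```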
Error During Alpaca Build
1
[removed]
2023-06-13T08:51:52
https://www.reddit.com/r/LocalLLaMA/comments/148amb1/error_during_alpaca_build/
Strong-Employ6841
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148amb1
false
null
t3_148amb1
/r/LocalLLaMA/comments/148amb1/error_during_alpaca_build/
false
false
default
1
null
FinGPT: Open-Source Financial Large Language Models
25
2023-06-13T09:04:31
https://arxiv.org/abs/2306.06031
Balance-
arxiv.org
1970-01-01T00:00:00
0
{}
148atct
false
null
t3_148atct
/r/LocalLLaMA/comments/148atct/fingpt_opensource_financial_large_language_models/
false
false
https://b.thumbs.redditm…9GuZs_is3XkI.jpg
25
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Falcon 40B
1
[removed]
2023-06-13T09:19:44
https://www.reddit.com/r/LocalLLaMA/comments/148b1z7/falcon_40b/
Toaster496
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148b1z7
false
null
t3_148b1z7
/r/LocalLLaMA/comments/148b1z7/falcon_40b/
false
false
default
1
null
Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced
388
Paper: [https://arxiv.org/abs/2306.07174](https://arxiv.org/abs/2306.07174) Code: [https://aka.ms/LongMem](https://aka.ms/LongMem) Excerpts: >In this paper, we propose a framework for Language Models Augmented with Long-Term Memory, (LongMem), which enables language models to cache long-form previous context or knowledge into the non-differentiable memory bank, and further take advantage of them via a decoupled memory module to address the memory staleness problem. > >For LongMem, there are three key components: the frozen backbone LLM, SideNet, and Cache Memory Bank. To tap into the learned knowledge of the pretrained LLM, both previous and current inputs are encoded using the frozen backbone LLM but different representations are extracted. For previous inputs, the key-value pairs from the Transformer self-attention at m-th layer are stored in Cache Memory Bank, whereas the hidden states from each LLM decoder layer for the current inputs are retained and transferred to SideNet. The SideNet module can be viewed as an efficient adaption model that is trained to fuse the current input context and relevant cached previous contexts in the decoupled memory > >The long-term memory capability of LongMem is achieved via a memory-augmentation module for retrieval and fusion. Instead of performing token-to-token retrieval, we focus on token-to-chunk retrieval for acceleration and integrity. The memory bank stores cached key-value pairs at the level of token chunks. The proposed LongMem model significantly outperform all considered baselines on long-text language modeling datasets. Surprisingly, the proposed method achieves the state-of-the-art performance of 40.5% accuracy on ChapterBreakAO3 suffix identification benchmark and outperforms both the strong long-context transformers and latest LLM GPT-3 with 313x larger parameters. > >With the proposed unlimited-length memory augmentation, our LongMem method can overcome the limitation of the number of demonstration examples in the local context and even attend on the whole training set by loading it into the cached memory. When the model is required to comprehend long sequences, the proposed method LongMem can load the out-of-boundary inputs into the cached memory as previous context. Thus, the memory usage and inference speed can be significantly improved compared with vanilla self-attention-based models.
2023-06-13T10:47:52
https://www.reddit.com/r/LocalLLaMA/comments/148ch6z/microsoft_research_proposes_new_framework_longmem/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148ch6z
false
null
t3_148ch6z
/r/LocalLLaMA/comments/148ch6z/microsoft_research_proposes_new_framework_longmem/
false
false
self
388
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
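The token-to-chunk retrieval described above can be illustrated with a small toy: cached key/value pairs from the frozen backbone are pooled into chunk-level keys, and the current token's query picks the top chunks, whose full keys and values are then handed to the fused attention. A toy sketch only; dimensions and data are made up, and this is not the paper's code:

```python
import torch

d, chunk = 64, 4
mem_k = torch.randn(128, d)          # cached keys for 128 past tokens (from layer m)
mem_v = torch.randn(128, d)          # cached values for the same tokens
chunk_keys = mem_k.view(-1, chunk, d).mean(dim=1)   # one pooled key per chunk

q = torch.randn(d)                                   # current-token query
top = torch.topk(chunk_keys @ q, k=3).indices        # 3 most relevant chunks
idx = (top[:, None] * chunk + torch.arange(chunk)).flatten()
k_sel, v_sel = mem_k[idx], mem_v[idx]                # fed to the memory-fusion attention
print(k_sel.shape, v_sel.shape)                      # (12, 64) each: 3 chunks x 4 tokens
```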
A local model for summarizing articles
3
[deleted]
2023-06-13T11:17:51
[deleted]
1970-01-01T00:00:00
0
{}
148czv1
false
null
t3_148czv1
/r/LocalLLaMA/comments/148czv1/a_local_model_for_summarizing_articles/
false
false
default
3
null
A little form to help me understand LLM's place in programming
1
[removed]
2023-06-13T11:25:00
https://www.reddit.com/r/LocalLLaMA/comments/148d4b5/a_little_form_to_help_me_understand_llms_place_in/
ouils
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148d4b5
false
null
t3_148d4b5
/r/LocalLLaMA/comments/148d4b5/a_little_form_to_help_me_understand_llms_place_in/
false
false
default
1
null
British/American spelling of one word changes the output on the same seed.
0
LLaMA seems to deal well with alternate spellings and typos, but I wondered if using the British or American spellings of a word would change the output. It turns out it does. Using the Openblas version of llama.cpp (cublas doesn't give deterministic results still, afaik), I chose the seed 256 and used the same settings each time with the prompt: >write a story about a pigeon having a day out in the city centre tulu-30b.ggmlv3.q5_K_M.bin wrote a nice little kid's story and understood centre was the same as center, and used the word center in the story. Then I changed the prompt to: >write a story about a pigeon having a day out in the city center Totally different story although obviously in the same vein.
2023-06-13T11:42:21
https://www.reddit.com/r/LocalLLaMA/comments/148df70/britishamerican_spelling_of_one_word_changes_the/
ambient_temp_xeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148df70
false
null
t3_148df70
/r/LocalLLaMA/comments/148df70/britishamerican_spelling_of_one_word_changes_the/
false
false
self
0
null
FabLab Assistant
0
Briefly, here's one of my use cases. I own a fairly decent commercial-grade PC/workstation (I will update with specs if there's interest). I also own a homemade 3-axis CNC milling machine, a blue laser engraver/cutter, a miter saw, and assorted power tools. I have experience running servos, steppers, microcontrollers, general digital electronics, PCB fabrication, microfabrication, fluidics, pneumatics, instrument control, and so on. I've got sufficient coding skills, decent engineering simulation skills, and hardware fab skills down to at least the micro scale. I want to make my tools work for me, though, not the other way round.

Objective: bootstrap automated/augmented fabrication lab capabilities with LLMs' assistance.

Since the objective looks generic, here is my specific application: I'd like to build a powerful homemade nuclear magnetic resonance (NMR) spectrometer, almost from scratch. I have lots of experience with industrial-grade NMR spectrometers, both hardware and software, and the theory and practice behind it all. I have plenty of ideas I want to explore on my own, for my own entertainment. What I do not have enough of is time and money (I know, shocking). Also, good power tools are expensive, but I'm considering investing some more. I have some good spare parts I can build stuff with, too, like a 3D printer, a small CNC lathe, or a 4th axis for the mill, etc.

An AI assistant that could take weight off my shoulders in any conceivable way would be a huge return on investment for my hobby project. Any suggestion or comment that vaguely aligns with my ramblings would be heartily appreciated: datasets to consider, finetuning strategies, pretrained models, anything goes. I'm just getting started in this space and I'm shooting in the dark, so to speak.
2023-06-13T11:49:13
https://www.reddit.com/r/LocalLLaMA/comments/148djph/fablab_assistant/
Lolleka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148djph
false
null
t3_148djph
/r/LocalLLaMA/comments/148djph/fablab_assistant/
false
false
self
0
null
How to make your own local version of chatbase?
4
Hi all, I was wondering if people here know the technology/software stack to make a personal version of something like [chatbase.io](https://chatbase.io), or any of these other personal chatbots, on your own data/documents. I'm very new to these technologies and want to tinker around to better understand limitations, etc. Bonus: if somebody here has done this, what has your experience been like? What challenges have you run into? How large a model do you need to get reasonable responses? Are any of the free LLMs suitable, or is it really only usable after the human-dialogue tailoring that went into ChatGPT? Thanks!
2023-06-13T11:50:29
https://www.reddit.com/r/LocalLLaMA/comments/148dkl8/how_to_make_your_own_local_version_of_chatbase/
EnPaceRequiescat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148dkl8
false
null
t3_148dkl8
/r/LocalLLaMA/comments/148dkl8/how_to_make_your_own_local_version_of_chatbase/
false
false
self
4
null
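The usual stack for the question above is embeddings, a vector store, and a retrieval chain over a local model. A minimal, hedged sketch with LangChain; the document, chunk sizes, embedding model, and local model path are all placeholders:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

docs = TextLoader("my_notes.txt").load()                      # placeholder document
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks locally and index them.
store = Chroma.from_documents(
    chunks, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"))

# Any local GGML model works here; quality scales with model size.
llm = LlamaCpp(model_path="models/13b.ggmlv3.q4_0.bin", n_ctx=2048)  # placeholder path

qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.run("What do my notes say about X?"))
```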
A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
21
2023-06-13T12:43:49
https://github.com/tleers/llm-api-starterkit
timleers
github.com
1970-01-01T00:00:00
0
{}
148eldn
false
null
t3_148eldn
/r/LocalLLaMA/comments/148eldn/a_minimal_design_pattern_for_llmpowered/
false
false
https://b.thumbs.redditm…bRrNxMACzevw.jpg
21
{'enabled': False, 'images': [{'id': 'ZEU-RYtO_z2hDyy82PIoDqvx-r-84Sor43rhUNTTrN0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=108&crop=smart&auto=webp&s=8a3db85c32ca9bb3dff82809f883e6675716226f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=216&crop=smart&auto=webp&s=d207be995280b6ac3c682e069f0957b87e7f9b34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=320&crop=smart&auto=webp&s=bad4f83ddcf3caadc8ed087312755eb9481af297', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=640&crop=smart&auto=webp&s=daa3374b2114858c91a3cfd4cfc2da2727c12462', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=960&crop=smart&auto=webp&s=ef71213803f9e10f5fa348ad06de4680db2446a1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?width=1080&crop=smart&auto=webp&s=5f43c8edb54904ba4357ef03bb85013b8770bfc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vkkWaNL6jqeuNey_OSG9_rjgOUNzYSXeLDzlMJMhkSI.jpg?auto=webp&s=1aafa5bae7e9bc1def16d095dca958781e1a027f', 'width': 1200}, 'variants': {}}]}
Has anybody trained a model for Open Information Extraction?
2
I have the idea to train a model to do OpenIE for Dutch. I have found that the latest OpenAI models are quite good at generating triples from unstructured text. I will let a paid model generate triples for me and use those as training data. Has anyone tried something similar? I have an RTX 3090, so I guess I am fine with training a LoRA model. I have trained FLAN-T5-base before on Wikipedia triples (the REBEL dataset) and the results were lukewarm.
2023-06-13T13:41:39
https://www.reddit.com/r/LocalLLaMA/comments/148fref/has_anybody_trained_a_model_for_open_information/
kcambrek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148fref
false
null
t3_148fref
/r/LocalLLaMA/comments/148fref/has_anybody_trained_a_model_for_open_information/
false
false
self
2
null
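For the LoRA part of the post above on a single RTX 3090, the standard recipe is PEFT adapters on top of a quantized base model. A rough configuration sketch; the base model name and target modules are placeholders that depend on which architecture is chosen, and the actual training loop (with the generated Dutch triples) is omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_7b"          # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # attention projections for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # only the adapter weights are trained
# ...then fine-tune on the generated triples with the usual transformers Trainer.
```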
Question Answering benchmark
1
[removed]
2023-06-13T13:56:10
https://www.reddit.com/r/LocalLLaMA/comments/148g1j9/question_answering_benchmark/
nofreewill42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148g1j9
false
null
t3_148g1j9
/r/LocalLLaMA/comments/148g1j9/question_answering_benchmark/
false
false
default
1
null
Utilize my current hardware or upgrade?
1
I've got a 3070 with 8gb VRAM and 32gb RAM. I want to be able to run 13b or even 30b models. With the new ways to load models and offload to GPU, should I look at just upgrading my ram to 80GB or upgrade GPU to 4070 12gb? I don't have the budget to get a 4090.
2023-06-13T14:13:17
https://www.reddit.com/r/LocalLLaMA/comments/148geaj/utilize_my_current_hardware_or_upgrade/
reiniken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148geaj
false
null
t3_148geaj
/r/LocalLLaMA/comments/148geaj/utilize_my_current_hardware_or_upgrade/
false
false
self
1
null
Bitsandbytes for windows
0
[removed]
2023-06-13T14:23:01
[deleted]
2023-06-14T03:41:27
0
{}
148gluf
false
null
t3_148gluf
/r/LocalLLaMA/comments/148gluf/bitsandbytes_for_windows/
false
false
default
0
null
KoboldCPP Updated to Support K-Quants, new bonus CUDA build.
85
[deleted]
2023-06-13T15:20:35
[deleted]
1970-01-01T00:00:00
0
{}
148hul5
false
null
t3_148hul5
/r/LocalLLaMA/comments/148hul5/koboldcpp_updated_to_support_kquants_new_bonus/
false
false
default
85
null
"Emergent capabilities" and OpenAI models
1
I haven't been following the research for long, but I'm confused by researchers pointing to OpenAI models when talking about "emergent capabilities." Besides RLHF, those models are fine-tuned on task-specific instruction/response data OpenAI hasn't published information about, including "massive amounts of" synthetic data and now incorporating data from user interactions... so how can researchers make inferences about emergent capabilities in larger models when counting those models among the examples? For example, if GPT-4 immediately shows step-by-step thinking when asked to do a math problem, I would say that's clearly not a behavior that emerged due to parameter count, but rather carefully curated instruction/response examples, with the specific aim of improving performance on math problems by "thinking step by step"
2023-06-13T15:27:14
https://www.reddit.com/r/LocalLLaMA/comments/148hzx4/emergent_capabilities_and_openai_models/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148hzx4
false
null
t3_148hzx4
/r/LocalLLaMA/comments/148hzx4/emergent_capabilities_and_openai_models/
false
false
self
1
null
Help: q2 k quant does not offer any speed advancement over q4 ggml llama cpp
4
Hi y'all. Has anyone tried the k-quant models on llama-cpp-python? I tried the q2_K model (the smallest size I could find) and it actually took more time to run the same prompt than the q4_0 model. I don't know if this is normal; let me know if y'all have experienced something different.
2023-06-13T15:48:42
https://www.reddit.com/r/LocalLLaMA/comments/148igxf/help_q2_k_quant_does_not_offer_any_speed/
Cheap-Routine4736
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148igxf
false
null
t3_148igxf
/r/LocalLLaMA/comments/148igxf/help_q2_k_quant_does_not_offer_any_speed/
false
false
self
4
null
Looking for help getting llama.cpp working on runpod
1
[deleted]
2023-06-13T16:26:46
[deleted]
1970-01-01T00:00:00
0
{}
148jc78
false
null
t3_148jc78
/r/LocalLLaMA/comments/148jc78/looking_for_help_getting_llamacpp_working_on/
false
false
default
1
null
Raw result of Wonder Studio beta
1
[removed]
2023-06-13T16:33:14
[deleted]
1970-01-01T00:00:00
0
{}
148jhle
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/akimr7opbt5b1/DASHPlaylist.mpd?a=1695643881%2CYjgzNzAwM2UxMmUxOGEwODM5ZThkNTdmMDI4YTc5YTRiYThmNGNkZDU3NmI2ZmE5YmJjMTUzYTYwZTA5YmJmNw%3D%3D&v=1&f=sd', 'duration': 3, 'fallback_url': 'https://v.redd.it/akimr7opbt5b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/akimr7opbt5b1/HLSPlaylist.m3u8?a=1695643881%2CYzIxNzU2ZWQ5OTE5YTlkMWYxMTlmNGY5YzkwOThjYjJhOTAxODdjYTBmMGE4NWRhN2U2MTM2YzRkYWU3ODJkZQ%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/akimr7opbt5b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_148jhle
/r/LocalLLaMA/comments/148jhle/raw_result_of_wonder_studio_beta/
false
false
default
1
null
What can I do to get AMD GPU support CUDA-style?
23
Guys, I have a 6800 XT and I believe I can squeeze more juice from it. On AMD we have this: [https://learn.microsoft.com/en-us/windows/ai/directml/gpu-pytorch-windows](https://learn.microsoft.com/en-us/windows/ai/directml/gpu-pytorch-windows). Is there any way to leverage it? If there isn't, is there anything close to it? I am a decent coder and I can code it; I just need some general guidelines and some advice. I think it would be smart to go towards a one-code-for-all approach for Windows, like games have with DirectX, so we no longer depend on a single behemoth (NVIDIA).
2023-06-13T17:04:06
https://www.reddit.com/r/LocalLLaMA/comments/148k700/what_can_i_do_to_get_amd_gpu_support_cudastyle/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148k700
false
null
t3_148k700
/r/LocalLLaMA/comments/148k700/what_can_i_do_to_get_amd_gpu_support_cudastyle/
false
false
self
23
{'enabled': False, 'images': [{'id': 'LVmzWMJU1UZwRubzQYJZSar-z-Rq8ntUH65yhQyfxB8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=108&crop=smart&auto=webp&s=0b9526a51504048891d5e64783519fd5dc3cd83f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=216&crop=smart&auto=webp&s=0150bc3ab1c6838c35ff951d69578f3d19ae4ed3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?width=320&crop=smart&auto=webp&s=c1e830770227ae4802c5776d22f63d0f6aa71b15', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/YH8gNap4KoGcFgyFYrNZ86fXmYfRn6pa7uuwIlZkjEE.jpg?auto=webp&s=e62264227377a9581e2e2946169864d130fa3217', 'width': 400}, 'variants': {}}]}
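The DirectML path in the linked Microsoft doc is exposed through the torch-directml package; a tiny check that tensors actually land on the 6800 XT, assuming `pip install torch-directml` on Windows as the doc describes:

```python
import torch
import torch_directml

dml = torch_directml.device()          # the DirectML device (e.g. the 6800 XT)
x = torch.randn(1024, 1024).to(dml)
y = torch.randn(1024, 1024).to(dml)
print((x @ y).sum().item())            # the matmul runs on the AMD GPU via DirectML
```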
local GPT agent projects?
0
[removed]
2023-06-13T17:19:34
https://www.reddit.com/r/LocalLLaMA/comments/148kjts/local_gpt_agent_projects/
PossessionOk6481
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148kjts
false
null
t3_148kjts
/r/LocalLLaMA/comments/148kjts/local_gpt_agent_projects/
false
false
default
0
null
How to generate longer stories?
8
Is there any trick to generate longer stories, with characters and actions happening? I've been using several models such as llama and wizardLM. My prompt is something like "Write a story about a man who falls in love with a coconut", but the stories generated don't even pretend to flow, it's just a few sentences of "There was a man named X. He met a coconut named Y. They fell in love. The moral of the story was that you can fall in love with a coconut" and that's it.
2023-06-13T17:30:06
https://www.reddit.com/r/LocalLLaMA/comments/148ksbt/how_to_generate_longer_stories/
skocznymroczny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148ksbt
false
null
t3_148ksbt
/r/LocalLLaMA/comments/148ksbt/how_to_generate_longer_stories/
false
false
self
8
null
The gap between open source LLMs and OpenAI continues to widen, GPT3.5 now supports 16K context
1
2023-06-13T17:40:08
https://openai.com/blog/function-calling-and-other-api-updates
EcstaticVenom
openai.com
1970-01-01T00:00:00
0
{}
148l0mw
false
null
t3_148l0mw
/r/LocalLLaMA/comments/148l0mw/the_gap_between_open_source_llms_and_openai/
false
false
default
1
null
What ggml models can I run on 16GB RAM, 8GB VRAM GPU0 and 4GB VRAM GPU1?
2
[removed]
2023-06-13T17:51:49
https://www.reddit.com/r/LocalLLaMA/comments/148laeo/what_ggml_models_can_i_run_on_16gb_ram_8gb_vram/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148laeo
false
null
t3_148laeo
/r/LocalLLaMA/comments/148laeo/what_ggml_models_can_i_run_on_16gb_ram_8gb_vram/
false
false
default
2
null
Are you seeing emojis from conversational models in ooba?
1
[deleted]
2023-06-13T18:07:05
[deleted]
1970-01-01T00:00:00
0
{}
148ln9p
false
null
t3_148ln9p
/r/LocalLLaMA/comments/148ln9p/are_you_seeing_emojis_from_conversational_models/
false
false
default
1
null
How to use gpu for GGML model , n-gpu-layers isn't working
2
[removed]
2023-06-13T18:22:16
https://www.reddit.com/r/LocalLLaMA/comments/148lzil/how_to_use_gpu_for_ggml_model_ngpulayers_isnt/
Equal-Pilot-9592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148lzil
false
null
t3_148lzil
/r/LocalLLaMA/comments/148lzil/how_to_use_gpu_for_ggml_model_ngpulayers_isnt/
false
false
default
2
null
what llama model would be good for instruct purposes
0
Are there any good models that are about as capable for instruct purposes as text-davinci-003?
2023-06-13T18:45:13
https://www.reddit.com/r/LocalLLaMA/comments/148mi3l/what_llama_model_would_be_good_for_instruct/
SuccessfulCommand882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148mi3l
false
null
t3_148mi3l
/r/LocalLLaMA/comments/148mi3l/what_llama_model_would_be_good_for_instruct/
false
false
self
0
null
AMD Expands AI/HPC Product Lineup With Flagship GPU-only Instinct Mi300X with 192GB Memory
46
2023-06-13T19:01:46
https://www.anandtech.com/show/18915/amd-expands-mi300-family-with-mi300x-gpu-only-192gb-memory
Balance-
anandtech.com
1970-01-01T00:00:00
0
{}
148mvnx
false
null
t3_148mvnx
/r/LocalLLaMA/comments/148mvnx/amd_expands_aihpc_product_lineup_with_flagship/
false
false
https://b.thumbs.redditm…Dgi8B-7darVI.jpg
46
{'enabled': False, 'images': [{'id': 'R96zuZO45Qp3NE1yABoAUYRTw4YE4s4fJ1ipwGDmHgA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=108&crop=smart&auto=webp&s=a24a6ec07470476a238e61e5cfe85d492faf4e00', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=216&crop=smart&auto=webp&s=22708b4d0ec466e1c5ff536a7b0118d247ef96da', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=320&crop=smart&auto=webp&s=4698c584fa3b44d143233274ea0f75df3c10a15d', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?width=640&crop=smart&auto=webp&s=a5233028c360549209d28e123d67b3382dfb71f9', 'width': 640}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/iZo2BGdzs7lzXK-IbQmUQuwq0IDLwTrnnQu7rC6fSpg.jpg?auto=webp&s=fd0871ef2076aba5911cd308e3d55edc7d134247', 'width': 678}, 'variants': {}}]}
Toolkit for using local models, through langchain &c, including usable demos
13
Hi all, I'll be honest that what inspired this project was my observation that there is a metric ton of LLM-related client Python code out there, and a lot of it is clearly by folks new to programming, or at least to Python. As an old Python head, I thought I'd lean on my experience to put together some tools and demos for the many, cool alt-GPT type project ideas out there, and I'm already reusing some of these components in my own consulting work. [https://github.com/uogbuji/OgbujiPT/](https://github.com/uogbuji/OgbujiPT/) A good place to start is the demos, all using Langchain, though shallowly, I admit, including * Simple bot for you know which popular chat app 😉 * Simple streamlit chat-your-PDF * Blocking API async workaround [https://github.com/uogbuji/OgbujiPT/tree/main/demo](https://github.com/uogbuji/OgbujiPT/tree/main/demo)
2023-06-13T20:01:57
https://www.reddit.com/r/LocalLLaMA/comments/148o7w3/toolkit_for_using_local_models_through_langchain/
CodeGriot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148o7w3
false
null
t3_148o7w3
/r/LocalLLaMA/comments/148o7w3/toolkit_for_using_local_models_through_langchain/
false
false
self
13
{'enabled': False, 'images': [{'id': 'IeXm3QcmbN8FE8x8OSrJwNGkIEXBmO3Ite07eRod0rc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=108&crop=smart&auto=webp&s=f7205367d3b6743657d01059a5ff26c552e3f7a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=216&crop=smart&auto=webp&s=271fddb97bbae295053990fa547b1c1e8eb2ab4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=320&crop=smart&auto=webp&s=1984b62108435fc72adb857326db7b8ebfff38f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=640&crop=smart&auto=webp&s=7d713bacb4c47d03e8879dfa867ebded8e51b0a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=960&crop=smart&auto=webp&s=5a7410523947e44a115fbd15f4731d51b97aca51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?width=1080&crop=smart&auto=webp&s=f12341e21b19535d9e9e721c017ea598c1c8f4dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gj2jL2EVAt7cVQ-VtRCOOupclneRYJxU86GfxfMRKaQ.jpg?auto=webp&s=b9f4fc9a5989ad7e092cb71e4466fea4a356db8b', 'width': 1200}, 'variants': {}}]}
Shouldn’t A.I become less resource intensive as time goes on?
1
[removed]
2023-06-13T20:26:11
[deleted]
1970-01-01T00:00:00
0
{}
148oqyy
false
null
t3_148oqyy
/r/LocalLLaMA/comments/148oqyy/shouldnt_ai_become_less_resource_intensive_as/
false
false
default
1
null
Is it possible to train a LoRA for a 4-bit model using the same amount of VRAM as inference?
7
And if so, is it possible to merge the LoRA with a GPTQ while using the same amount of VRAM as inference? For example, if you had a 13B 4-bit llama model that uses ~10 gb VRAM for inference, is there any way to train a LoRA for the 4-bit model that also only uses ~10 gb of VRAM? Is it possible to merge the LoRA with the 4-bit model, and if so is it possible to do it using only ~10 gb VRAM? If it isn't possible now, is it technically possible and probably will be possible soon? Or is there a fundamental limit preventing it?
2023-06-13T20:32:07
https://www.reddit.com/r/LocalLLaMA/comments/148ovsg/is_it_possible_to_train_a_lora_for_a_4bit_model/
SoylentMithril
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148ovsg
false
null
t3_148ovsg
/r/LocalLLaMA/comments/148ovsg/is_it_possible_to_train_a_lora_for_a_4bit_model/
false
false
self
7
null
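The closest thing currently available to what the post above asks for is QLoRA: the base model is loaded in 4-bit and only small adapter weights are trained, so training VRAM stays close to inference VRAM (plus optimizer state for the adapters). A hedged sketch of the 4-bit setup with transformers and peft; the model name is a placeholder, and note this uses bitsandbytes NF4 rather than GPTQ, so merging back into a GPTQ file is a separate step not covered here:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_13b",      # placeholder 13B base model
    quantization_config=bnb, device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))
model.print_trainable_parameters()          # adapters only; base stays frozen in 4-bit
```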
I wrote a tokenizer for LLaMA that runs inside the browser
15
2023-06-13T20:32:59
https://github.com/belladoreai/llama-tokenizer-js
belladorexxx
github.com
1970-01-01T00:00:00
0
{}
148owhe
false
null
t3_148owhe
/r/LocalLLaMA/comments/148owhe/i_wrote_a_tokenizer_for_llama_that_runs_inside/
false
false
https://b.thumbs.redditm…ZvzTkPI5Xv9o.jpg
15
{'enabled': False, 'images': [{'id': 't5BYImubexnZbSs3UMfYWIEQSAIwcB_4G44jxoPka2g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=108&crop=smart&auto=webp&s=df91c49afd9f6de58616898380c72ae6a948f937', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=216&crop=smart&auto=webp&s=01acc1af705b9172a06b059a3e265f55986ab948', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=320&crop=smart&auto=webp&s=033a1bc04e7af056e872516f37d64e10ec61f82c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=640&crop=smart&auto=webp&s=aab47dcb08ce43c36e470372be27bece1d7701af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=960&crop=smart&auto=webp&s=d8412d139a899f824337c59dcbbaa7521352300a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=1080&crop=smart&auto=webp&s=123b220a268d6fbf3ef80c09fd10e26f1ac12ab3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?auto=webp&s=89c6e0f69fd3e2e908da1599a3d56019cd1a93cc', 'width': 1200}, 'variants': {}}]}
2,512 -H100s, can train LLaMA 65B in 10 days
69
This 10 exaflop beast looks really promising and for open source startups it may be the best chance to get a true open source LLaMA alternative at the 30-65B+ size (hopefully with longer context and more training tokens).
2023-06-13T20:39:00
https://twitter.com/natfriedman/status/1668650915505803266?s=19
jd_3d
twitter.com
1970-01-01T00:00:00
0
{}
148p17i
false
{'oembed': {'author_name': 'Nat Friedman', 'author_url': 'https://twitter.com/natfriedman', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Daniel and I have setup a cluster for startups: <a href="https://t.co/FMmfA7MQm3">https://t.co/FMmfA7MQm3</a> <a href="https://t.co/cx4NkPVdFI">pic.twitter.com/cx4NkPVdFI</a></p>&mdash; Nat Friedman (@natfriedman) <a href="https://twitter.com/natfriedman/status/1668650915505803266?ref_src=twsrc%5Etfw">June 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/natfriedman/status/1668650915505803266', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_148p17i
/r/LocalLLaMA/comments/148p17i/2512_h100s_can_train_llama_65b_in_10_days/
false
false
https://b.thumbs.redditm…xPtIFjr8nR3Q.jpg
69
{'enabled': False, 'images': [{'id': 'epfPZBJ3d-zRlP7wFeHil-A4whqkrS_PPktjBeH4rdg', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/1XGkcBHG4q6mltbaesZdwqq9REY6KIw1YspYSYSI2nw.jpg?width=108&crop=smart&auto=webp&s=9cac0cc8a174bc4c600519506920700af0cdf056', 'width': 108}], 'source': {'height': 74, 'url': 'https://external-preview.redd.it/1XGkcBHG4q6mltbaesZdwqq9REY6KIw1YspYSYSI2nw.jpg?auto=webp&s=9f243bef1a7b8f471120503c78b98c76a9efeffd', 'width': 140}, 'variants': {}}]}
Open AI - Just Killed GPT-3 and GPT-4 Apis
1
[removed]
2023-06-13T20:41:03
https://www.reddit.com/r/LocalLLaMA/comments/148p2u4/open_ai_just_killed_gpt3_and_gpt4_apis/
splitur34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148p2u4
false
null
t3_148p2u4
/r/LocalLLaMA/comments/148p2u4/open_ai_just_killed_gpt3_and_gpt4_apis/
false
false
default
1
null
Landmark Attention Oobabooga Support + GPTQ Quantized Models!
175
Hey everyone! We've managed to get Landmark attention working properly in Oobabooga, and /u/theBloke has quantized the models! (Currently only GPTQ support.) We need more effort put into properly evaluating these models. It is still very early days and we are looking for feedback on their performance and any issues you run into. Please feel free to chat with us! You can find a link in my QLoRA repo: [https://github.com/eugenepentland/landmark-attention-qlora](https://github.com/eugenepentland/landmark-attention-qlora)

Models: [https://huggingface.co/TheBloke/WizardLM-7B-Landmark](https://huggingface.co/TheBloke/WizardLM-7B-Landmark) [https://huggingface.co/TheBloke/Minotaur-13B-Landmark](https://huggingface.co/TheBloke/Minotaur-13B-Landmark)

Notes when using the models:

1. Trust-remote-code must be enabled for the attention model to work correctly.
2. Add bos_token must be disabled in the parameters tab.
3. Truncate the prompt must be increased to allow for a larger context. The slider goes up to a max of 8192, but the models can handle larger contexts as long as you have memory. If you want to go higher, go to text-generation-webui/modules/shared.py and increase truncation_length_max to whatever you want it to be.
4. You may need to set the repetition_penalty when asking questions about a long context to get the correct answer.

Performance notes:

1. Inference in a long context is slow. On the Quadro RTX 8000 I'm testing, it takes about a minute to get an answer for a 10k context. This is being worked on.
2. Remember that the model only performs as well as the base model for complex queries. Sometimes you may not get the answer you are looking for, but it's worth testing whether the base model could answer the question within a 2k context.

Thanks again to the team who worked on the original Landmark paper for making this possible! [https://github.com/epfml/landmark-attention](https://github.com/epfml/landmark-attention) They made an update to the repo, and the code I wrote 4 days ago is now marked legacy, so I'm in the process of updating it again...
2023-06-13T21:12:25
https://www.reddit.com/r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/
NeverEndingToast
self.LocalLLaMA
2023-06-13T21:28:32
0
{}
148prx3
false
null
t3_148prx3
/r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/
false
false
self
175
{'enabled': False, 'images': [{'id': 'xHpvVEcy3S8jRFD78uuxihyCIcFKFRMnZ2PCic0F0p8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=108&crop=smart&auto=webp&s=c68c1bd6748a23358cb8868f701e7bed787caa0c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=216&crop=smart&auto=webp&s=312e61d8103dc3d6026bb26f583a623d3d90c0ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=320&crop=smart&auto=webp&s=b6674706646d3c3d9868734afb5499890b7d9a1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=640&crop=smart&auto=webp&s=1e969c1e8c9b787476c3087f843c94bada078890', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=960&crop=smart&auto=webp&s=09eb54e85b29054134473ef5d890ccb1e0fd7557', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?width=1080&crop=smart&auto=webp&s=9f35e6ee02edff433b3b789cfafc84d477806ce8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-PskMhLcKnzgGi3sPac5xy4zXknpeCDSpOTMd8pr2-E.jpg?auto=webp&s=73c679ed9bc1f6f92c3c9d8ff553f079c4d855dc', 'width': 1200}, 'variants': {}}]}
idea: We need a raspberry pi for LLMs.
1
[removed]
2023-06-13T21:35:19
https://www.reddit.com/r/LocalLLaMA/comments/148qanl/idea_we_need_a_raspberry_pi_for_llms/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148qanl
false
null
t3_148qanl
/r/LocalLLaMA/comments/148qanl/idea_we_need_a_raspberry_pi_for_llms/
false
false
default
1
null
Llama.cpp crashing with lora, but only when using GPU.
3
I was wondering if anyone's run into this problem using LoRAs with llama.cpp. It works fine for me if I don't use the GPU, but if I do use the GPU it crashes.

For example, starting llama.cpp with the following works fine on my computer:

    ./main -m models/ggml-vicuna-7b-f16.bin --lora lora/testlora_ggml-adapter-model.bin

The LoRA loads up with no errors and it demonstrates responses in line with the data I trained the LoRA on. Starting with the same model and the GPU, but no LoRA, also works fine:

    ./main -m models/ggml-vicuna-7b-f16.bin --n-gpu-layers 1

But if I add --n-gpu-layers together with the LoRA, like this:

    ./main -m models/ggml-vicuna-7b-f16.bin --lora lora/testlora_ggml-adapter-model.bin --n-gpu-layers 1

it crashes with the following error:

    llama_apply_lora_from_file_internal: r = 256, alpha = 512, scaling = 2.00
    ...............GGML_ASSERT: ggml-cuda.cu:1643: src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F32 && dst->type == GGML_TYPE_F32
    Aborted (core dumped)

I get the same thing with every other model I try, and I've created a few different LoRAs, both on my own machine and on RunPod, to see if anything changes. But I keep running into the same thing. Any ideas?

edit: I forgot to mention I'm using an Nvidia M40. Nvidia driver version: 530.30.02, CUDA version: 12.1. I just finished totally purging everything related to Nvidia from my system and then installing the drivers and CUDA again, setting the path in bashrc, etc. Aaaaaaand, no luck. llama.cpp still crashes if I use a LoRA and --n-gpu-layers together.

edit2: Someone opened up a new issue report on it. Turns out that this behavior is normal, and a result of llama.cpp running things as f32 on the GPU. It was suggested to just merge the LoRA.
2023-06-13T21:44:40
https://www.reddit.com/r/LocalLLaMA/comments/148qi16/llamacpp_crashing_with_lora_but_only_when_using/
toothpastespiders
self.LocalLLaMA
2023-06-14T19:22:55
0
{}
148qi16
false
null
t3_148qi16
/r/LocalLLaMA/comments/148qi16/llamacpp_crashing_with_lora_but_only_when_using/
false
false
self
3
null
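The workaround suggested at the end of the post above (merging the LoRA into the base model so llama.cpp no longer has to apply it at load time) can be done with the Hugging Face PEFT library before re-converting to GGML. This is a minimal sketch, assuming you still have the HF-format base model and the adapter; all paths below are placeholders, not files from the post.

```python
# Sketch: fold a LoRA adapter into its base model with PEFT so the result can be
# converted to GGML and offloaded with --n-gpu-layers alone (no --lora flag).
# All paths are placeholders, not the files from the post above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/vicuna-7b-hf"        # HF-format base model
adapter_path = "path/to/my-lora-adapter"  # PEFT adapter directory
out_path = "path/to/vicuna-7b-merged"

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()

merged.save_pretrained(out_path)
AutoTokenizer.from_pretrained(base_path).save_pretrained(out_path)
```

The merged folder can then go through llama.cpp's usual convert and quantize steps and be loaded with GPU offload as normal.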
Running LLaMA Inference in Parallel Using Accelerate
3
2023-06-13T21:58:22
https://www.bengubler.com/posts/multi-gpu-inference-with-accelerate
FutureIncrease
bengubler.com
1970-01-01T00:00:00
0
{}
148qstm
false
null
t3_148qstm
/r/LocalLLaMA/comments/148qstm/running_llama_inference_in_parallel_using/
false
false
default
3
null
llama.cpp can now train?
37
Looks like we just got some support for training in llama.cpp! Not sure what I'm doing wrong but it's crashing for me. Anyone else tried it and got any success? [https://github.com/ggerganov/llama.cpp/tree/master/examples/train-text-from-scratch](https://github.com/ggerganov/llama.cpp/tree/master/examples/train-text-from-scratch)
2023-06-13T23:32:52
https://www.reddit.com/r/LocalLLaMA/comments/148st5r/llamacpp_can_now_train/
stonegdi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148st5r
false
null
t3_148st5r
/r/LocalLLaMA/comments/148st5r/llamacpp_can_now_train/
false
false
self
37
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Any way to adjust GPT4All 13b I have 32 Core Threadripper with 512 GB RAM but not sure if GPT4ALL uses all power? Any other alternatives that are easy to install on Windows?
2
Is there any way to tune GPT4All 13B? I have a 32-core Threadripper with 512 GB RAM, but I'm not sure GPT4All actually uses all of that power. Are there any other alternatives that are easy to install on Windows? Ideally I would like the most powerful AI chat I can run, connected to Stable Diffusion, on my machine (32-core Threadripper, 512 GB RAM, RTX 3070 8GB). I would love to stay with Windows 11 and avoid Linux.
2023-06-13T23:38:34
https://www.reddit.com/r/LocalLLaMA/comments/148sx4w/any_way_to_adjust_gpt4all_13b_i_have_32_core/
SolvingLifeWithPoker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148sx4w
false
null
t3_148sx4w
/r/LocalLLaMA/comments/148sx4w/any_way_to_adjust_gpt4all_13b_i_have_32_core/
false
false
self
2
null
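On the thread question in the post above: if the desktop app doesn't give you control over CPU threads, a LLaMA-family GGML checkpoint like the 13B one can usually also be loaded through llama-cpp-python, where the thread count is an explicit argument. A minimal sketch, with a placeholder model path:

```python
# Minimal sketch: run a LLaMA-family GGML model with an explicit CPU thread count
# via llama-cpp-python. The model path is a placeholder; whether a given GPT4All
# checkpoint loads this way depends on it being a LLaMA-architecture GGML file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gpt4all-13b-snoozy.ggmlv3.q4_0.bin",  # placeholder filename
    n_ctx=2048,
    n_threads=32,  # pin to your physical core count; more threads is not always faster
)

out = llm("### Instruction: Summarize why thread count matters.\n### Response:", max_tokens=64)
print(out["choices"][0]["text"])
```

Note that token generation is largely memory-bandwidth bound, so on a machine like this the thread count often matters less than RAM speed.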
Multimodal models and "active" learning
5
For the former, there's already projects like: https://panda-gpt.github.io/ https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM https://github.com/Luodian/Otter and others (although they're usually bimodal, if that's the right word). I bring this up because many want to compete with GPT-4 but seem to forget it's multimodal. And of course, we might never reach its level soon- but multimodality still looks promising. Maybe it is the case language is all we need- but more work or activity on multimodality would be interesting, like a llamacpp, ggml, and quantization for multimodal models. Maybe give it a few months? I'm not sure about the "active" learning being done right now, but what I mean is it just seems people are so focused on the "frozen" intelligence of LMs. Although to be fair maybe in context learning with a much bigger context window and others I'm missing may be good work for now. (Note: speaking as a layman so forgive me if I sound silly but) with ideas from task vectors (https://arxiv.org/pdf/2212.04089.pdf) and Loras, maybe a thing can be done? Maybe simulating a stream of memory, some threshold can be reached that changes weights or at least some "adapters" on the model? Ofc assuming this process becomes fast enough (with all the advancements going on). Now I'm not saying that people SHOULD do these RIGHT NOW or complaining- understanding that this is cutting edge, at the very least hard work and apologies if I sound demanding. But with that said, I'll just put this thought out there. I wish I was as capable enough to try something or know when something can work instead of just come off as whining about it- all I've done so far was stuff like modifying PandaGPT to run on only CPU- could only try 7b and though it worked it wasn't that impressive probably because of my setup (still trying others). For LMs though, I am still excited for work being done on context length although unable to run them as is rn. Though as I post this, this just came out https://old.reddit.com/r/LocalLLaMA/comments/148prx3/landmark_attention_oobabooga_support_gptq/ so that's something to do after I make this post. At the end of the day, I'm still really grateful to be able to run LMs at all and hope that every work on local models continues despite any moats or whatnot!
2023-06-14T00:18:36
https://www.reddit.com/r/LocalLLaMA/comments/148tpun/multimodal_models_and_active_learning/
reduserGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148tpun
false
null
t3_148tpun
/r/LocalLLaMA/comments/148tpun/multimodal_models_and_active_learning/
false
false
self
5
null
Best approach to local LLMs for a journal?
13
My partner and I built an [open source](https://github.com/ocdevel/gnothi) robo-journal [Gnothi](https://gnothiai.com). Started in 2019 with summarization, recurring themes, book recommendations, behavior tracking. Soon as GPT came along, you bet your butt I integrated. But privacy. It's a journal. I trust OpenAI with mine, but not everyone does. And frankly, I'd love to be coached by [Wizard-Vicuna\_Uncensored\_Uncut\_Extreme\_Chaos\_non-GMO\_Organic-GGML](https://www.reddit.com/r/LocalLLaMA/comments/13pqj3j/meanwhile_here_at_localllama/). I want users to be able to choose between OpenAI; a Gnothi-hosted LLM; or BYO-model, using an IP / ngrok. **The easy question (maybe?):** For users to self-host, what's the easiest approach? Some sort of local service which exposes an API similar to OpenAI. Oobabooga? Is it reasonable to expect, if they've setup Oobabooga, they can setup Ngrok for Gnothi to webhook? Once I land on a decent one-size-fits-all, I'll write a Github Wiki for these users. **The tall order:** For Gnothi-hosted, I'd need a model which (all 3, even I have to wait on upcoming SOTA): 1. Fits on Lambda or SageMaker Serverless. <=10GB RAM. So I'm thinking 7B quantized. 2. Is robust / quality enough to compete with GPT 3.5. [This is the master-prompt](https://github.com/ocdevel/gnothi/blob/main/sst/services/ml/node/summarize.ts#L54) which the model would have to be able to handle. 3. Is safe enough for the headspace in which someone is journaling. If all 3 can't be achieved, I'll just lean into the DIY side. If you want it enough, you'll know what you're doing, and I'll get hands-dirty with ya If you're interested in using it but not keen on OpenAI: use the free version. Free uses pre-trained Huggingface, premium uses GPT. If you're interested on the code side, hit me up - it'll give me incentive to bring the README back to life. All the action's in the \`sst/\` folder (all other top-level folders are old code). My biggest problem is that it uses SST (AWS, CDK) and a real RDS database and VPC with Nat Gateway, even in dev-mode - a minimum of $70/m. So I need a way to Dockerize parts of this for tinkering. In the meantime, if there's low-hanging fruit, just Github-issue me and I'll test changes on my end.
2023-06-14T00:29:18
https://www.reddit.com/r/LocalLLaMA/comments/148txkx/best_approach_to_local_llms_for_a_journal/
lefnire
self.LocalLLaMA
2023-06-14T03:03:15
0
{}
148txkx
false
null
t3_148txkx
/r/LocalLLaMA/comments/148txkx/best_approach_to_local_llms_for_a_journal/
false
false
self
13
{'enabled': False, 'images': [{'id': 'sQUuNLY1QJMR87V9uBaSdK8B_Agvap2O4wbDGbhU0-s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=108&crop=smart&auto=webp&s=84e68237e5a5b6bea78426fe9b1f312334699bef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=216&crop=smart&auto=webp&s=46c86a5c8d07b6336206c2f144efed2b52b690f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=320&crop=smart&auto=webp&s=88311bb5d2ebbd44be7cf0bfacc9d2875b751882', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=640&crop=smart&auto=webp&s=ebfcf159f7a82b6307653e64b54899fb706fbac0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=960&crop=smart&auto=webp&s=3ab22003a650e62928a3d5748ba5ad178e4d63a3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?width=1080&crop=smart&auto=webp&s=364ddd46c3bacfa48d7c47989e4f2e980b6fc144', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fBX-UDBiRtGuefop0-aEkNVx6wOkHiVyH5cgfnaEy5A.jpg?auto=webp&s=b39861cde4ce7ea9086b845dfc549842da2e78b2', 'width': 1200}, 'variants': {}}]}
Alternative download links
1
[removed]
2023-06-14T02:22:43
https://www.reddit.com/r/LocalLLaMA/comments/148w2cu/alternative_download_links/
Yip37
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148w2cu
false
null
t3_148w2cu
/r/LocalLLaMA/comments/148w2cu/alternative_download_links/
false
false
default
1
null
Simple LLM Performance Benchmarking Util utilizing the oobabooga web API
21
Hey everyone, I've created a simple performance benchmarking utility using the oobabooga Text Generation Web API.

Repo: [oobabooga-benchmark](https://github.com/traumahound86/oobabooga-benchmark)

After getting bitten by the [changes in version 535.98 of the nVidia drivers](https://www.reddit.com/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/), I thought it'd be a good idea to have a simple and repeatable way to check and save performance metrics. This utility will send one or more instruction prompts to oobabooga through its web API and output and save the results (ex. total time taken, total tokens generated, tokens per second, etc.).

In its basic form, the utility is simple to use:

python benchmark.py --infile prompt.txt

Run the utility on a different machine than the oobabooga server. Pass in a static seed value for repeatable tests. Benchmark multiple prompt files at once:

python benchmark.py --host 192.168.0.10 --seed 600753398 --tokens_per_gen 250 --infile prompt01.txt prompt02.txt prompt03.txt

**Protip** Shell globbing multiple files works, and since the benchmarking util also saves the generated output in addition to the performance metrics, it can be used to generate batches of output with no additional interaction.

python benchmark.py --host 192.168.0.10 --infile mystories*.txt mypoems*.txt mysongs*.txt

There are many additional (optional) flags to alter the behaviour of the utility and API (ex. temperature, output directory, tokens per generation, etc.) documented on the GitHub page. I'm definitely open to feedback and/or suggestions for improvements, bug fixes, etc. Hopefully someone finds this tool useful.
2023-06-14T02:26:15
https://www.reddit.com/r/LocalLLaMA/comments/148w4ry/simple_llm_performance_benchmarking_util/
GoldenMonkeyPox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148w4ry
false
null
t3_148w4ry
/r/LocalLLaMA/comments/148w4ry/simple_llm_performance_benchmarking_util/
false
false
self
21
{'enabled': False, 'images': [{'id': '4Zzz0mayEAE8fC5P7jtkzPi2mVVpArA1WN5OOTBlJYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=108&crop=smart&auto=webp&s=8ead88b6982ad32cb2032d8e59c635a0989e5cc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=216&crop=smart&auto=webp&s=b6fa613caed1f37b17807e0053f51e0e5e96ddcc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=320&crop=smart&auto=webp&s=dc9979e05ecc88752289d377a5f5dd6aed4f496d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=640&crop=smart&auto=webp&s=236f2463015e611cf54adea570882da60ae58659', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=960&crop=smart&auto=webp&s=6fb4e596af41f2d9409350e30c31c1b7706d1a6a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?width=1080&crop=smart&auto=webp&s=c126a7e30e39765f1f842017bea9f59ee278f4c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mGhDGd4MLuNpuUOYVz55cZSkfxDwJjKsHWEMU8H9FbM.jpg?auto=webp&s=dbbcedba0322004a8d6cf7065aa4d6b5d930fb0e', 'width': 1200}, 'variants': {}}]}
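For readers who want to see roughly what one request to that API looks like without the wrapper, here is a sketch. The endpoint path, port, and payload keys are assumptions about how the text-generation-webui API extension was commonly exposed around this time, not something taken from the benchmark repo, so check your own installation (and the repo above) for the exact parameters.

```python
# Rough sketch of a single blocking generation request against the
# text-generation-webui API. NOTE: the URL, port, and payload field names below
# are assumptions, not taken from the oobabooga-benchmark repo; verify them
# against your own oobabooga setup before relying on this.
import requests

payload = {
    "prompt": "### Instruction: Write a story about llamas\n### Response:",
    "max_new_tokens": 250,
    "temperature": 0.7,
    "seed": 600753398,  # fixed seed for repeatable runs, as in the post above
}

resp = requests.post("http://192.168.0.10:5000/api/v1/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json())
```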
Your painpoints in building/using Local LLMs
1
[removed]
2023-06-14T02:46:41
https://www.reddit.com/r/LocalLLaMA/comments/148win2/your_painpoints_in_buildingusing_local_llms/
Latter-Implement-243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
148win2
false
null
t3_148win2
/r/LocalLLaMA/comments/148win2/your_painpoints_in_buildingusing_local_llms/
false
false
default
1
null
Useful Links and Info
39
[removed]
2023-06-14T03:27:27
[deleted]
2023-06-14T03:33:49
0
{}
148xacr
false
null
t3_148xacr
/r/LocalLLaMA/comments/148xacr/useful_links_and_info/
false
false
default
39
null
Forgive my ignorance, but is crowdsourced networked GPU processing feasible?
1
[deleted]
2023-06-14T04:30:23
[deleted]
1970-01-01T00:00:00
0
{}
148yff7
false
null
t3_148yff7
/r/LocalLLaMA/comments/148yff7/forgive_my_ignorance_but_is_crowdsourced/
false
false
default
1
null
16k context for OpenAI GPT-3.5 API
74
Looks like OpenAI just upped the context length for gpt-3.5-turbo and made some other updates to make it easier to integrate with other applications.

[Function calling and other API updates (openai.com)](https://openai.com/blog/function-calling-and-other-api-updates)

* new function calling capability in the Chat Completions API
* updated and more steerable versions of gpt-4 and gpt-3.5-turbo
* new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
* 75% cost reduction on our state-of-the-art embeddings model
* 25% cost reduction on input tokens for gpt-3.5-turbo
* announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
2023-06-14T04:44:31
https://www.reddit.com/r/LocalLLaMA/comments/148yoo4/16k_context_for_openai_gpt35_api/
noco-ai
self.LocalLLaMA
2023-06-14T04:50:44
0
{}
148yoo4
false
null
t3_148yoo4
/r/LocalLLaMA/comments/148yoo4/16k_context_for_openai_gpt35_api/
false
false
self
74
{'enabled': False, 'images': [{'id': '9DbbdjKChgxgpk85RvWkYY-sPol1aDoPYeUX07sqagA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=108&crop=smart&auto=webp&s=85d6ab49c24a9caab2287f2df04f1bcafac79db4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=216&crop=smart&auto=webp&s=6e97adf939b8013963f8ca5d50d0233faa5921bf', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=320&crop=smart&auto=webp&s=482f69c2cb28fd9c61367e2463e17e0054c7301b', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=640&crop=smart&auto=webp&s=24ca881245cde17e486b4596b7d10b30534df708', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=960&crop=smart&auto=webp&s=57fd3f6d8874f2a6f11daa3e4fb5b540325d4c08', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?width=1080&crop=smart&auto=webp&s=37d674c99b69ea0c836a41f1fefe92c7394d0a7c', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://external-preview.redd.it/EcmqnsLf0EsGJhzQ8-XKBeyseSQajVJjhVnMdXtdq_E.jpg?auto=webp&s=fde5b948b4f1deeabe69cb002890fdeb34b08cc8', 'width': 4096}, 'variants': {}}]}
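To make the two headline changes concrete, here is a minimal sketch using the openai Python package as it existed at the time (the pre-1.0 `openai.ChatCompletion` interface). The function schema is an invented example for illustration, not something from the announcement, and the API key is a placeholder.

```python
# Minimal sketch: the 16k-context model and the new function-calling parameters,
# using the pre-1.0 openai Python package (openai.ChatCompletion).
import openai

openai.api_key = "sk-..."  # placeholder

functions = [{
    "name": "get_weather",  # illustrative function, not from the announcement
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # new 16k-context variant
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    print("Model wants to call:", message["function_call"])
else:
    print(message["content"])
```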
30b models super slow webui 4090
1
[removed]
2023-06-14T06:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1490633/30b_models_super_slow_webui_4090/
fractaldesigner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1490633
false
null
t3_1490633
/r/LocalLLaMA/comments/1490633/30b_models_super_slow_webui_4090/
false
false
default
1
null
Honkware/Manticore-13b-Landmark-GPTQ · Hugging Face
11
2023-06-14T06:45:37
https://huggingface.co/Honkware/Manticore-13b-Landmark-GPTQ
glowsticklover
huggingface.co
1970-01-01T00:00:00
0
{}
1490qch
false
null
t3_1490qch
/r/LocalLLaMA/comments/1490qch/honkwaremanticore13blandmarkgptq_hugging_face/
false
false
https://b.thumbs.redditm…frrG42blk8tw.jpg
11
{'enabled': False, 'images': [{'id': 'gfzA01Jl7gabQOK32OfizGoVb5tlu8m5ffyF_ox7om0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=108&crop=smart&auto=webp&s=5620d15d5450ac6f9d4d0a0aae8ff93634a34ab1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=216&crop=smart&auto=webp&s=17abaae64b3ab5ad9fda4b320b97fb10352aa987', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=320&crop=smart&auto=webp&s=a2dc350d7c8459b597f1291bcd440fe0b38be4a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=640&crop=smart&auto=webp&s=01db63701106ccdb06e7ca303889bf8d46a69f00', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=960&crop=smart&auto=webp&s=c1390da794b2dd5272c141358a71f2a3d4ea26ec', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?width=1080&crop=smart&auto=webp&s=3a148e7c96a51e32765a037e475e32e6d67c0e84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cO67hNiSJ4d85G8idPm4CzH64r-a3wfdaSt7uUxptrU.jpg?auto=webp&s=6e2a4d6bc40f0f028407c9e226a1e0d3ef1215d0', 'width': 1200}, 'variants': {}}]}
NO issues loading Q4 K_S model but cant load Q3 K_S model, get this error
1
[removed]
2023-06-14T07:17:21
https://www.reddit.com/r/LocalLLaMA/comments/14918zm/no_issues_loading_q4_k_s_model_but_cant_load_q3_k/
Equal-Pilot-9592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14918zm
false
null
t3_14918zm
/r/LocalLLaMA/comments/14918zm/no_issues_loading_q4_k_s_model_but_cant_load_q3_k/
false
false
default
1
null
Help a beginner
2
I've searched through this sub and the llama.cpp GitHub but nothing seems to help. Maybe it's just some trivial thing I'm missing. I was trying to run Vicuna through llama.cpp on an Azure VM (Standard D4ds v4, 4 vCPUs, 16 GiB memory, 150 GB storage). I followed the installation guide from this sub.

This is where I downloaded the model from: [https://huggingface.co/vicuna/ggml-vicuna-7b-1.1/tree/main](https://huggingface.co/vicuna/ggml-vicuna-7b-1.1/tree/main)

And when I try to run it, this is what I get:

user@temp:~/llama.cpp$ ./main -m ./models/7B/ggml-vic7b-q4_0.bin -n 128
main: build = 669 (9254920)
main: seed = 1686726150
llama.cpp: loading model from ./models/7B/ggml-vic7b-q4_0.bin
error loading model: unexpectedly reached end of file
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/7B/ggml-vic7b-q4_0.bin'
main: error: unable to load model

There were things about converting and quantizing which I really don't understand, but I tried them anyway to no avail, only more errors. Please help.
2023-06-14T07:24:41
https://www.reddit.com/r/LocalLLaMA/comments/1491d7a/help_a_beginner/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1491d7a
false
null
t3_1491d7a
/r/LocalLLaMA/comments/1491d7a/help_a_beginner/
false
false
self
2
{'enabled': False, 'images': [{'id': 'KJJFb_vYzt3LgoIp4piANHHDFm2Fi9VkonZzVdjEgVA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=108&crop=smart&auto=webp&s=8cda09ab5c77cdcf284e7f085d139a72ac86bac3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=216&crop=smart&auto=webp&s=86fde117f5379c7df0dbbc434aaa7c0771a92ce9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=320&crop=smart&auto=webp&s=0e69b61be1f991d8c667291896f9ce716e73023a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=640&crop=smart&auto=webp&s=8bd8cd26283c3e890f7b2e2273e714df2bb45143', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=960&crop=smart&auto=webp&s=afbed4721bff1979ec8adfc199cb0b6021782a46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=1080&crop=smart&auto=webp&s=2ee64a7edaa0c4f582de40db45853578fbf2d312', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?auto=webp&s=fc09d54412f3392d664111a799e57b0f44b048db', 'width': 1200}, 'variants': {}}]}
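A note on the error above: "unexpectedly reached end of file" from llama.cpp usually means the .bin on disk is incomplete or truncated (a failed or partial download) rather than something that conversion or quantization would fix. One low-effort check is to re-fetch the file through huggingface_hub, which caches and resumes downloads; the repo id and filename below are taken from the post and may have moved since.

```python
# Sketch: re-download the GGML file with huggingface_hub and sanity-check its size.
# Repo id and filename come from the post above and may no longer exist there.
import os
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vicuna/ggml-vicuna-7b-1.1",
    filename="ggml-vic7b-q4_0.bin",
)
size_gib = os.path.getsize(path) / 1024**3
print(f"{path}: {size_gib:.2f} GiB")  # a 7B q4_0 GGML file should be roughly 4 GiB
```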
Tiny models for contextually coherent conversations?
8
I'm looking for a small (maybe less than 1B) GGML model that can hold a simple conversation with good contextual coherence, so it should remember the history and understand context. No general knowledge required, just that. Any suggestions?
2023-06-14T07:58:10
https://www.reddit.com/r/LocalLLaMA/comments/1491wg6/tiny_models_for_contextually_coherent/
Amazing_Sentence5393
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1491wg6
false
null
t3_1491wg6
/r/LocalLLaMA/comments/1491wg6/tiny_models_for_contextually_coherent/
false
false
self
8
null
Introducing my 'VowelReconstruct' Method - A Tangible Test for Comparing LLM's General Intelligence
31
TL;DR I have created a test method I call "VowelReconstruct", where texts with almost all vowels removed are presented to language models, and their job is to reconstruct the original text. I am very excited to introduce it to you. My approach is interesting because the model needs various cognitive capabilities at the same time to be able to achieve this task. The result is evaluated by comparing the reconstructed text to the original text using two metrics, the Levenshtein distance and a simple character-based similarity score. After that I calculate a new score (let's call it symscore), which provides insights into the performance of different language models and helps assess their intelligence. This method aims to provide a practical way of assessing and comparing language models' intelligence.

I've also decided to start my own blog, where you can read more about the method if you are interested: https://publish.obsidian.md/mountaiin/VowelReconstruct

---

Here you'll find the files, if you want to use this method too: https://github.com/mounta11n/VowelReconstruct

---

And here you can see what some of my results look like:

| Name | Size | Specifications | Similarity | Levenshtein | Symscore |
|:------------:|:-------:|:--------------:|:----------:|:-----------:|:----------:|
| Guanaco | 7B | */* | 39.81% | 151 | 379.29 |
| WizardLM | 7B | q40 | 42.36% | 194 | 457.90 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| Vicuna | 13B | q41_v3 | 44.90% | 109 | 242.76 |
| Vicuna | 13B | q41_v3 | 57.64% | 29 | 50.31 |
| Vicuna | 13B | q6k | 51.27% | 190 | 370.82 |
| WizardLM | 13B | q40_v3 | 51.27% | 41 | 79.95 |
| WizardLM | 13B | q40_v3 | 51.59% | 31 | 60.09 |
| WizardLM | 13B | q40_v3 | 51.27% | 42 | 81.92 |
| WizardLM | 13B | q40_v3 | 50.00% | 29 | 58.00 |
| WizardLM | 13B | q4km | 57.96% | 34 | 58.64 |
| WizardLM | 13B | q6k | 55.73% | 41 | 73.52 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| based | 30B | q40_v3 | 53.50% | 108 | 201.87 |
| LLaMA | 30B | s-hotcot | 67.20% | 83 | 123.45 |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| **Guanaco** | **65B** | **q40_v3** | **99%** | **2** | **2.01** |
| **--------** | **---** | **------** | **------** | **---** | **---** |
| Claude+ | */* | 100k | 93% | 12 | 12.90 |
| GPT-3.5 | */* | */* | 96.18% | 12 | 12.48 |
| GPT-4 | */* | */* | 97.77% | 2 | 2.04 |

EDIT: I maybe should have mentioned, as I noted in the article: " … a test that I believe is very meaningful and suitable **for everyday use**. That means this test is more "tangible" and has a direct realistic value. It is **not** a matter of measuring something **exactly** to a certain decimal place, … "

So, in other words, this method/test does not aim to replace SOTA tests or to be placed in the same categories as already existing tests. It also does not declare itself to be a highly scientific one. This test aims to be a tool for the average user and for everyday life. This means that this test addresses the problem that there is a lack of meaningful tests that are 1) easy and **quick** to use, 2) easy to understand, 3) reliable and valid enough for hobby research, and, above all, 4) really applicable within the time and technical constraints of an average citizen. This further fills a niche area that has so far only been able to offer meagre resources. In view of this requirement, it is only a logical consequence that this test should not be compared with the big well-known tests.

EDIT EDIT: Typos etc
2023-06-14T08:11:29
https://www.reddit.com/r/LocalLLaMA/comments/14924se/introducing_my_vowelreconstruct_method_a_tangible/
Evening_Ad6637
self.LocalLLaMA
2023-07-16T09:04:32
0
{}
14924se
false
null
t3_14924se
/r/LocalLLaMA/comments/14924se/introducing_my_vowelreconstruct_method_a_tangible/
false
false
self
31
{'enabled': False, 'images': [{'id': 'Oivw5lBZ_Xvm4N55tuIVtXcGjiWjMahJK6a6LZVFZkI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=108&crop=smart&auto=webp&s=e77fa167fe3c65645b1c59f1d803014688367ff1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=216&crop=smart&auto=webp&s=b4c64edc16bd4445781aa8645b537b7c1d36ac05', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=320&crop=smart&auto=webp&s=eca3c8298dcba4147ac4019b3d13ecbaaee665ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=640&crop=smart&auto=webp&s=fc5cbe00a5581e44fff6fdcf00327fdd9db92e86', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=960&crop=smart&auto=webp&s=5358f46424fa1ebab9289c74131b8aa5aa8dba52', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?width=1080&crop=smart&auto=webp&s=c9d5a04c0b51000f8f76911449d427a0fc81e5c6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/b8ZbNIEblYHwyDGJGU6o7reRPvZmO6vdttXbOtiJt9Y.jpg?auto=webp&s=0ec443d8adca0701bca99cee788cebc4e31449b8', 'width': 1200}, 'variants': {}}]}
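For readers who want to reproduce the scoring without digging into the repo: the numbers in the table above are consistent with symscore being the Levenshtein distance divided by the similarity expressed as a fraction (for example 151 / 0.3981 ≈ 379.3), though the authoritative definition lives in the linked repository. Below is a small sketch of that evaluation step; the character-based similarity here uses difflib's ratio, which may not match the author's exact measure.

```python
# Sketch of the evaluation step: character-level similarity, Levenshtein distance,
# and a combined score. The similarity measure (difflib ratio) is an assumption;
# the symscore formula is inferred from the table above (distance / similarity).
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def score(original: str, reconstructed: str) -> dict:
    similarity = SequenceMatcher(None, original, reconstructed).ratio()
    distance = levenshtein(original, reconstructed)
    return {
        "similarity": round(similarity * 100, 2),
        "levenshtein": distance,
        "symscore": round(distance / similarity, 2) if similarity > 0 else float("inf"),
    }

print(score("The quick brown fox jumps over the lazy dog",
            "The quick brown fox jumps over a lazy dog"))
```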
SlimPajama: A 627B token cleaned and deduplicated version of RedPajama
46
2023-06-14T08:11:39
https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
baconwasright
cerebras.net
1970-01-01T00:00:00
0
{}
14924w1
false
null
t3_14924w1
/r/LocalLLaMA/comments/14924w1/slimpajama_a_627b_token_cleaned_and_deduplicated/
false
false
https://b.thumbs.redditm…IYGLsv6WIXYM.jpg
46
{'enabled': False, 'images': [{'id': 'I4vZeqDJ34df6oqvOcDRwRvFQVJ55B9iedovy4BjqKU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=108&crop=smart&auto=webp&s=76593e6f5cf714b379ef1f5e5eacf58e9f16a119', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=216&crop=smart&auto=webp&s=747abdae1026ccae000a6ce552e2605823a3f678', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=320&crop=smart&auto=webp&s=7da96e90977ad255fe023e8f8c1550b6d54da161', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=640&crop=smart&auto=webp&s=b3cf1cbb4f864f852629fd53480cb78b8fa02867', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?width=960&crop=smart&auto=webp&s=27196d66a1d7e18fbf3a3ff1a97c49735b561d84', 'width': 960}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/J_Y4yH25SQxQvvybsOT6njqXQ5qTE5B4FAXcCLMtSos.jpg?auto=webp&s=2569800299f88b480de75ac449a3e4f1d1ebbc91', 'width': 1018}, 'variants': {}}]}
Which model has the highest token limit?
3
Just getting into this, so pardon my question, but which model has the highest token limit AND is closest to GPT 3.5 in terms of chatbot mode?
2023-06-14T09:10:10
https://www.reddit.com/r/LocalLLaMA/comments/149313m/which_model_has_the_highest_token_limit/
cool-beans-yeah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149313m
false
null
t3_149313m
/r/LocalLLaMA/comments/149313m/which_model_has_the_highest_token_limit/
false
false
self
3
null
This is getting really complicated.
229
I wish the whole LLM community (as well as stable diffusion) would iron out some of the more user-unfriendly kinks. Every day you hear some news about how the stochastic lexical cohesion analysis or whatever has improved tenfold (but no mention of what it does or how to use it). Or you get oobabooga to run locally only to be met with a ten page list of settings where the deep probabilistic syntactic parsing needs to be set to 0.75 with latent variable models but **absolutely not** for hierarchical attentional graph convolutional networks or you'll break your computer (with no further details). If you have any questions you're expected to already know how to code and you need to parse five git repositories for error messages where the answers were outdated a week ago. I'm just saying... We need to simplify this for the average user and have an "advanced" button on the side instead of the main focus. Edit: Some of you are going "well, it's very bleeding edge tech so of course it's going to be complicated but I agree that it could be a bit easier to parse once we've joined together and worked on it as a community" and some of you are going "lol smoothbrain non-techie, go use ChatGPT dum fuk settings are supposed to be obtuse because we're progressing *science* what have *u* done with your life?" One of these opinions is correct. Edit2: Here's a point: it's perfectly valid to work on the back end and the front end of a product at the same time. Just because the interface is (let's face it) unproductive, doesn't mean you can't work on that while also still working on the nitty gritty of machine learning or coding. Saying "it's obtuse" is not the same as saying "there's no need to improve." How many people know each component and function of a car? The user just needs to gas and steer, that doesn't mean car manufacturers can't iterate on and improve the engine.
2023-06-14T09:33:30
https://www.reddit.com/r/LocalLLaMA/comments/1493et3/this_is_getting_really_complicated/
Adkit
self.LocalLLaMA
2023-06-15T10:06:24
0
{}
1493et3
false
null
t3_1493et3
/r/LocalLLaMA/comments/1493et3/this_is_getting_really_complicated/
false
false
self
229
null
Local models on laptops: AMD 6900HS/32GB/Nvidia 3050TI 4GB vs. Apple M2/24GB vs.
9
Dear respected community, In awe of recent developments I, like many, wonder how I can most effectively run models on end-user laptop hardware for personal use. I will be upgrading my laptop to either a Asus X13 with AMD 6900HS, 32GB RAM (LPDDR5 6400), Nvidia 3050TI or a MacBook Air with M2, 24GB RAM, 8 GPU Given GGML has shown promise with GPU offloading, is it reasonable to assume I can run 30/33B models on the Asus X13? I suppose it would be preferable to the MacBook. By the way: Thank you for everything and everyone bringing forward local LLMs! **EDIT 2:** I actually got both laptops at very good prices for testing and will sell one - I'm still thinking about which one. **Testing the Asus X13, 32GB LPDDR5 6400, Nvidia 3050TI 4GB vs. the MacBook Air 13.6'', M2, 24GB, 10 Core GPU** * **In the end, the MacBook is clearly faster with 9.47 tokens per second. The Asus X13 runs at 5.2 tokens per second.** * **However, the MacBook only runs q4\_0 models at the moment and most 13B models can be run; on the Asus, 30/33B models can be run.** * **EDIT 3: smaller 33B models run on the MacBook as well using kobold.cpp - at the moment without GPU/Metal/OpenCL:** * **vicuna-33b-preview.ggmlv3.q3\_K\_S.bin with 1.6 tokens per second** * **It also runs models of other architectures not supported by llama.cpp on MacOS with Metal, e.g.,** * **WizardCoder-15B-1.0.ggmlv3.q4\_0 with 4 tokens per second** * **starchat-beta.ggmlv3.q4\_0 both with 4.3 tokens per second** * **With the 4GB Nvidia GPU the Asus is 16% faster compared to CPU only.** Note: * Testing with wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin (reason: MacBook with llama.cpp and metal supports only q4\_0 (and certain others) at this time). * On the Asus X13 with CUDA max. 14 of 43 layers could be offloaded to the 4GB GPU * On the Asus X13 with OpenCL max. 
19 of 43 layers could be offloaded to the 4GB GPU * OS: Pop OS / Ubuntu 22.04 and macOS 13.4.1 * llama.cpp, compiled from Git repository on 2023-06-23 Results for wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0: * Asus X13, CUDA, 14/43 layers: 5.0 tokens per second * **Asus X13, OpenCL, 19/43 layers: 5.2 tokens per second** * Asus X13, CPU only: 4.5 tokens per second * **MacBook M2, Metal: 9.47 tokens per second** Asus X13, CUDA, 14/43 layers: llama.cpp-cuda/main -t 16 -ngl 14 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:" llama\_print\_timings: prompt eval time = 1107.44 ms / 17 tokens ( 65.14 ms per token, 15.35 tokens per second) llama\_print\_timings: eval time = 80015.33 ms / 402 runs ( 199.04 ms per token, 5.02 tokens per second) llama\_print\_timings: total time = 81372.60 ms Asus X13, CPU 6900HS: llama.cpp/main -t 16 -ngl 0 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:" llama\_print\_timings: prompt eval time = 1234.49 ms / 17 tokens ( 72.62 ms per token, 13.77 tokens per second) llama\_print\_timings: eval time = 74731.55 ms / 337 runs ( 221.76 ms per token, 4.51 tokens per second) llama\_print\_timings: total time = 76175.37 ms X13, OpenGL, 19/43 Layers: llama.cpp/main -t 16 -ngl 19 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:" llama\_print\_timings: prompt eval time = 1303.10 ms / 17 tokens ( 76.65 ms per token, 13.05 tokens per second) llama\_print\_timings: eval time = 82414.65 ms / 430 runs ( 191.66 ms per token, 5.22 tokens per second) llama\_print\_timings: total time = 83987.24 ms X13, CPU 6900HS, with OpenBLAS: llama.cpp/main -t 16 -b 512 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:" llama\_print\_timings: prompt eval time = 1215.86 ms / 17 tokens ( 71.52 ms per token, 13.98 tokens per second) llama\_print\_timings: eval time = 124515.19 ms / 564 runs ( 220.77 ms per token, 4.53 tokens per second) llama\_print\_timings: total time = 126082.17 ms MBA, M2, Metal, 10c GPU llama.cpp/main -t 10 -ngl 1 -m wizardlm-13b-v1.0-uncensored.ggmlv3.q4\_0.bin --color -c 2048 --temp 0.7 --repeat\_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\\n### Response:" llama\_print\_timings: prompt eval time = 18236,60 ms / 17 tokens ( 1072,74 ms per token, 0,93 tokens per second) llama\_print\_timings: eval time = 40985,60 ms / 388 runs ( 105,63 ms per token, 9,47 tokens per second) llama\_print\_timings: total time = 59520,43 ms (**EDIT 1**: Summary of responses Thank you so much for all the comments and real-world performance measurements! I'm leaning towards the X13 since it does not seem to be slower in terms of processor and RAM performance compared to M2, has more RAM, has an additional GPU. Further, it seems the 4GB GPU will provide little performance benefit over the 6900HS alone, but it will be a few percent and the card is a nice addition in any case. The Asus X13 should run most models in GGML format with 13B or 30B and quantization. 
The MacBook Air should run most models in GGML format or other formats with 13B (and possibly with heavy quantization barely also larger models); i.e., not only GGML can be run as up to half of the integrated RAM can be utilized for the GPU (that is, the MacBook with 24GB of RAM is somewhat comparable to having \~12GB VRAM). Other suggestions, possibly very well suited, are the Asus G14 (more VRAM, faster GPU/CPU, however, larger and heavier than the small Asus X13), a MacBook with M1, M1 Pro, M1 Max or M2 32 GB (or even 64GB should be speedy for LLMs, in M1 Max especially due to very good memory bandwidth, possibly a used one is not that expensive), other 15 inch laptops (which are, for me, too large and heavy when traveling). All of the above assumes casual use (question, answer, question, ... scenario without batch queries, learning etc.) of models in GGML format. For other formats, either lots of VRAM or a MacBook with >= 32GB seem to be required at a minimum. Note also this is a rough characterization based on the current state - implementations could change and allow for better performance when using AMD, optimizations for Nvidia are not integrated everywhere at this point, and Macs could also see increases in performance with more flexible offloading / possibly being able to utilize more than 1/2 of RAM for GPU in the future. I will report back on the results once I have bought a laptop and have it up and running with Ubuntu 22.04 (or Pop!\_OS / Mint).) Regards Felix
2023-06-14T09:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1493fes/local_models_on_laptops_amd_6900hs32gbnvidia/
bitangular
self.LocalLLaMA
2023-06-25T13:33:55
0
{}
1493fes
false
null
t3_1493fes
/r/LocalLLaMA/comments/1493fes/local_models_on_laptops_amd_6900hs32gbnvidia/
false
false
self
9
null
How context building works in very first run?
1
Using Oobabooga's WebUI I can see that any chat that includes a prompt takes much longer than any other. I presume this is not specific to Oobabooga's WebUI, and not about GGML vs GPTQ (GPTQ-for-llama), as I have tried both.

This is how I see the process:

1. As I understand it, we have "precompiled" parameters, each grouped as a set of neurons
2. The program loads those groups into memory
3. We "feed" them the text, which turns into "tokens"
4. The program consecutively transfers the processed "tokens" through the next neurons, loading one group after another
5. Eventually, the processed "tokens" turn back into text at the last layers

When we first run a question which has an additional prompt, it should also be tokenized, and that is why it is so slow. But how exactly does tokenization (if my understanding is right) happen? It takes significantly more time to tokenize the text than to process the resulting tokens through the whole network. Is it really that resource-intensive? What can I read to understand this part? What processes are going on there?

Because once my first question (with the additional prompt) has been processed and a second question comes in, everything responds dramatically faster. That probably means adding my question to the context happens quickly and without a problem. But when the context reaches its limit and is truncated, it becomes as slow as it was the first time. Would that mean the text comes in as one chunk, and adding a word is harder than removing older parts? Does the whole text have to be decoded back and then reassembled?

I have seen diagrams of the architecture showing that token embeddings are merged into a token matrix. But how expensive is that merging process, and what is happening in there? I wonder if it can be done by separate programs or neural networks. That way I could, for example, feed in preprocessed context and work with documentation/books/articles. But an overall and complete understanding is much more important to me.
2023-06-14T09:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1493mbp/how_context_building_works_in_very_first_run/
Accomplished_Bet_127
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1493mbp
false
null
t3_1493mbp
/r/LocalLLaMA/comments/1493mbp/how_context_building_works_in_very_first_run/
false
false
self
1
null
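One clarification that may help with the question above: tokenization itself is cheap; the expensive first step is usually prompt evaluation, a full forward pass over every prompt token that fills the model's KV cache. Follow-up messages reuse that cache, which is why they feel fast, and a truncated context forces the prompt to be re-evaluated, which is why it gets slow again. A rough way to see the split, assuming the llama-cpp-python high-level API and a placeholder model path:

```python
# Rough timing sketch with llama-cpp-python: tokenizing a long prompt is nearly
# instant, while the first generation call pays the cost of evaluating every
# prompt token. Model path is a placeholder.
import time
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.ggml.bin", n_ctx=2048)

prompt = "You are a helpful assistant. " * 60  # a long-ish system prompt

t0 = time.time()
tokens = llm.tokenize(prompt.encode("utf-8"))
print(f"tokenize: {len(tokens)} tokens in {time.time() - t0:.3f}s")

t0 = time.time()
out = llm(prompt + "\nQ: Say hi.\nA:", max_tokens=16)
print(f"first generation (prompt eval + 16 tokens): {time.time() - t0:.1f}s")
```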
LlaMa best hardware? Cloud?
3
[deleted]
2023-06-14T09:54:01
[deleted]
1970-01-01T00:00:00
0
{}
1493qf6
false
null
t3_1493qf6
/r/LocalLLaMA/comments/1493qf6/llama_best_hardware_cloud/
false
false
default
3
null
What are you using for RP?
31
With the Ten Thousand Models of Llama, and all the variants thereof, it's becoming both more difficult, and easier, to get the model you want. So I was wanting to ask the community - those who use LLM for roleplay, which models are you using? What do you like/dislike about them?
2023-06-14T10:22:53
https://www.reddit.com/r/LocalLLaMA/comments/14948ud/what_are_you_using_for_rp/
Equal_Station2752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14948ud
false
null
t3_14948ud
/r/LocalLLaMA/comments/14948ud/what_are_you_using_for_rp/
false
false
self
31
null
Joining the Blackout: Private every Tuesday
25
[removed]
2023-06-14T10:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1494f89/joining_the_blackout_private_every_tuesday/
Balance-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1494f89
true
null
t3_1494f89
/r/LocalLLaMA/comments/1494f89/joining_the_blackout_private_every_tuesday/
false
false
default
25
null
Update on my new agent library, now called `agency`
11
Hello r/LocalLLaMa! You might've seen a post two weeks back where I excitedly announced a new agent related library I was calling "everything". It was still not quite ready and I got a ton of really thoughtful feedback from here that I'm so grateful for. I've been super busy since. First, I renamed the project to [`agency`](https://github.com/operand/agency). Much better I think! And I've done a lot of work to simplify, improve, document, and finish up the API for a real release. Lots has changed so if you read the readme before, it's been largely redone and now contains a detailed and working walkthrough of building an agent integrated system that includes multiple types of agents, operating system integration, access control, and a flask+react based web application where users appear as individual "agents" as well. `agency` differs from other agent libraries, most importantly in that it focuses on a distinct part of the overall problem, that of agent integration. Many more details in the readme. Also worth noting here is that I *just* pushed an update to integrate with the brand new [functions support on the OpenAI API](https://openai.com/blog/function-calling-and-other-api-updates)! I also plan to make a detailed video walkthrough soon and I'll add it to the project page and post here when I do. I'm eager to hear what people think! I developed this in order to build a foundation for some of my own ambitious ideas. If you find this useful for your projects I'd love to know! Thanks for checking it out! I hope this helps you build amazing things! ❤️ [https://github.com/operand/agency](https://github.com/operand/agency)
2023-06-14T13:39:09
https://www.reddit.com/r/LocalLLaMA/comments/14985gw/update_on_my_new_agent_library_now_called_agency/
helloimop
self.LocalLLaMA
2023-06-14T15:07:44
0
{}
14985gw
false
null
t3_14985gw
/r/LocalLLaMA/comments/14985gw/update_on_my_new_agent_library_now_called_agency/
false
false
self
11
{'enabled': False, 'images': [{'id': 'QxvGpt1V4qdsVLmrWfFUgfPo91Y1fgxqs8Ul2uUyiBw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=108&crop=smart&auto=webp&s=b8974706ac051d75975ba5ea77014038801a627b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=216&crop=smart&auto=webp&s=7fef51a07cb00e683a9de25adf7a956e5b2c60ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=320&crop=smart&auto=webp&s=2fef9142d8a96f3af1b1af80be9af6031e08f9e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=640&crop=smart&auto=webp&s=cd49a0ad8071d0bac5eb8491a94897b824403bf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=960&crop=smart&auto=webp&s=a88dceacd2811199144d0610c03a597ed31673b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?width=1080&crop=smart&auto=webp&s=5b9f9afd2fd29bb0785bf17410dd5400fdbe6506', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-7zvMe2-P7v0UN4_rt1u92DcCcFAUgcEpt4q4aJ1miI.jpg?auto=webp&s=8fce127c2e1cce4e0dbf5ed4210331615704bdfe', 'width': 1200}, 'variants': {}}]}
PSA: New Nvidia driver 536.23 still bad, don't waste your time
65
The driver was just released and I tried it hoping the issue was resolved. No luck, it's still way slower than 531.79 when running close to max VRAM capacity (long context length). This was a quick test on a 4090, Win11, Windows installation of oobabooga (not WSL), AutoGPTQ. (I'm just a dabbler so maybe it's good if another user tests and confirms this)
2023-06-14T13:52:01
https://www.reddit.com/r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/
rerri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1498gdr
false
null
t3_1498gdr
/r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/
false
false
self
65
null
Creating a Wiki for all things Local LLM. What do you want to know?
231
It's becoming abundantly clear there is a lot of information you need to know to get into this space, making it hard for people to get started. I'm working with a small group to create a wiki page so that it's easier for people to get started. What questions do you want answered, or what are the most important things you think people need to know? I will be editing this post and appending the questions as they come in. I should have the wiki up and running by this weekend. The goal is to have everything explained in plain English so anyone can understand it.

Questions:

1. What is an LLM?
2. How do I get started, and what interfaces can I use?
3. What is quantization, GPTQ vs GGML?
4. What hardware requirements are there?
5. How do I know what model to pick?
6. What is X paper and what does it mean for LLMs in the future?
7. How do I train my own model on my own dataset?
8. What's the difference between a chat vs an instruction-trained model?
9. How do I make an API like OpenAI's to interact with my local model?
10. What is prompt engineering? How can I ask a question to get the best results from a particular model?
11. What are the software requirements?
12. How do I create my own dataset?
13. What makes a high-quality dataset?
14. What are text embeddings and vector databases?
15. What are LoRAs and adapter models?
16. What are tokens?
17. What are agents?

Edit: The framework of the site has been completed and the domain is up. If you are interested in helping out you can click "edit" in the bottom left corner of a page to write content, or to add any questions that you think are missing from a section. [https://understandgpt.ai/](https://understandgpt.ai/) [https://github.com/UnderstandGPT/UnderstandGPT](https://github.com/UnderstandGPT/UnderstandGPT)
2023-06-14T15:08:37
https://www.reddit.com/r/LocalLLaMA/comments/149abrg/creating_a_wiki_for_all_things_local_llm_what_do/
NeverEndingToast
self.LocalLLaMA
2023-06-15T20:02:39
0
{}
149abrg
false
null
t3_149abrg
/r/LocalLLaMA/comments/149abrg/creating_a_wiki_for_all_things_local_llm_what_do/
false
false
self
231
{'enabled': False, 'images': [{'id': 'F-ZIOBaWaYfLE07ouyeSREiOsgeE3ZhYPXamY49eupo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=108&crop=smart&auto=webp&s=d0aa982ee62c7d69336e3bc704fb2a602edd62e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=216&crop=smart&auto=webp&s=1928cb299d5db6215bf11ee891ed45bfd4a5f2d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=320&crop=smart&auto=webp&s=643fda13909cd805648443cb8306eda8cfb444ed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=640&crop=smart&auto=webp&s=41abdd5d224a05f9e39c93d81279ea2a1084ee21', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=960&crop=smart&auto=webp&s=1f3f38befc6fe4b6fae66a0f5664680fcae15512', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?width=1080&crop=smart&auto=webp&s=bd4913a9178b03002e41b4f2c6c797811576ac4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4hMqxqhyCozSMG1wf_HlP3tGKdFb0v1VvG1AzZjl3Sg.jpg?auto=webp&s=a5dbdc69221197fe6b692f5d034a14c0c78e041f', 'width': 1200}, 'variants': {}}]}
LLM that answers questions based on documents
5
Hi I’m trying to create a chatbot that can answer my questions based on a collection of documents that the model is “trained” on. I need something that I can use within my business, so privacy and the ability to use it commercially are important. I’ve looked into OpenLlama and LangChain, but am wondering what the best course of action is.
2023-06-14T15:39:13
https://www.reddit.com/r/LocalLLaMA/comments/149b22m/llm_that_answers_questions_based_on_documents/
Sea_Koala_7726
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149b22m
false
null
t3_149b22m
/r/LocalLLaMA/comments/149b22m/llm_that_answers_questions_based_on_documents/
false
false
self
5
null
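Since the usual answer to the post above is some form of retrieval-augmented generation, here is a minimal, library-light sketch of the retrieval half: embed document chunks, find the ones most similar to a question, and stuff them into the prompt of whatever local model you run. The embedding model name is just a common small default (an assumption, not a recommendation from the post), and LangChain is deliberately avoided here to keep the moving parts visible.

```python
# Minimal retrieval sketch: embed chunks with sentence-transformers, pick the
# top matches by cosine similarity, and build a grounded prompt for a local LLM.
# The embedding model name is an assumption (a common small default).
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are Monday to Friday, 9am to 5pm CET.",
    "Enterprise plans include a dedicated account manager.",
]  # in practice: split your documents into chunks of a few hundred tokens each

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity, since the vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do customers have to return a product?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # feed this to your local model (llama.cpp, the oobabooga API, etc.)
```

Because everything runs locally (embeddings and generation), this kind of pipeline keeps documents private, which matters for the business use case described above.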
I was finally able to get WizardLM-13B to work on my GTX 1080 but alas only getting 0.01 tokens per second...
25
Is this to be expected? Also, I'm thinking about getting a dedicated LocalLLM system, maybe a 24GB 4090? What kind of tokens per second might this generate on the Wizard model? Apologies for the noob question but I have no idea what to expect as I'm just learning the ropes.
2023-06-14T16:16:18
https://www.reddit.com/r/LocalLLaMA/comments/149byz4/i_was_finally_able_to_get_wizardlm13b_to_work_on/
amoebatron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149byz4
false
null
t3_149byz4
/r/LocalLLaMA/comments/149byz4/i_was_finally_able_to_get_wizardlm13b_to_work_on/
false
false
self
25
null
Please don’t close this Sub. Open-source AI needs all the help it can get
136
[removed]
2023-06-14T16:25:44
https://www.pinsentmasons.com/out-law/news/meps-eu-ai-act-foundation-models
IntenseSunshine
pinsentmasons.com
1970-01-01T00:00:00
0
{}
149c7a8
true
null
t3_149c7a8
/r/LocalLLaMA/comments/149c7a8/please_dont_close_this_sub_opensource_ai_needs/
false
false
default
136
null
Oobabooga not recognizing GPU for adding layers
3
Apparently the one-click install method for Oobabooga comes with a 1.3B model from Facebook which didn't seem the best in the time I experimented with it, but one thing I noticed right away was that text generation was incredibly fast (about 28 tokens/sec) and my GPU was being utilized. I have a GGML model that claims to support CPU+GPU inferencing which is great as there's no way a 13B model would fit in 10GB of VRAM, but adding layers doesn't actually utilize my GPU at all and I'm left with it using my CPU. After searching around it seemed that people were able to add layers and use larger models on lesser hardware with no issues so I have absolutely no idea what could be wrong here. I guess you have to basically manually add GPU support for GGML models yourself which I got a C compiler for and attempted to fix but I must be doing something wrong because even after entering the exact code given by a user that's supposed to add GPU support, I still get errors in the command box. I'm thinking I may have the wrong compiler but I'm not sure. At this point I'm very close to throwing in the towel. I'm 100% not well-versed in coding or computer science and while I understand this is bleeding edge stuff that hasn't reached the same level of user-friendliness as Stable Diffusion UIs, I don't understand why you need to jump through this many hoops to do something as simple as adding GPU support. I also don't understand why this has to be compiled by the user and not just be supported with the application itself but I'm assuming there's a reason for this that I don't understand because, like I said, I'm not well-versed in computer science. I normally have patience with these types of things but after looking through every Reddit and GitHub thread trying to understand and test every recommended fix just for every single one of them to not work, I'm starting to lose patience.
2023-06-14T16:48:49
https://www.reddit.com/r/LocalLLaMA/comments/149crc9/oobabooga_not_recognizing_gpu_for_adding_layers/
yungfishstick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149crc9
false
null
t3_149crc9
/r/LocalLLaMA/comments/149crc9/oobabooga_not_recognizing_gpu_for_adding_layers/
false
false
self
3
null
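One detail that trips people up with GGML models in the web UI: they are loaded through the llama-cpp-python package, and that package only touches the GPU if it was itself built with cuBLAS support (for example by reinstalling it with CMAKE_ARGS="-DLLAMA_CUBLAS=on" set); a plain pip-installed build will quietly run everything on the CPU no matter what the layers setting says. Once a CUDA-enabled build is in place, offloading is just a constructor argument, roughly like the sketch below. The model path and layer count are placeholders, and how many 13B layers fit in 10 GB depends on the quantization.

```python
# Sketch: loading a GGML model with GPU offload through llama-cpp-python.
# This only has an effect when llama-cpp-python itself was compiled with cuBLAS;
# otherwise n_gpu_layers is silently ignored. Path and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/wizardlm-13b.ggmlv3.q4_0.bin",  # placeholder filename
    n_ctx=2048,
    n_gpu_layers=30,  # layers to push into VRAM; lower it if 10 GB overflows
)

out = llm("### Instruction: Say hello.\n### Response:", max_tokens=32)
print(out["choices"][0]["text"])
```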
What's the best current open source model for a) vectorizing pdfs b) generative Q and A?
6
Looking for something equivalent to ChatGPT's performance in this task, although I know that of course nothing comes close.
2023-06-14T17:20:06
https://www.reddit.com/r/LocalLLaMA/comments/149dj2k/whats_the_best_current_open_source_model_for_a/
bramitkittel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149dj2k
false
null
t3_149dj2k
/r/LocalLLaMA/comments/149dj2k/whats_the_best_current_open_source_model_for_a/
false
false
self
6
null
Karen the Editor and quotations
3
I have a question regarding the way how [Karen the Editor](https://huggingface.co/FPHam/Karen_theEditor_13b_HF) was trained. Specifically, the model card mentions: "Based on LLAMA 13b and Wizard-Vucna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar)." However, for non-fiction work, sometimes there is an absolute requirement that everything in quotation marks is left intact, even if it contains incorrect grammar, because that's what someone in the real world said. Obviously, for fiction, there is no such requirement. So - does the training dataset include examples with quotations? Is there any known prompt that reliably switches Karen between these two modes of operation?
2023-06-14T17:42:35
https://www.reddit.com/r/LocalLLaMA/comments/149e3ht/karen_the_editor_and_quotations/
patrakov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149e3ht
false
null
t3_149e3ht
/r/LocalLLaMA/comments/149e3ht/karen_the_editor_and_quotations/
false
false
self
3
{'enabled': False, 'images': [{'id': 'uJ9V-m-WnuFvsV7073FF7JnV8PxGeAFxWkMfb-qhs5Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=108&crop=smart&auto=webp&s=86802327304e4cebeaf4a7abd3aa3b962a2fe43e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=216&crop=smart&auto=webp&s=39a5554f67d2c5a2915c32889f717abca4ae429e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=320&crop=smart&auto=webp&s=a916ac5157e89c421d7276d30ac83e7882198bab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=640&crop=smart&auto=webp&s=2b4015b9e1553d483bc840887d8b78940013b10d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=960&crop=smart&auto=webp&s=977904dfe7ab4945056a96c9d756161b7fd56e1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=1080&crop=smart&auto=webp&s=c6a9a3c629a3491a0fe22b3bc22311fb8d4c8c04', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?auto=webp&s=17443aa61fbb6d5aeb4cbfd08da5fb45e5a67926', 'width': 1200}, 'variants': {}}]}
Do you ever see emojis in the output of your GGML model in ooba using streaming?
2
I don't know what options you have, since I never got ooba to run; this is also why I ask instead of checking myself. I am using llama-cpp-python, and I know ooba uses that too, and I noticed emojis are just destroyed when I try to use streaming (meaning the output updates as it generates instead of waiting for the whole thing to complete). I'm pretty sure it's a bug, but really I am interested: does that mean you guys never saw your ggml models use emojis?
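A plausible explanation (an assumption, not confirmed in the thread): an emoji usually spans several tokens, and decoding each token's bytes to text separately produces replacement characters, so streaming mangles it while non-streamed output looks fine. With llama-cpp-python's lower-level API you can buffer the raw bytes through an incremental UTF-8 decoder instead; the model path below is hypothetical.

```python
# Sketch: stream token-by-token but only print complete UTF-8 characters.
import codecs
from llama_cpp import Llama

llm = Llama(model_path="models/wizard-vicuna-7b.ggmlv3.q4_0.bin")  # hypothetical path
decoder = codecs.getincrementaldecoder("utf-8")(errors="replace")

tokens = llm.tokenize(b"Write one sentence that ends with an emoji: ")
for token in llm.generate(tokens, top_k=40, top_p=0.95, temp=0.8):
    if token == llm.token_eos():
        break
    piece = llm.detokenize([token])                   # raw bytes for this single token
    print(decoder.decode(piece), end="", flush=True)  # partial multi-byte sequences stay buffered
print(decoder.decode(b"", final=True))                # flush any trailing bytes
```

The high-level streaming call hands back already-decoded text chunks, which is presumably why the damage can't be repaired downstream once it has happened.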
2023-06-14T17:47:52
https://www.reddit.com/r/LocalLLaMA/comments/149e87t/do_you_ever_see_emojis_in_the_output_of_your_ggml/
involviert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149e87t
false
null
t3_149e87t
/r/LocalLLaMA/comments/149e87t/do_you_ever_see_emojis_in_the_output_of_your_ggml/
false
false
self
2
null
Cpu models 16gb ram
1
[removed]
2023-06-14T18:17:33
https://www.reddit.com/r/LocalLLaMA/comments/149eycf/cpu_models_16gb_ram/
SignificantAd5514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149eycf
false
null
t3_149eycf
/r/LocalLLaMA/comments/149eycf/cpu_models_16gb_ram/
false
false
default
1
null
Elevating Gaming to New Heights: Unleashing Next-Level Performance with Arc GPUs
0
[removed]
2023-06-14T18:33:45
[deleted]
1970-01-01T00:00:00
0
{}
149fcgq
false
null
t3_149fcgq
/r/LocalLLaMA/comments/149fcgq/elevating_gaming_to_new_heights_unleashing/
false
false
default
0
null
Community driven Open Source dataset collaboration platform
26
I am trying to create a platform where people can get together and edit datasets that can be used to fine tune or train models. I want it to be as easy as possible to collaborate, check each other's work, and keep everything transparent. A few people suggested to me Google Sheets, but that's not viable due to Google's terms of service. So after searching around, I came across Baserow, which is a self hosted solution. I spun up a public instance last night to mess around with, and I think it might do the job. Positive Thoughts: * You can upload a CSV, JSON, or copy and paste raw text, and it creates front end tables you can start editing alone or with others with live changes. * It can handle a lot of rows and fields pretty well. I've been able to upload and edit 250MB JSON files without slowing down or crashing. Negative Thoughts: * You can only expand the first column to an expanded view. I saw some hacky ways to fix this on their community forum, but I don't know how I feel about that. You can still edit the content, it just feels weird and makes it hard to read. You can always edit the data offline and copy and paste it back in. * You can only export files as CSV. Which is annoying, but not really a deal-breaker. It looks pretty easy to divvy up and assign people to different workspaces. So we could do something like split a big dataset into a bunch of small pieces. When people are finished cleaning/formatting the data, each chunk could get rotated to a fresh set of eyes to look over for errors. Then we can recombine it all and post it to a public workspace where everybody can check over the combined results for anything that might have been missed. I'd like some feedback on this idea. If anyone has thoughts or suggestions for a better way to organize, I'm all ears. I'd like to have a fleshed out plan that people generally agree on before I start inviting people to my instance and telling them to spend their time on it. Here was my original post for those who missed it. [https://www.reddit.com/r/LocalLLaMA/comments/142tked/bot\_embracing\_nefarious\_deeds\_erotic\_roleplay/](https://www.reddit.com/r/LocalLLaMA/comments/142tked/bot_embracing_nefarious_deeds_erotic_roleplay/)
2023-06-14T19:34:06
https://www.reddit.com/r/LocalLLaMA/comments/149gv4d/community_driven_open_source_dataset/
CheshireAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149gv4d
false
null
t3_149gv4d
/r/LocalLLaMA/comments/149gv4d/community_driven_open_source_dataset/
false
false
self
26
null
New model just dropped: WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval Benchmarks .. 22.3 points higher than the SOTA open-source Code LLMs.
232
2023-06-14T20:50:02
https://twitter.com/TheBlokeAI/status/1669032287416066063
Zelenskyobama2
twitter.com
1970-01-01T00:00:00
0
{}
149ir49
false
{'oembed': {'author_name': 'Tom Jobbins', 'author_url': 'https://twitter.com/TheBlokeAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">New StarCoder coding model from <a href="https://twitter.com/WizardLM_AI?ref_src=twsrc%5Etfw">@WizardLM_AI</a> <br>&quot;WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval Benchmarks .. 22.3 points higher than the SOTA open-source Code LLMs.&quot;<br>My quants:<a href="https://t.co/ABjBvFRxw7">https://t.co/ABjBvFRxw7</a><a href="https://t.co/Hn4qQCeuZn">https://t.co/Hn4qQCeuZn</a><br>Original: <a href="https://t.co/L7wJhQyfRT">https://t.co/L7wJhQyfRT</a></p>&mdash; Tom Jobbins (@TheBlokeAI) <a href="https://twitter.com/TheBlokeAI/status/1669032287416066063?ref_src=twsrc%5Etfw">June 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/TheBlokeAI/status/1669032287416066063', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_149ir49
/r/LocalLLaMA/comments/149ir49/new_model_just_dropped_wizardcoder15bv10_model/
false
false
https://b.thumbs.redditm…5DN9hbbO_wAs.jpg
232
{'enabled': False, 'images': [{'id': 'sdZCEDH6vEPePQosukNPoJBQkfbmKFDoiiOCaRr3MaM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LDz9t2YDUQGVPqoN8o0Ev59CrRHWPrlJTMtNl3DJMe0.jpg?width=108&crop=smart&auto=webp&s=10cc2c8cc49b3f13b3f90bba40bff487054e2ead', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/LDz9t2YDUQGVPqoN8o0Ev59CrRHWPrlJTMtNl3DJMe0.jpg?auto=webp&s=3eb58837ee5e8a03f3db89f71d5dbb05e342ae0f', 'width': 140}, 'variants': {}}]}
llama.cpp full CUDA acceleration has been merged
125
Update of [llama.cpp just got full CUDA acceleration, and now it can outperform GPTQ! : LocalLLaMA (reddit.com)](https://www.reddit.com/r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/) posted by TheBloke. The PR added by Johannes Gaessler has been merged into main. Link to the PR: [https://github.com/ggerganov/llama.cpp/pull/1827](https://github.com/ggerganov/llama.cpp/pull/1827) **Description of the PR:** This PR adds GPU acceleration for all remaining ggml tensors that didn't yet have it. Especially for long generations this makes a large difference because the KV cache is still CPU only on master and gets larger as the context fills up. Prompt processing is also significantly faster because the large batch size allows the more effective use of GPUs. He also added a --low-vram option that disables the CUDA scratch buffer.
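For context on what changes in practice: with everything offloaded, the KV cache no longer sits on the CPU, so long contexts stop slowing down the way they did with partial offload. Below is a rough before/after timing sketch using the llama-cpp-python bindings once they pick up this llama.cpp change; the model path and layer counts are hypothetical, and on the command line the equivalent knobs are -ngl plus the new --low-vram flag mentioned above.

```python
# Sketch: compare partial vs. full GPU offload on the same prompt.
import time
from llama_cpp import Llama

def time_run(n_gpu_layers: int) -> float:
    llm = Llama(
        model_path="models/guanaco-33b.ggmlv3.q4_K_M.bin",  # hypothetical file
        n_gpu_layers=n_gpu_layers,
        n_ctx=2048,
    )
    start = time.perf_counter()
    llm("Write a short story about a lighthouse keeper.", max_tokens=256)
    return time.perf_counter() - start

print("partial offload:", round(time_run(20), 1), "s")
print("full offload:  ", round(time_run(100), 1), "s")  # a count above the layer total offloads everything
```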
2023-06-14T22:54:57
https://www.reddit.com/r/LocalLLaMA/comments/149ls1f/llamacpp_full_cuda_acceleration_has_been_merged/
aminedjeghri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149ls1f
false
null
t3_149ls1f
/r/LocalLLaMA/comments/149ls1f/llamacpp_full_cuda_acceleration_has_been_merged/
false
false
self
125
{'enabled': False, 'images': [{'id': 'GoLjSKY7nzLq6QbI1x3IXo7j3K4sPBmu0Cx1x74SKyQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=108&crop=smart&auto=webp&s=f650b19a78381e2e7232f5c68f5c7a7872c4be22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=216&crop=smart&auto=webp&s=367fbf324b15177f88f6997ba1ca7a6b8c3a6ef6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=320&crop=smart&auto=webp&s=7bf63728e4b6acd82c9c13bfe7b4a4b1b0648a6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=640&crop=smart&auto=webp&s=fa48465a116dbafe1eb2d6eed1013c7d3a206e14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=960&crop=smart&auto=webp&s=7041ba30a59b3f50fa1bd9a38145c9e2d79f58d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?width=1080&crop=smart&auto=webp&s=3e001773e8623fdf62a75ffdc160ba3002ff8743', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BwFJWcQ3GZKJWyRQaJnB-4I6DbLDeFhCXtJQwqhrsCU.jpg?auto=webp&s=24dfe8d1d58dc194ae0675c72931b43cc0b0a5a3', 'width': 1200}, 'variants': {}}]}
Attention based truncation approach
7
[deleted]
2023-06-14T23:01:50
[deleted]
1970-01-01T00:00:00
0
{}
149lxwv
false
null
t3_149lxwv
/r/LocalLLaMA/comments/149lxwv/attention_based_truncation_approach/
false
false
default
7
null
Model possible to run on laptop GPU?
0
[removed]
2023-06-14T23:51:43
https://www.reddit.com/r/LocalLLaMA/comments/149n29k/model_possible_to_run_on_laptop_gpu/
DarkHelmetedOne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149n29k
false
null
t3_149n29k
/r/LocalLLaMA/comments/149n29k/model_possible_to_run_on_laptop_gpu/
false
false
default
0
null
Tool for chatting with your codebase and docs using OpenAI, LlamaCpp, and GPT-4-All
14
[removed]
2023-06-14T23:59:56
[deleted]
1970-01-01T00:00:00
0
{}
149n8cm
false
null
t3_149n8cm
/r/LocalLLaMA/comments/149n8cm/tool_for_chatting_with_your_codebase_and_docs/
false
false
default
14
null
The best chat model for a RTX 4090 ?
5
Hello, I've seen a lot of new LLMs over the last month, so I am a bit lost. Can someone tell me what the best local LLM usable in oobabooga is right now? Thanks.
2023-06-15T00:20:11
https://www.reddit.com/r/LocalLLaMA/comments/149nofy/the_best_chat_model_for_a_rtx_4090/
3deal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149nofy
false
null
t3_149nofy
/r/LocalLLaMA/comments/149nofy/the_best_chat_model_for_a_rtx_4090/
false
false
self
5
null
Your painpoints in building/using local LLMs
10
What are your pain points in building/using local LLMs? Let's discuss and see what we can build together to solve these issues!
2023-06-15T01:02:29
https://www.reddit.com/r/LocalLLaMA/comments/149okhd/your_painpoints_in_buildingusing_local_llms/
Latter-Implement-243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149okhd
false
null
t3_149okhd
/r/LocalLLaMA/comments/149okhd/your_painpoints_in_buildingusing_local_llms/
false
false
self
10
null
So hypothetically what’s the strongest LLM model this could reasonably run? (24GB VRAM + 128GB system ram) (reasonably = output generations happen within 60 seconds)
4
2023-06-15T01:20:28
https://i.imgur.com/tp0WkrB.jpg
katiecharm
i.imgur.com
1970-01-01T00:00:00
0
{}
149oxqx
false
null
t3_149oxqx
/r/LocalLLaMA/comments/149oxqx/so_hypothetically_whats_the_strongest_llm_model/
false
false
https://a.thumbs.redditm…dDr-RGpGDAZ0.jpg
4
{'enabled': True, 'images': [{'id': 'EjmZ7VMJ_aNSW30VP0ebQ_7MdKg5e8Z1c9dRoev2dbY', 'resolutions': [{'height': 174, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=108&crop=smart&auto=webp&s=73816e487ad6e1f9fccf236f5f5b69eab252b309', 'width': 108}, {'height': 349, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=216&crop=smart&auto=webp&s=732418aed5d1660a66749eb31580765e1c2583e1', 'width': 216}, {'height': 517, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=320&crop=smart&auto=webp&s=dcb22707030b7c6296df5f1df5c60298ab77a203', 'width': 320}, {'height': 1035, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=640&crop=smart&auto=webp&s=87203aef9fd9fc8c09442ac5567b0a86965edee2', 'width': 640}, {'height': 1553, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=960&crop=smart&auto=webp&s=df59d1567d73eb4342b8b77f75309d0f0218bcf4', 'width': 960}, {'height': 1747, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?width=1080&crop=smart&auto=webp&s=786b496a8ada375ec3ba2766237dc02796c09118', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/A8WB_xX7orMwgZXbl8N_L-q6eKl3mQJ4VI8M4UN7L-I.jpg?auto=webp&s=1253a08e9713ce099738411a582ead8a71915866', 'width': 1236}, 'variants': {}}]}
MLC-LLM Chat vicuna-Wizard-7B-Uncensored-q3f16_0
26
Using the unofficial tutorials provided by MLC-LLM, I was able to format ehartford/Wizard-Vicuna-7B-Uncensored to work with MLC-Chat in Vulkan mode. It should work with AMD GPUs, though I've only tested it on an RTX 3060. Feedback is appreciated. MLC-LLM doesn't get enough press here, likely because they don't upload enough models. But their approach to deploying models on all devices is something the space really needs at some point if it's going to move beyond the somewhat arcane methods we currently have to use to get these LLMs working. The Model: PC/Linux AMD/Nvidia - [jetro30087/vicuna-Wizard-7B-Uncensored-q3f16\_0 · Hugging Face](https://huggingface.co/jetro30087/vicuna-Wizard-7B-Uncensored-q3f16_0) Android - [jetro30087/vicuna-Wizard-7B-Uncensored-android-q4f16\_0 · Hugging Face](https://huggingface.co/jetro30087/vicuna-Wizard-7B-Uncensored-android-q4f16_0) MLC-LLM - [GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.](https://github.com/mlc-ai/mlc-llm) https://preview.redd.it/ycx8rjcw836b1.png?width=1623&format=png&auto=webp&s=fd5bcb4c8bd2957fb6f537394a467241b8a82693
2023-06-15T02:01:32
https://www.reddit.com/r/LocalLLaMA/comments/149prvz/mlcllm_chat_vicunawizard7buncensoredq3f16_0/
jetro30087
self.LocalLLaMA
2023-06-15T22:01:44
0
{}
149prvz
false
null
t3_149prvz
/r/LocalLLaMA/comments/149prvz/mlcllm_chat_vicunawizard7buncensoredq3f16_0/
false
false
https://a.thumbs.redditm…8s2KlAAW1fF4.jpg
26
{'enabled': False, 'images': [{'id': '_wJmnI2pDwAdI3VoLl7qZVF-NGz6bZRVt-aIFtKeE0Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=108&crop=smart&auto=webp&s=5be55471119888b3da3034bef80abefc032a801a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=216&crop=smart&auto=webp&s=8682df20184ebaadeba65d98dafcbf301ae6c75b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=320&crop=smart&auto=webp&s=accde02d4d5b8ab0e9c22aa20e2c48935831cd80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=640&crop=smart&auto=webp&s=96935ab35d3bba3c5b740284994512a40ee4881c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=960&crop=smart&auto=webp&s=52e98fa0274fe7e4be3ce4d8351d733411c26893', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?width=1080&crop=smart&auto=webp&s=43186ec70961c9639e074065e96c73e56f9f0581', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kNCrojIaSUt_3D8A3lu3YeHeLQjpuBSJyWIH9TIIIOE.jpg?auto=webp&s=7161c0d049274b89e4afd7a5508e298460bb16c0', 'width': 1200}, 'variants': {}}]}
How good is Chronos Hermes 13B?
18
I'm reading a lot of praise for Chronos Hermes 13B for chat. Can someone put out a benchmark on this? More importantly, how does it stack up to Wizard-Vicuna-Uncensored-30B?
2023-06-15T02:24:13
https://www.reddit.com/r/LocalLLaMA/comments/149q8ke/how_good_is_chronos_hermes_13b/
ReMeDyIII
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
149q8ke
false
null
t3_149q8ke
/r/LocalLLaMA/comments/149q8ke/how_good_is_chronos_hermes_13b/
false
false
self
18
null