Dataset columns (name: dtype, observed range):
- title: string, 1–300 chars
- score: int64, 0–8.54k
- selftext: string, 0–40k chars
- created: timestamp[ns]
- url: string, 0–780 chars
- author: string, 3–20 chars
- domain: string, 0–82 chars
- edited: timestamp[ns]
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646–1.8k chars
- name: string, 10 chars
- permalink: string, 33–82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4–213 chars
- ups: int64, 0–8.54k
- preview: string, 301–5.01k chars
Running the model on a GPU instead of the CPU
4
[removed]
2023-05-25T12:24:33
https://www.reddit.com/r/LocalLLaMA/comments/13rgdgx/running_the_model_on_a_gpu_instead_of_the_cpu/
CrunchyMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rgdgx
false
null
t3_13rgdgx
/r/LocalLLaMA/comments/13rgdgx/running_the_model_on_a_gpu_instead_of_the_cpu/
false
false
default
4
null
Does llama.cpp currently support multiple gpus?
12
I want to try some of the 30B+ models. I have 2 2080 Ti cards (11GB VRAM each) and 64GB of RAM. I can offload some layers to one GPU, but the other sits idle. Am I just missing a setting, or is dual GPU not supported?
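For what it's worth, newer llama.cpp builds expose a tensor-split setting for dividing layers across cards; conceptually it is just a proportional split like this sketch (the function and layer count are illustrative, not llama.cpp's actual code):

```python
# Sketch: divide a model's layers across GPUs in proportion to their
# free VRAM, the idea behind llama.cpp's tensor-split option.

def split_layers(n_layers: int, vram_gb: list[float]) -> list[int]:
    """Assign whole layers to each GPU proportionally to its VRAM."""
    total = sum(vram_gb)
    counts = [int(n_layers * v / total) for v in vram_gb]
    counts[0] += n_layers - sum(counts)  # give rounding remainder to GPU 0
    return counts

# Two 2080 Ti cards (11 GB each) and a 60-layer 30B-class model:
print(split_layers(60, [11.0, 11.0]))  # -> [30, 30]
```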
2023-05-25T13:00:44
https://www.reddit.com/r/LocalLLaMA/comments/13rh6b9/does_llamacpp_currently_support_multiple_gpus/
ihexx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rh6b9
false
null
t3_13rh6b9
/r/LocalLLaMA/comments/13rh6b9/does_llamacpp_currently_support_multiple_gpus/
false
false
self
12
null
🎉 Introducing TalentGPT: AI-Powered Tool for Cover Letters & CV Writing 🚀
1
[removed]
2023-05-25T13:22:38
https://www.reddit.com/r/LocalLLaMA/comments/13rhp2b/introducing_talentgpt_aipowered_tool_for_cover/
City_Bike_09
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rhp2b
false
null
t3_13rhp2b
/r/LocalLLaMA/comments/13rhp2b/introducing_talentgpt_aipowered_tool_for_cover/
false
false
default
1
null
Oobabooga characters
3
Has anyone used Oobabooga's character function? How does it work? It uses yaml files, but is that just converted into a prompt on each call, or does it do more than that?
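From what I can tell it is essentially the former: the character card's fields get formatted into the context string sent on every call. A minimal sketch of that idea (field names are assumptions, not oobabooga's exact schema):

```python
# Sketch: turn parsed character-card fields into a prompt context,
# roughly what a character YAML becomes on each call.
character = {
    "name": "Ada",
    "context": "Ada is a terse, helpful research assistant.",
    "greeting": "Hello. What shall we work on?",
}

def build_prompt(char: dict, user_msg: str) -> str:
    """Render the card plus the latest user message into one context string."""
    return (
        f"{char['context']}\n"
        f"{char['name']}: {char['greeting']}\n"
        f"You: {user_msg}\n"
        f"{char['name']}:"
    )

print(build_prompt(character, "Summarize attention in one line."))
```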
2023-05-25T13:29:00
https://www.reddit.com/r/LocalLLaMA/comments/13rhua3/oobabooga_characters/
SatoshiReport
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rhua3
false
null
t3_13rhua3
/r/LocalLLaMA/comments/13rhua3/oobabooga_characters/
false
false
self
3
null
I have built an open source streamlit implementation of apps like ChatPDF, Askyourpdf and PDF
3
[deleted]
2023-05-25T13:52:34
[deleted]
1970-01-01T00:00:00
0
{}
13riela
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/oe7flao0gz1b1/DASHPlaylist.mpd?a=1695025950%2CNDgzY2I2MTkxZGM5YWRhZWY3ZTAzNjczNWUzMTE0MzZiODNlZjFkNTY4MTZjYjBjNzBiYzk4OGU0M2M2YWYzMA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/oe7flao0gz1b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/oe7flao0gz1b1/HLSPlaylist.m3u8?a=1695025950%2CMDhmMjc0NzU4NTI0M2I2N2VkOTAwOWZiM2Y2ZmRhNGE2ODUwMjhlNmIxYmM0ZTE3MzU3OTVlY2UzNjAxODQzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oe7flao0gz1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_13riela
/r/LocalLLaMA/comments/13riela/i_have_built_an_open_source_streamlit/
false
false
default
3
null
In this trend, 1bit or 2bits LLM models are possible or not?
13
I am wondering whether it is possible to train a 1bit or 2bits model given 4bits is here.
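For intuition, symmetric 2-bit round-trip quantization looks like this; the open question is whether quality survives with only four levels per weight. A generic illustration, not any specific published scheme:

```python
def quantize_2bit(weights):
    """Round-trip a block of weights through 4 integer levels (2 bits)."""
    scale = max(abs(w) for w in weights) / 2 or 1.0
    qs = [max(-2, min(1, round(w / scale))) for w in weights]
    return scale, qs

def dequantize(scale, qs):
    """Reconstruct approximate weights from scale and 2-bit codes."""
    return [q * scale for q in qs]

scale, qs = quantize_2bit([0.4, -0.9, 0.1, -0.3])
print(dequantize(scale, qs))  # coarse reconstruction of the inputs
```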
2023-05-25T14:04:33
https://www.reddit.com/r/LocalLLaMA/comments/13ripau/in_this_trend_1bit_or_2bits_llm_models_are/
cometyang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ripau
false
null
t3_13ripau
/r/LocalLLaMA/comments/13ripau/in_this_trend_1bit_or_2bits_llm_models_are/
false
false
self
13
null
I've Noticed AutoGPT Trending on the Internet, so I Created a User-Friendly UI to Make AutoGPT Accessible for Everyone in Their Browser
0
2023-05-25T14:09:48
https://v.redd.it/aiqm4dgqiz1b1
ANil1729
v.redd.it
1970-01-01T00:00:00
0
{}
13ritt4
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASHPlaylist.mpd?a=1695026262%2CNTc0OTZiY2JlOGEwNWNlMGQ5ZjIxZWM5MmFhNTlhMjlmYzAzN2EwMWYzNzljZjc2OGM2OGI1ZDk3Y2Q4MTNhYQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/aiqm4dgqiz1b1/HLSPlaylist.m3u8?a=1695026262%2CNDBjNjNkZDQ3N2VlOWIyZTIxNTZjODBlYTg4OTAwYWZjNmNjMmVmYjM0MGI5ZTFkMTM1YWI4ZmI2NWVhZGIxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aiqm4dgqiz1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_13ritt4
/r/LocalLLaMA/comments/13ritt4/ive_noticed_autogpt_trending_on_the_internet_so_i/
false
false
default
0
null
Example of how to run GPTQ models on multiple GPUs
17
In preparation for the upcoming 33b/64b models wave, I did some research on how to run GPTQ models on multiple GPUs. I am sharing this in case any of you are also looking for the same solution. I managed to run the WizardLM-30B-Uncensored-GPTQ with 3060 and 4070 with a reasonable performance. Here is the repository with minimal code required to run GPTQ on multiple GPUs [https://github.com/mzbac/gptq-cuda-api](https://github.com/mzbac/gptq-cuda-api)
2023-05-25T14:16:20
https://www.reddit.com/r/LocalLLaMA/comments/13rizji/example_of_how_to_run_gptq_models_on_multiple_gpus/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rizji
false
null
t3_13rizji
/r/LocalLLaMA/comments/13rizji/example_of_how_to_run_gptq_models_on_multiple_gpus/
false
false
self
17
{'enabled': False, 'images': [{'id': 'uKmULZ5LVa_qCDvGN-liSbgOYBx13RTTMWJEX6Tlu9k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=108&crop=smart&auto=webp&s=7674872f4d5297a2857e3443efb68a94e16d1b5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=216&crop=smart&auto=webp&s=6b66ee1a3752a626f76f933b7d52e131736a36d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=320&crop=smart&auto=webp&s=9d92a5419213f678195184bbdbb14ddb492f7dc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=640&crop=smart&auto=webp&s=70e03ff1096539c1df2eda61504d3301211e27f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=960&crop=smart&auto=webp&s=47cf32dace66f7fe034147fde78a59194731a266', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?width=1080&crop=smart&auto=webp&s=31b2590426bb21de0cd8692dea33f66e00cf018b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CsDUzS4512_N7YXbBGhxyHaEstNJTpFRM7Q5-5mZU1o.jpg?auto=webp&s=733c61d4c90605a73b26a73230ef6849c3a9d888', 'width': 1200}, 'variants': {}}]}
Which direction should I go to get things working.
3
I inherited an "old" decommissioned system that I would like to use to learn about LLMs/ML/AI. It is a comparatively powerful system: dual Xeon v4 CPUs, 512GB of RAM, 4 V100 GPUs, and space for about 8 drives. I have a couple of other GPUs I can use for video output, as the onboard graphics only has a VGA out; the two cards I have are an NVidia 1660 Super and a really old AMD FirePro 4900. I had a lot of issues trying to get the 1660 and the V100s working at the same time. It seems that Windows 10 only wants to use one NVidia driver or the other, but both won't work together. I even tried editing the INF files and got one driver to handle both, but the system was unstable. So now I am using the old AMD GPU with 1GB VRAM and the V100s as secondary GPUs, using the EnableMSHybrid registry key to make it act like a laptop with dual GPUs. This works well for DirectX and OpenGL games, where all the processing happens on one of the V100s. But for some reason I have not been able to get any LLM software to recognize the GPUs in WSL. I was thinking of trying straight Linux in Hyper-V. I also have the option of running this under Proxmox or ESX, the system being big enough, but a remote protocol is not the best for the little gaming I do, and some of the more modern streaming protocols like Moonlight seem to need an RTX GPU, which these V100s aren't. I know this sounds like a humblebrag, but it really is a request for help. I'm not a Linux person, although the 3 weeks I've put into getting LLMs, Python, etc. working in WSL have taught me a fair bit of Linux; I still do a lot of wiping and starting over, since I'm not too familiar with how to clean up botched installs (drivers, CUDA, Python, TensorFlow, etc.). I'm much stronger in Windows, but I think some Python libraries like TensorFlow dropped GPU acceleration support on Windows, which is why I went the WSL route.
I guess I could try booting straight into a Linux VM in Proxmox with the GPUs passed through, to see if the issue is with WSL.
2023-05-25T14:27:10
https://www.reddit.com/r/LocalLLaMA/comments/13rj96f/which_direction_should_i_go_to_get_things_working/
Independent-Bike8810
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rj96f
false
null
t3_13rj96f
/r/LocalLLaMA/comments/13rj96f/which_direction_should_i_go_to_get_things_working/
false
false
self
3
null
Potential Hallucination Test - Ask for a url
4
I have been playing with testing model hallucination as I work in a field that doesn't tolerate data hallucination but is also very interested in generative ML. I've concocted a test to see how badly a model hallucinates: ask for a picture of a certain item and see if it can return or display a link: ​ https://preview.redd.it/wn9ykkq9nz1b1.png?width=818&format=png&auto=webp&s=e015c588b6f6452125914e9cdcd6d1f97365752e
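A cheap structural pre-check can also be scripted before fetching anything: a hallucinated "link" often isn't even a well-formed URL. Sketch (a real test would additionally HTTP-request the URL to see if it resolves):

```python
from urllib.parse import urlparse

def looks_like_url(s: str) -> bool:
    """Structural sanity check: http(s) scheme and a host must be present."""
    p = urlparse(s.strip())
    return p.scheme in ("http", "https") and bool(p.netloc)

print(looks_like_url("https://example.com/cat.png"))  # True
print(looks_like_url("not a url"))                    # False
```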
2023-05-25T14:32:17
https://www.reddit.com/r/LocalLLaMA/comments/13rjdpo/potential_hallucination_test_ask_for_a_url/
muchCode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rjdpo
false
null
t3_13rjdpo
/r/LocalLLaMA/comments/13rjdpo/potential_hallucination_test_ask_for_a_url/
false
false
https://b.thumbs.redditm…jUD1dAMTQu5U.jpg
4
null
What is the biggest model you can currently run on a 4090 GPU with 16GB?
5
With a 16GB VRAM laptop GPU. Unfortunately, it has less VRAM than the 4090 desktop GPU.
2023-05-25T14:49:47
https://www.reddit.com/r/LocalLLaMA/comments/13rjsnw/what_is_the_biggest_model_you_can_currently_run/
mathereum
self.LocalLLaMA
2023-05-25T22:12:41
0
{}
13rjsnw
false
null
t3_13rjsnw
/r/LocalLLaMA/comments/13rjsnw/what_is_the_biggest_model_you_can_currently_run/
false
false
self
5
null
Recommend me a CPU model May 2023 if I don't care about inference time with Ryzen and 64G RAM?
0
[removed]
2023-05-25T14:50:07
https://www.reddit.com/r/LocalLLaMA/comments/13rjszp/recommend_me_a_cpu_model_may_2023_if_i_dont_care/
blumento_pferde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rjszp
false
null
t3_13rjszp
/r/LocalLLaMA/comments/13rjszp/recommend_me_a_cpu_model_may_2023_if_i_dont_care/
false
false
default
0
null
Has anybody tried their models with "Tree of Thoughts"?
48
Here is what I am talking about: [https://www.reddit.com/r/singularity/comments/13lxvop/tree_of_thoughts_deliberate_problem_solving_with/](https://www.reddit.com/r/singularity/comments/13lxvop/tree_of_thoughts_deliberate_problem_solving_with/) It would be interesting to hear how some of the models from this subreddit behave. Thanks!
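The core loop of Tree of Thoughts is small enough to sketch: generate candidate "thoughts", score them, keep the best few, and recurse. Here the generator and scorer are toy stubs standing in for LLM calls:

```python
# Toy Tree-of-Thoughts beam search. generate/score are stand-in stubs;
# a real system would call an LLM for both.
def tree_of_thoughts(root, generate, score, depth=2, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for node in frontier for t in generate(node)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

# Stub task: build the largest number by appending digits.
best = tree_of_thoughts(
    "",
    generate=lambda s: [s + d for d in "123"],
    score=lambda s: int(s),
    depth=3,
)
print(best)  # prints 333
```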
2023-05-25T14:57:19
https://www.reddit.com/r/LocalLLaMA/comments/13rjz77/has_anybody_tried_their_models_with_tree_of/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rjz77
false
null
t3_13rjz77
/r/LocalLLaMA/comments/13rjz77/has_anybody_tried_their_models_with_tree_of/
false
false
self
48
null
Does anyone know of local models that support retrieval plugins for vector databases?
1
[deleted]
2023-05-25T15:10:09
[deleted]
1970-01-01T00:00:00
0
{}
13rkb15
false
null
t3_13rkb15
/r/LocalLLaMA/comments/13rkb15/does_anyone_know_of_local_models_that_support/
false
false
default
1
null
Q: What is the least amount of VRAM feasible for the next 2-3 years when it comes to LLM?
7
Hello, I would like to know your thoughts on whether it is still worth buying a laptop with a dedicated GPU featuring 8 GB of VRAM. I am well aware it was not possible to run 8-bit models in such a small amount of memory, but now that 4-bit is a thing, I seriously think these GPUs stand a viable chance of providing a local chatbot for everyone with at least a decent GPU in their system. Also: do you think we will soon see specialized models for programming that would work comfortably within 8 GB of VRAM? Any input much appreciated!
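As a rough rule of thumb, 4-bit weights cost about half a byte per parameter plus working overhead, so 8 GB comfortably fits 7B and is marginal for 13B. Back-of-envelope (the overhead figure is a guess):

```python
def vram_gb_4bit(n_params_b: float, overhead_gb: float = 1.5) -> float:
    """Approximate VRAM for 4-bit weights: 0.5 bytes/param plus overhead.
    Params in billions map roughly to GB directly."""
    return n_params_b * 0.5 + overhead_gb

for size in (7, 13, 30):
    print(f"{size}B: ~{vram_gb_4bit(size):.1f} GB")
```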
2023-05-25T15:31:36
https://www.reddit.com/r/LocalLLaMA/comments/13rku7d/q_what_is_the_least_amount_of_vram_feasible_for/
nodating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rku7d
false
null
t3_13rku7d
/r/LocalLLaMA/comments/13rku7d/q_what_is_the_least_amount_of_vram_feasible_for/
false
false
self
7
null
A quick example of how to run qlora merged model via multi-GPU
13
For some reason, it feels like it takes more compute than running the GPTQ 33B model. [https://github.com/mzbac/qlora-inference-multi-gpu/blob/master/app.py](https://github.com/mzbac/qlora-inference-multi-gpu/blob/master/app.py)
2023-05-25T16:24:27
https://www.reddit.com/r/LocalLLaMA/comments/13rm6ph/a_quick_example_of_how_to_run_qlora_merged_model/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rm6ph
false
null
t3_13rm6ph
/r/LocalLLaMA/comments/13rm6ph/a_quick_example_of_how_to_run_qlora_merged_model/
false
false
self
13
null
Colab for finetuning llama models in 4-bit?
16
I'm pretty new to the whole LLM thing, especially finetuning & loras. Just jumped onto the QLora bandwagon and was trying to get this colab by HF going: [https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) It trains a lora on GPT Neo. Is there a way to convert this into something that would work for Llama models? Also, I'm trying to train a Lora for a few different tasks such as a specific style of summarization, copy-editing and creative writing. I have about a 1000 samples for each task. Should I train a single lora for all these or separate loras for each? Thanks
2023-05-25T17:31:00
https://www.reddit.com/r/LocalLLaMA/comments/13rnvw2/colab_for_finetuning_llama_models_in_4bit/
regstuff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rnvw2
false
null
t3_13rnvw2
/r/LocalLLaMA/comments/13rnvw2/colab_for_finetuning_llama_models_in_4bit/
false
false
self
16
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Guide on building training datasets from unstructured text sources?
7
I've been using embeddings for adding additional context to different models, but I would like to compare that method against a custom LoRA built off the same source. I've primarily been using code repos as my source data for embeddings, and I'm not sure how I would translate that to a training/validation dataset. Any tips on how to approach this?
2023-05-25T17:32:09
https://www.reddit.com/r/LocalLLaMA/comments/13rnwyq/guide_on_building_training_datasets_from/
HunterAmacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rnwyq
false
null
t3_13rnwyq
/r/LocalLLaMA/comments/13rnwyq/guide_on_building_training_datasets_from/
false
false
self
7
null
Oobabooga with GPU Layers (RTX 3080)
6
[removed]
2023-05-25T19:35:28
https://www.reddit.com/r/LocalLLaMA/comments/13rr15b/oobabooga_with_gpu_layers_rtx_3080/
Deformator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rr15b
false
null
t3_13rr15b
/r/LocalLLaMA/comments/13rr15b/oobabooga_with_gpu_layers_rtx_3080/
false
false
default
6
null
So 64k token window needs around 64 GB RAM? Some 80ies c64/speccy vibes. And 4gb Vicuna on a low end PC is akin to using Altair 8800. 640k ought to be enough for anybody? Or this time MS wont set hard limits on their new Windows AI? Would love to see the next 40 years...
1
So 64k token window needs around 64 GB RAM? Some 80ies c64/speccy vibes. And 4gb Vicuna on a low end PC is akin to using Altair 8800. 640k ought to be enough for anybody? Or this time MS wont set hard limits on their new Windows AI? Would love to see the next 40 years...
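The context-window cost is dominated by the KV cache, which grows linearly with tokens: two tensors (K and V) per layer, hidden-size wide. A rough fp16 estimate for a 13B-class shape (numbers illustrative):

```python
def kv_cache_gb(n_tokens: int, n_layers: int, hidden: int, bytes_per=2) -> float:
    """fp16 KV cache: 2 tensors (K and V) x layers x hidden x bytes, per token."""
    return 2 * n_layers * hidden * bytes_per * n_tokens / 1024**3

# 13B-class shape (40 layers, hidden size 5120) at a 64k context:
print(f"{kv_cache_gb(64_000, 40, 5120):.1f} GB")
```

So yes, tens of gigabytes just for the cache at 64k, before the weights themselves.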
2023-05-25T19:50:32
https://www.reddit.com/r/LocalLLaMA/comments/13rrf2z/so_64k_token_window_needs_around_64_gb_ram_some/
NancyAurum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rrf2z
false
null
t3_13rrf2z
/r/LocalLLaMA/comments/13rrf2z/so_64k_token_window_needs_around_64_gb_ram_some/
false
false
self
1
null
Anyone know of a complete CPU-only 13 / 16 B model which DEFINITELY will work when downloaded onto Linux?
1
[removed]
2023-05-25T20:21:41
https://www.reddit.com/r/LocalLLaMA/comments/13rs7y2/anyone_know_of_a_complete_cpuonly_13_16_b_model/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rs7y2
false
null
t3_13rs7y2
/r/LocalLLaMA/comments/13rs7y2/anyone_know_of_a_complete_cpuonly_13_16_b_model/
false
false
default
1
null
I've gotten around 40 tokens per second in WizardLM-13B-Uncensored GGML using CPU only with a Ryzen 5 2400G and 32GB of RAM. Is this the speed I should expect? Could I improve this? How?
2
I'm not sure if I configured webui.py correctly. I have: > run_cmd(f"python server.py --cpu {CMD_FLAGS}", environment=True) My CPU barely reaches 50% of its power; how can I use all of it to improve the LLM's performance?
2023-05-25T20:25:32
https://www.reddit.com/r/LocalLLaMA/comments/13rsbdy/im_gotten_around_40_tokens_per_second_in/
Nazuna_Vampi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rsbdy
false
null
t3_13rsbdy
/r/LocalLLaMA/comments/13rsbdy/im_gotten_around_40_tokens_per_second_in/
false
false
self
2
null
Can a 4-bit GGML model be converted BACK into a PyTorch model with the 4-bit quantization intact?
3
I'm interested in getting my hands on the latest models people are making in their 4-bit quantizations for various experiments, such as getting them to run in frameworks outside of llama.cpp on macOS, such as Chat-MLC. Does anyone know if any of the popular 4-bit quantized GGML models can be turned BACK into a PyTorch model that maintains the 4-bit quantization? Or am I looking at having to use something like Google Colab or SageMaker to create a non-GGML quantized model myself?
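For reference, the arithmetic is simple in either direction: GGML's q4_0-style format stores a per-block scale plus 4-bit integers and dequantizes as scale * (q - 8), so a PyTorch port is mostly a tensor-layout problem rather than a math one. Simplified sketch (real blocks pack 32 nibbles plus an fp16 scale; that layout is omitted here):

```python
def dequant_q4_0(scale: float, nibbles: list[int]) -> list[float]:
    """GGML q4_0-style block: weight = scale * (q - 8), with q in 0..15."""
    assert all(0 <= q <= 15 for q in nibbles)
    return [scale * (q - 8) for q in nibbles]

def quant_q4_0(weights: list[float]):
    """Inverse direction: pick a scale and round each weight to a nibble."""
    scale = max(abs(w) for w in weights) / 8 or 1.0
    qs = [max(0, min(15, round(w / scale) + 8)) for w in weights]
    return scale, qs

scale, qs = quant_q4_0([0.25, -1.0, 0.0, 0.6])
print(dequant_q4_0(scale, qs))  # -> [0.25, -1.0, 0.0, 0.625]
```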
2023-05-25T20:30:49
https://www.reddit.com/r/LocalLLaMA/comments/13rsg6i/can_a_4bit_ggml_model_be_converted_back_into_a/
altoidsjedi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rsg6i
false
null
t3_13rsg6i
/r/LocalLLaMA/comments/13rsg6i/can_a_4bit_ggml_model_be_converted_back_into_a/
false
false
self
3
null
Any UI that allows Windows + AMD GPU ?
12
Hi! It's been two weeks now that I have been doing searches and trying to "play" with local LLMs. I currently use Kobold for Windows; the problem is that it can be REALLY slow (2 min to get an answer). I know that Kobold only uses the CPU, so I would like to try running a model on my GPU. The problem is that I feel like I have the worst possible system for AI: Windows 10 and an AMD 6800XT 16GB. Until now, I have found no "user-friendly" UI that would let me do that. Do you have any suggestions that could help me (except a Linux dual-boot, please)?
2023-05-25T20:57:22
https://www.reddit.com/r/LocalLLaMA/comments/13rt4gu/any_ui_that_allows_windows_amd_gpu/
Zeink303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rt4gu
false
null
t3_13rt4gu
/r/LocalLLaMA/comments/13rt4gu/any_ui_that_allows_windows_amd_gpu/
false
false
self
12
null
Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
463
Hold on to your llamas' ears (gently), here's a model list dump: * [TheBloke/guanaco-7B-GPTQ](https://huggingface.co/TheBloke/guanaco-7B-GPTQ) * [TheBloke/guanaco-7B-GGML](https://huggingface.co/TheBloke/guanaco-7B-GGML) * [TheBloke/guanaco-13B-GPTQ](https://huggingface.co/TheBloke/guanaco-13B-GPTQ) * [TheBloke/guanaco-13B-GGML](https://huggingface.co/TheBloke/guanaco-13B-GGML) * [TheBloke/guanaco-33B-GPTQ](https://huggingface.co/TheBloke/guanaco-33B-GPTQ) * [TheBloke/guanaco-33B-GGML](https://huggingface.co/TheBloke/guanaco-33B-GGML) * [TheBloke/guanaco-65B-GPTQ](https://huggingface.co/TheBloke/guanaco-65B-GPTQ) * [TheBloke/guanaco-65B-GGML](https://huggingface.co/TheBloke/guanaco-65B-GGML) Pick yer size and type! Merged fp16 HF models are also available for 7B, 13B and 65B (33B Tim did himself.) Apparently it's good - very good! ​ https://preview.redd.it/eiydwg1t332b1.png?width=556&format=png&auto=webp&s=bb745578fd073d2804d6556738b733f7e6221555
2023-05-25T21:10:56
https://www.reddit.com/r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/
The-Bloke
self.LocalLLaMA
1970-01-01T00:00:00
2
{'gid_2': 2, 'gid_3': 1}
13rthln
false
null
t3_13rthln
/r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/
false
false
https://a.thumbs.redditm…5oQIvP-rZ5z4.jpg
463
{'enabled': False, 'images': [{'id': 'X1TGXxhU81djPaLP6CQ6L9Ob0jbFRFQKLR1DT-I26Qo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=108&crop=smart&auto=webp&s=43337d1d3fb92b7a502d95c3760b87613e7405a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=216&crop=smart&auto=webp&s=74b0ed3e98dfd81faf2a1e8de31db87acec2a6e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=320&crop=smart&auto=webp&s=8efbbfb8872720066eb5fb4ea64a9765a809eb8d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=640&crop=smart&auto=webp&s=00263fe05031a7b33714195aec1a62e283d76074', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=960&crop=smart&auto=webp&s=5d03b2380726c9a0fbd4ecdeb080c161b0e10a55', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?width=1080&crop=smart&auto=webp&s=9078e4bef6b49f37f69f39ee86b957b0bcf3625d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/V9mR2Ji0gLahvgVzLBOjJeUJqKsP6OWpC6b9vbgiBH8.jpg?auto=webp&s=5fcc94a70977d1da0e16045c48ad1080c57d1bbf', 'width': 1200}, 'variants': {}}]}
Can't find the "start" file.
1
[removed]
2023-05-25T21:12:58
https://www.reddit.com/r/LocalLLaMA/comments/13rtjj0/cant_find_the_start_file/
Majestic-Band8351
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rtjj0
false
null
t3_13rtjj0
/r/LocalLLaMA/comments/13rtjj0/cant_find_the_start_file/
false
false
default
1
null
Hardware resources needed for training vs running local LLMs?
1
I'm fairly new to the topic of running a local LLM, and I can't seem to find a clear answer on which hardware resources affect training vs. running. If I use someone else's weights, or train on a cloud service, what hardware resources determine the speed and "capability" of the LLM? Is VRAM important to both training and running, and how does it affect each? If I have a 12GB GPU, how will my run-time performance differ from a 16GB GPU? Does the runtime VRAM limit the number of tokens the model can handle, or does it simply affect output speed while the outputs stay basically the same? Can someone break that down for me?
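The short version: inference needs roughly the weights (plus KV cache), while full fine-tuning with Adam needs weights + gradients + two optimizer moments, several times the weight memory. Back-of-envelope (fp16 weights, fp32 Adam states; activations and overhead ignored):

```python
def inference_gb(params_b: float, bytes_per=2) -> float:
    """Weight memory only: billions of params x bytes per param ~ GB."""
    return params_b * bytes_per

def full_training_gb(params_b: float) -> float:
    """fp16 weights (2) + fp16 grads (2) + fp32 Adam m (4) and v (4) bytes/param."""
    return params_b * (2 + 2 + 4 + 4)

print(inference_gb(7), full_training_gb(7))  # -> 14 84
```

This is why LoRA/QLoRA matter: they shrink the trainable-state term, leaving memory close to inference cost.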
2023-05-25T21:15:26
https://www.reddit.com/r/LocalLLaMA/comments/13rtlwz/hardware_resources_needed_for_training_vs_running/
Cunninghams_right
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rtlwz
false
null
t3_13rtlwz
/r/LocalLLaMA/comments/13rtlwz/hardware_resources_needed_for_training_vs_running/
false
false
self
1
null
Best model for NVIDIA RTX A3000
0
[removed]
2023-05-25T21:27:35
https://www.reddit.com/r/LocalLLaMA/comments/13rtx4b/best_model_for_nvidia_rtx_a3000/
ploz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rtx4b
false
null
t3_13rtx4b
/r/LocalLLaMA/comments/13rtx4b/best_model_for_nvidia_rtx_a3000/
false
false
default
0
null
api supported local llm
1
[removed]
2023-05-25T21:34:47
https://www.reddit.com/r/LocalLLaMA/comments/13ru3si/api_supported_local_llm/
profsherlock345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ru3si
false
null
t3_13ru3si
/r/LocalLLaMA/comments/13ru3si/api_supported_local_llm/
false
false
default
1
null
QLoRa: unexpected keyword argument 'load_in_4bit' with pythia model
2
[removed]
2023-05-25T22:05:47
[deleted]
1970-01-01T00:00:00
0
{}
13ruwgs
false
null
t3_13ruwgs
/r/LocalLLaMA/comments/13ruwgs/qlora_unexpected_keyword_argument_load_in_4bit/
false
false
default
2
null
Training code and dataset for ProfitsBot_V0 experiments
16
The other day, I released my experimental model trained on reddit financial, investing and crypto subreddits. * discussion -><https://www.reddit.com/r/LocalLLaMA/comments/13r2oh3/comment/jlm1k0t/?context=3> * model -> https://huggingface.co/winddude/pb_lora_7b_v0.1 I've now released the code for training and creating the dataset, <https://github.com/getorca/ProfitsBot_V0_OLLM>, as well as the training dataset, <https://huggingface.co/datasets/winddude/reddit_finance_43_250k> ## This training introduces a few novel-"ish" concepts. ### 1 - a new prompt template using a style similar to Open Assistant, and what we believe is used by OpenAI. eg: ```markdown <|SYSTEM|>[a general instruction]<|END_SYSTEM|>\n <|USER_INPUT|>[The user input; I use JSON, since it fits my use case] <|END_USER_INPUT|> <|RESPONSE|>[the generated reply]<|END_RESPONSE|> ``` * I think this has a few distinct advantages over `###`: * `###` is often seen in markdown, which can make it harder to process text inputs and generations * `<|*|>` tokens are rarely seen in the wild. * `<|SYSTEM|>` provides a nice separation for the fine-tune's intent; eg, you could have a couple of general things one fine-tune is trained on. ### 2 - Training with causal LM vs seq2seq * I can't figure out why many of the other instruct models use seq2seq for the data collator; maybe that's just what the first one used * causal LM is how these foundational models are trained, and I think it makes the most sense to keep using it in fine-tuning
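A small helper that renders this template (special-token strings taken verbatim from the post; the render function itself is illustrative):

```python
import json

# Render the post's prompt template; the <|...|> token strings come
# from the template described above.
def render(system: str, user_input: dict, response: str = "") -> str:
    return (
        f"<|SYSTEM|>{system}<|END_SYSTEM|>\n"
        f"<|USER_INPUT|>{json.dumps(user_input)}<|END_USER_INPUT|>\n"
        f"<|RESPONSE|>{response}"
    )

print(render("Summarize the thread.", {"title": "GME?", "body": "to the moon"}))
```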
2023-05-25T23:27:52
https://www.reddit.com/r/LocalLLaMA/comments/13rwuhu/training_code_and_dataset_for_profitsbot_v0/
wind_dude
self.LocalLLaMA
2023-05-25T23:38:05
0
{}
13rwuhu
false
null
t3_13rwuhu
/r/LocalLLaMA/comments/13rwuhu/training_code_and_dataset_for_profitsbot_v0/
false
false
self
16
{'enabled': False, 'images': [{'id': 'K17jxhEiS3GLm_8LeCSvyhVjkzErMe0Bfe1j1E2_NhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=108&crop=smart&auto=webp&s=4fd6e56f2fba3081430c83cf277da90a31612fa6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=216&crop=smart&auto=webp&s=f327386bbcf3aa02f6f4085bf69757e4f51d8368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=320&crop=smart&auto=webp&s=a2290d3b6e5f929272e2bb4d920a5ae0080b9a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=640&crop=smart&auto=webp&s=d531f39e4cbe0f148edb4b867a563bc46201f720', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=960&crop=smart&auto=webp&s=21392cf6832b3843464a8113b337fc7607df7168', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=1080&crop=smart&auto=webp&s=4639fda70e4f207ccbb780c566ecde06e92bf87b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?auto=webp&s=794e674e887d98d624728292908e45bfa4d87324', 'width': 1200}, 'variants': {}}]}
Has anyone gotten the 1-click installer oobabooga for Windows working with GGMLv3 models using the n-gpu-layers offload? I enable the layers via the UI, but it doesn't change the inference speed at all. Using a 3080 with 10GB VRAM and 32GB DDR4 RAM.
1
[removed]
2023-05-25T23:45:04
https://www.reddit.com/r/LocalLLaMA/comments/13rx8f6/has_anyone_gotten_1_click_installer_oobabooga_for/
ImOnRdit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13rx8f6
false
null
t3_13rx8f6
/r/LocalLLaMA/comments/13rx8f6/has_anyone_gotten_1_click_installer_oobabooga_for/
false
false
default
1
null
It is oddly fun to watch the related word vomit unfold
20
WizardLM30B-uncensored-GPTQ prompt: What year comes after 1994? Let's think step by step. The current year is 2021, so we need to find out what year came before 1994. To do this, we can subtract one from 1994 and get 1993. Therefore, the answer to the question is that the year that comes after 1994 is 1995. Sometimes "Let's think step by step" causes a related word vomit that seemingly doesn't help but still somehow leads to a better answer. I have to admit I love watching it do this word by word while I wonder where it is going or if it will be able to answer correctly.
2023-05-26T03:02:26
https://www.reddit.com/r/LocalLLaMA/comments/13s1gm8/it_is_oddly_fun_to_watch_the_related_word_vomit/
pace_gen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s1gm8
false
null
t3_13s1gm8
/r/LocalLLaMA/comments/13s1gm8/it_is_oddly_fun_to_watch_the_related_word_vomit/
false
false
self
20
null
What is the difference between Alpaca, Llama and Lora ?
0
[removed]
2023-05-26T04:07:20
https://www.reddit.com/r/LocalLLaMA/comments/13s2pq2/what_is_the_difference_between_alpaca_llama_and/
maroxtn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s2pq2
false
null
t3_13s2pq2
/r/LocalLLaMA/comments/13s2pq2/what_is_the_difference_between_alpaca_llama_and/
false
false
default
0
null
Interesting paper on the false promises of current open-source LLM models that are finetuned on GPT-4 outputs
151
Paper: https://arxiv.org/abs/2305.15717 Abstract: > An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
2023-05-26T05:11:38
https://www.reddit.com/r/LocalLLaMA/comments/13s3xvq/interesting_paper_on_the_false_promises_of/
Z1BattleBoy21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s3xvq
false
null
t3_13s3xvq
/r/LocalLLaMA/comments/13s3xvq/interesting_paper_on_the_false_promises_of/
false
false
self
151
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
What is 65b means? Is that's it?
0
[removed]
2023-05-26T06:10:25
[deleted]
1970-01-01T00:00:00
0
{}
13s5078
false
null
t3_13s5078
/r/LocalLLaMA/comments/13s5078/what_is_65b_means_is_thats_it/
false
false
default
0
null
Intel arc gpu price drop - inexpensive llama.cpp opencl inference accelerator?
20
Intel is a much-needed competitor in the GPU space: Nvidia's GPUs are very expensive, and AMD's aren't much better. Intel seems to be undercutting its competitors with this price drop: [https://www.reddit.com/r/pcmasterrace/comments/13rwyds/intel\_drops\_the\_bomb\_on\_nvidia\_and\_amd\_by/](https://www.reddit.com/r/pcmasterrace/comments/13rwyds/intel_drops_the_bomb_on_nvidia_and_amd_by/) They might drop the price on their A770 16GB too. Has anybody tried llama.cpp on Intel's GPU lineup?
2023-05-26T06:29:00
https://www.reddit.com/r/LocalLLaMA/comments/13s5bpp/intel_arc_gpu_price_drop_inexpensive_llamacpp/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s5bpp
false
null
t3_13s5bpp
/r/LocalLLaMA/comments/13s5bpp/intel_arc_gpu_price_drop_inexpensive_llamacpp/
false
false
self
20
null
LoRA and beam search, sometimes getting empty response.
2
[deleted]
2023-05-26T06:44:18
[deleted]
1970-01-01T00:00:00
0
{}
13s5lbi
false
null
t3_13s5lbi
/r/LocalLLaMA/comments/13s5lbi/lora_and_beam_search_sometimes_getting_empty/
false
false
default
2
null
Has anyone here done quant method benchmarking?
3
This comment has been removed in protest of the Reddit API changes of June 2023. Consider visiting https://monero.town for a privacy preserving alternative to Reddit.
2023-05-26T06:55:25
https://www.reddit.com/r/LocalLLaMA/comments/13s5s26/has_anyone_here_done_quant_method_benchmarking/
monerobull
self.LocalLLaMA
2023-06-15T00:46:10
0
{}
13s5s26
false
null
t3_13s5s26
/r/LocalLLaMA/comments/13s5s26/has_anyone_here_done_quant_method_benchmarking/
false
false
self
3
null
Do instruction models never return an end token?
2
[removed]
2023-05-26T07:46:09
https://www.reddit.com/r/LocalLLaMA/comments/13s6nfn/do_instruction_models_never_return_an_end_token/
KillerMiller13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s6nfn
false
null
t3_13s6nfn
/r/LocalLLaMA/comments/13s6nfn/do_instruction_models_never_return_an_end_token/
false
false
default
2
null
TIP: Poe.com - Access to GPT4 and Claude+ with bunch of other AIs and Bots for less $ than ChatGPT+
2
[removed]
2023-05-26T08:37:05
https://www.reddit.com/r/LocalLLaMA/comments/13s7gpl/tip_poecom_access_to_gpt4_and_claude_with_bunch/
nodating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s7gpl
false
null
t3_13s7gpl
/r/LocalLLaMA/comments/13s7gpl/tip_poecom_access_to_gpt4_and_claude_with_bunch/
false
false
default
2
null
Koboldcpp server with Shadowserver push
2
[removed]
2023-05-26T09:46:54
https://www.reddit.com/r/LocalLLaMA/comments/13s8kx5/koboldcpp_server_with_shadowserver_push/
qado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s8kx5
false
null
t3_13s8kx5
/r/LocalLLaMA/comments/13s8kx5/koboldcpp_server_with_shadowserver_push/
false
false
default
2
null
Wizard Vicuna 13B and 30B comparison
1
[removed]
2023-05-26T09:55:15
https://www.reddit.com/r/LocalLLaMA/comments/13s8q1m/wizard_vicuna_13b_and_30b_comparison/
Chickenbuttlord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s8q1m
false
null
t3_13s8q1m
/r/LocalLLaMA/comments/13s8q1m/wizard_vicuna_13b_and_30b_comparison/
false
false
default
1
null
small script that automatically adds V1/2/3 to your ggmls
11
2023-05-26T10:00:12
https://codeberg.org/mountain/llm_scripts/src/branch/main/rename_ggml.sh
Evening_Ad6637
codeberg.org
1970-01-01T00:00:00
0
{}
13s8t6c
false
null
t3_13s8t6c
/r/LocalLLaMA/comments/13s8t6c/small_script_that_adds_automatically_v123_to_your/
false
false
https://a.thumbs.redditm…8783vxuukD88.jpg
11
{'enabled': False, 'images': [{'id': 'qIiGQKDe6UZZS6CaXw1Nt9wavUQkgS2JwfB9DZxIjkA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?width=108&crop=smart&auto=webp&s=153f3fc95deadaf2af93552678eac15ab54c8596', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?width=216&crop=smart&auto=webp&s=eebfa027c8ec33bf46d4ee0726143b9ebff5cf60', 'width': 216}], 'source': {'height': 290, 'url': 'https://external-preview.redd.it/APEt4tAyqXDWw61XmOduZv3xAHK1W3o8QJYai_KznPI.jpg?auto=webp&s=18480e27de9cb25f1006476b0d6b56806e59e2fa', 'width': 290}, 'variants': {}}]}
Analyzing whole repository?
18
Hi! I have a task to create documentation from a GitLab repository that is hosted internally. Ideally, I want to use a locally run LLM to go through the whole codebase and then be able to ask questions about it, like Repo-Chat, but running locally so we don't have to share our repository with OpenAI or anyone else. Has anyone done something like this, and does anyone have pointers on what might work? Thanks!!!
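As a rough local sketch of the retrieval half of such a tool (the bag-of-words similarity below is a crude stand-in for a real embedding model, and the chunk size is an arbitrary choice):

```python
import math
import re
from collections import Counter

def chunk(text, size=40):
    """Split a file into fixed-size word chunks (size is an arbitrary choice)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def vectorize(text):
    """Crude stand-in for an embedding: lowercase bag-of-words counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def top_chunks(files, question, k=2):
    """Rank every chunk from every file against the question."""
    chunks = [c for text in files.values() for c in chunk(text)]
    q = vectorize(question)
    return sorted(chunks, key=lambda c: cosine(vectorize(c), q), reverse=True)[:k]

# Toy "repository" standing in for a real GitLab checkout
repo = {"auth.py": "def login(user): check password hash and issue session token",
        "db.py": "def connect(): open a sqlite connection with retry logic"}
print(top_chunks(repo, "how does login and password checking work?", k=1))
```

The retrieved chunks would then be pasted into the local LLM's prompt; in practice you would swap the bag-of-words scoring for embeddings from a local model and a vector store.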
2023-05-26T10:13:04
https://www.reddit.com/r/LocalLLaMA/comments/13s927s/analyzing_whole_repository/
baconwasright
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13s927s
false
null
t3_13s927s
/r/LocalLLaMA/comments/13s927s/analyzing_whole_repository/
false
false
self
18
null
How do I host the api online?
1
[removed]
2023-05-26T11:03:34
https://www.reddit.com/r/LocalLLaMA/comments/13sa2f2/how_do_i_host_the_api_online/
xoitx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sa2f2
false
null
t3_13sa2f2
/r/LocalLLaMA/comments/13sa2f2/how_do_i_host_the_api_online/
false
false
default
1
null
Is the Meta llama original model trained with program code too?
1
[removed]
2023-05-26T11:04:18
https://www.reddit.com/r/LocalLLaMA/comments/13sa2xz/is_the_meta_llama_original_model_trained_with/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sa2xz
false
null
t3_13sa2xz
/r/LocalLLaMA/comments/13sa2xz/is_the_meta_llama_original_model_trained_with/
false
false
default
1
null
Applying All Recent Innovations To Train a Code Model
7
2023-05-26T11:12:30
https://refact.ai/blog/2023/applying-recent-innovations-to-train-model/
kateklink
refact.ai
1970-01-01T00:00:00
0
{}
13sa8u9
false
null
t3_13sa8u9
/r/LocalLLaMA/comments/13sa8u9/applying_all_recent_innovations_to_train_a_code/
false
false
default
7
null
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
265
2023-05-26T12:56:17
https://twitter.com/_philschmid/status/1662076732524863489
mrbluesneeze
twitter.com
1970-01-01T00:00:00
0
{}
13scik0
false
{'oembed': {'author_name': 'Philipp Schmid', 'author_url': 'https://twitter.com/_philschmid', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">New Open-source LLMs! 🤯\xa0The Falcon has landed! 🦅\xa0TII just released two new open-source LLMs called Falcon, which comes into size 7B trained on 1.5T tokens and 40B trained on 1T Tokens. 🚀🔥<br><br>7B: <a href="https://t.co/smoThB7LSX">https://t.co/smoThB7LSX</a> <br>40B: <a href="https://t.co/DEqID5C7qw">https://t.co/DEqID5C7qw</a> <a href="https://t.co/pQkSVP3y7u">pic.twitter.com/pQkSVP3y7u</a></p>&mdash; Philipp Schmid (@_philschmid) <a href="https://twitter.com/_philschmid/status/1662076732524863489?ref_src=twsrc%5Etfw">May 26, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_philschmid/status/1662076732524863489', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_13scik0
/r/LocalLLaMA/comments/13scik0/new_opensource_llms_the_falcon_has_landed_7b_and/
false
false
https://b.thumbs.redditm…sPpxH9zCgo_k.jpg
265
{'enabled': False, 'images': [{'id': 'uUsnb_v4Ru_rwnODaJePm0hRfmh7U7vj0j9XyVpfMKU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JL5elBF6dwFuuJxbJDAImz8yiS7f0eRbhyRLSmm1PXs.jpg?width=108&crop=smart&auto=webp&s=3f17da9236331838ae0352b4d07bd13f552442e7', 'width': 108}], 'source': {'height': 73, 'url': 'https://external-preview.redd.it/JL5elBF6dwFuuJxbJDAImz8yiS7f0eRbhyRLSmm1PXs.jpg?auto=webp&s=73bca9add627696868cedb2169c4f4f85a077bc4', 'width': 140}, 'variants': {}}]}
In a few weeks will the open LLMs be more based around LoRA stacking, or is there a reason why that doesn't apply as much here as it does with stable diffusion improvements?
28
I know that with stable diffusion, a lot of the innovation is happening with "stackable" LoRAs. Someone builds a LoRA for something, then adds it to a model, then someone else can make another one and add it to that same base model. It seems that with the open LLMs though there's not much stacking going on. And not much LoRA, it's more fine tuning. How come? In a few weeks will the open LLMs be more based around LoRA stacking, or is there a reason why that doesn't apply as much here as it does with stable diffusion improvements?
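In weight space, the stacking in question is just addition of low-rank deltas. A minimal numpy sketch (toy sizes and made-up scalings, purely for illustration) of why two independently trained LoRAs can be merged onto one base model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # weight dim and LoRA rank (toy sizes)
W = rng.normal(size=(d, d))      # frozen base weight

# Two independently trained LoRA adapters, each a low-rank delta B @ A
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
s1, s2 = 0.5, 0.8                # per-adapter scaling (alpha / rank in practice)

# "Stacking" is just adding both deltas onto the same base weight
W_stacked = W + s1 * (B1 @ A1) + s2 * (B2 @ A2)

# Order of merging doesn't matter, since the deltas are plain additions
W_other_order = (W + s2 * (B2 @ A2)) + s1 * (B1 @ A1)
assert np.allclose(W_stacked, W_other_order)
```

The catch, compared to stable diffusion, is that the deltas don't interact until inference, so two adapters trained for conflicting behaviors can interfere; that may be part of why full fine-tunes dominate here so far.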
2023-05-26T13:47:14
https://www.reddit.com/r/LocalLLaMA/comments/13sdq6r/in_a_few_weeks_will_the_open_llms_be_more_based/
TikkunCreation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sdq6r
false
null
t3_13sdq6r
/r/LocalLLaMA/comments/13sdq6r/in_a_few_weeks_will_the_open_llms_be_more_based/
false
false
self
28
null
Has anyone tried the Llama-30B-SuperCOT model?
19
ausboss's [Llama-30B-SuperCOT](https://huggingface.co/ausboss/llama-30b-supercot) currently scores the highest on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The scores on other benchmarks are also pretty high. Has anyone tried it? How is it?
2023-05-26T14:28:38
https://www.reddit.com/r/LocalLLaMA/comments/13seqbk/has_anyone_tried_the_llama30bsupercot_model/
Balance-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13seqbk
false
null
t3_13seqbk
/r/LocalLLaMA/comments/13seqbk/has_anyone_tried_the_llama30bsupercot_model/
false
false
self
19
{'enabled': False, 'images': [{'id': '3NhuVQnkA0Pfk0vhBIM9UF_4-lpeychG70elr2LYzS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=108&crop=smart&auto=webp&s=c45efb441ce087e04a04c10a965dbe160b037dab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=216&crop=smart&auto=webp&s=1a9746f31e20ce34bf187d27208711b7320fcf8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=320&crop=smart&auto=webp&s=72201436372b01c2d94c7718b2534856ad5f93c7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=640&crop=smart&auto=webp&s=7bc916d47ce15645add9166ef2fe957a13f46539', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=960&crop=smart&auto=webp&s=74ae991d36ed8ccb80568dd84fc6b59ea3bb1b84', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?width=1080&crop=smart&auto=webp&s=6f466c54852d51ce50dfceefc021814391a03d39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5Is94XG48bkc5cSJOhw0pO5sWhe7OqESt4vYRIUAHuY.jpg?auto=webp&s=cc44462a4a257057f2ee6b53cdce0ae130d6f9e9', 'width': 1200}, 'variants': {}}]}
Has anyone looked into using the Structured Sparsity feature of RTX 30xx/40xx GPUs together with quantization?
17
Structured Sparsity is the idea of not executing multiplication/addition ops when the operand is zero, i.e. sparse matrix multiplication. And perhaps not even storing the weight in memory if it's zero, i.e. storing sparse tensors in VRAM. NVidia has benchmarks showing upwards of 2x performance increases by sparsifying common pre-trained models, and possibly a similar reduction in the size of weights. It seems like the process is essentially about removing parameters that are zero and using sparse tensor operations instead of dense operations at inference time. Structured Sparsity has been around since the Ampere (e.g. RTX 30xx cards) and Ada (e.g. RTX 40xx cards) GPU architectures. But I can't really find many mentions of it outside of research papers. It would be very interesting to know what percentage of the parameters (weights/biases) of the 4/8-bit quantized LLaMA models are pushed to zero. That might indicate whether sparsity is a possible avenue to get a 65B model to run on a 24GB card, imagine that! :) Can someone check? [https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/](https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/) Edit: Found this paper/project which could be very interesting to try on a quantized LLaMA model https://github.com/IST-DASLab/sparsegpt > In addition, our method allows sparsity to be compounded with weight quantization techniques (Frantar et al., 2022a): for instance, we can induce 50% weight sparsity jointly with 4-bit weight quantization with negligible perplexity increase on OPT-175B.
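For intuition, here is a small numpy sketch of the 2:4 pattern itself (not NVIDIA's actual pruning pipeline; magnitude pruning is just one possible criterion):

```python
import numpy as np

def prune_2_4(w):
    """Apply 2:4 structured sparsity: in every group of 4 consecutive
    weights, zero out the 2 with the smallest magnitude -- the pattern
    Ampere/Ada sparse tensor cores can accelerate."""
    flat = w.reshape(-1, 4).copy()
    # indices of the 2 smallest-magnitude entries in each group of 4
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(w.shape)

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 8))
sparse = prune_2_4(w)
# exactly half the weights are zero, in a hardware-friendly pattern
assert (sparse.reshape(-1, 4) == 0).sum(axis=1).min() == 2
```

Whether this works on an already-quantized model without hurting perplexity is exactly the open question; SparseGPT's contribution is choosing *which* weights to drop so the layer outputs stay close, rather than plain magnitude pruning.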
2023-05-26T14:53:18
https://www.reddit.com/r/LocalLLaMA/comments/13sfci8/has_anyone_looked_into_using_the_structured/
mikljohansson
self.LocalLLaMA
2023-05-27T15:18:10
0
{}
13sfci8
false
null
t3_13sfci8
/r/LocalLLaMA/comments/13sfci8/has_anyone_looked_into_using_the_structured/
false
false
self
17
{'enabled': False, 'images': [{'id': 'tSq8h9R-ARXDSYfV5a7FXwBdnkMDTGMv1jDxpwf-83k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=108&crop=smart&auto=webp&s=bbea50f3733bf9ea2f7811feffe31167853a5765', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=216&crop=smart&auto=webp&s=389646bd4008e164c45f5d281f54164160bd917b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=320&crop=smart&auto=webp&s=79a62bafa559a4b7f0df0fb17a6085fa7449785d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=640&crop=smart&auto=webp&s=f45e32e7f8f097c5aa796bef554e8478d3bacd59', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?width=960&crop=smart&auto=webp&s=0adb82da523acbca5b949488e2fc8857ad1d5d93', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/XDGhDaN6_5z08AGEtThGmiFkIUxpyf2e8B4H31ekPxc.jpg?auto=webp&s=f776b16bb3d9dbb8d3370d5463f0d4e5b351a82e', 'width': 1000}, 'variants': {}}]}
Would local LLMs in the future be able to run an AI-powered single-player roleplaying adventure game in a customized world?
62
Hello all, I am trying to see if it would be possible to use a local LLM and langchain/plugins/agents in the future to reach a simple goal: a dynamic, AI-powered single-player roleplaying adventure game that is persistent. To reach this goal, please see the list of requirements below.

1. 100K-plus context tokens – The biggest setback currently for all LLMs is the limited context, or "long-term memory". I read that there are some models with up to 100K tokens (Claude), and ChatGPT-4 currently has a 32K-token context. This sounds very promising, but MPT-7B (65K tokens) requires very expensive graphics cards and is not really available. My current custom world (WIP) is 18,098 words long but will more than likely be 50K to 100K words when completed.

2. Long-term memory – I read that the vector database setup is great for pulling context and summarizing it to fit into the limited context (2K for local LLMs, 8K for the ChatGPT-4 API). This would be great for saving the actual events of the adventure as it unfolds while also pulling from the custom world source material to create output prompts. If there is a "good" local LLM with a high context token amount (above 100K), it might be possible to have a sort of "long-term memory". A PrivateGPT-like tool would be great too: you could just throw the 100K-word document into the system on top of the 100K-plus context tokens.

3. Access to plugins – As we know, most if not all LLMs, and especially local LLMs, are terrible at math. When will local LLMs actually be proficient in math? The custom world has a lot of formulas for attack, damage, evade percentage, etc. that are a lot more complicated than D&D 5th edition. Wolfram works well for ChatGPT-4; Zapier would be a great plugin too. For example, all NPCs will have custom stats, personalities and items/equipment. Would it be possible to use plugins/agents to edit a Google Doc or saved Word file to record all of these custom NPC character sheets? It would be even better if it were encrypted so the player does not see this information. This would also work as a "save file": the plugin/agent would summarize the adventure story in a Google Doc or Word file. The last doc would be the player character sheet and inventory. All of these docs could be pulled as context by the vector database to progress the story. Would it be possible for local LLMs to have access to plugins in the future?

4. NPC agents – I am not too familiar with LLM agents, but would it be possible to have main NPCs in your party controlled by NPC agents that are separate from the storyteller? These NPC agents would have their own personalities and react to the storyteller's prompts just as the player could.

5. Dynamic and uncensored storyteller – I ran many experiments and trial runs with ChatGPT-4, and the censored version really puts a damper on medieval fantasy roleplaying. I am not even talking about erotic content: I cannot even roleplay murdering a merchant that I robbed, as "he was unarmed and he gave you your gold". I am not asking for much, just a dynamic, creative and uncensored storyteller, for immersion's sake. It is also very important that the storyteller doesn't railroad you and listens to instructions well, prompting you for each action or choice you make.

6. Text-to-chat and text-to-image – I think local LLMs already have text chat, but it would also be cool to have text-to-image included. When the storyteller replies with a description of the environment, it would be cool if it could also share an image of it for immersion. Stable Diffusion is a good open-source tool; would it be possible in the future for a local LLM API to have SD create an image from the prompt?

Technical requirements – I currently have pretty decent specs on my computer, but I am willing to throw down big bucks if all of these requirements could be run on one of those A100 or H100 cards, a more affordable gaming graphics card, or the most expensive CPUs for AI. I know that the advancements are phenomenal, and I would like to thank all those who worked very hard to bring updates and new models.

TL;DR: When do you think it will be possible to have an AI-powered, persistent, single-player roleplaying adventure game in a customized world that could fulfil all of these requirements (or better alternative solutions)?

Thank you all in advance!
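The "long-term memory" bookkeeping described in the requirements above can be sketched with a simple token-budget trimmer. Counting words instead of real tokens and using a placeholder summarizer are both assumptions here; a real setup would call the LLM itself to write the summary:

```python
def trim_history(turns, budget=100,
                 summarize=lambda old: "[summary of %d earlier turns]" % len(old)):
    """Keep the newest turns that fit within `budget` (counted in words,
    a stand-in for real tokens) and collapse everything older into one
    summary line -- a rolling-memory scheme for a persistent game."""
    kept, used = [], 0
    for turn in reversed(turns):
        cost = len(turn.split())
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    kept.reverse()
    dropped = turns[:len(turns) - len(kept)]
    return ([summarize(dropped)] if dropped else []) + kept

# Ten 32-word turns against a 100-word budget: only the last 3 fit verbatim
history = ["turn %d: %s" % (i, "word " * 30) for i in range(10)]
ctx = trim_history(history, budget=100)
print(ctx[0])   # the oldest surviving content is the summary line
```

Combined with retrieval over the world document, this is roughly how the 2K-context limitation gets worked around today.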
2023-05-26T15:39:11
https://www.reddit.com/r/LocalLLaMA/comments/13sgiss/would_local_llms_in_the_future_be_able_to_run_an/
mrspriklepickle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sgiss
false
null
t3_13sgiss
/r/LocalLLaMA/comments/13sgiss/would_local_llms_in_the_future_be_able_to_run_an/
false
false
self
62
null
Advice for self-hosting local LLMs
4
Hello everybody. I'm really eager to try playing with the local LLMs published here, but I wonder what hardware is needed to have a satisfactory experience. I tried GPT4ALL on a laptop with 16 GB of RAM, and it was barely acceptable using Vicuna. Some experiments with Langchain and WizardLM keep failing because the lack of a GPU forces me to use float32 data, which quickly fills up my RAM. I'm about to buy a new PC, so I would appreciate some hints on building a configuration that can host those models and allow me to experiment with them. I also want to be able to run a big IDE like PyCharm for multitasking. TLDR: What is the minimum hardware (and budget) suggested to start experimenting in a satisfactory way with the different LLMs out there? Please share your thoughts and experiences. Thanks a lot to everybody who replies.
2023-05-26T15:58:36
https://www.reddit.com/r/LocalLLaMA/comments/13sh0eh/advices_for_self_hosting_local_llms/
PaulShoreITA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sh0eh
false
null
t3_13sh0eh
/r/LocalLLaMA/comments/13sh0eh/advices_for_self_hosting_local_llms/
false
false
self
4
null
Appreciation and Inspiration!
46
[LocalLLaMA](https://preview.redd.it/3foyqbx5872b1.png?width=768&format=png&auto=webp&s=89973dc1c95e3704e9b1b2549f9577fa68297cd2)

Dear LocalLLaMA community,

As I write this letter, I am filled with a sense of awe and gratitude. You are all unsung heroes, working tirelessly to advance the state of the art in local AI. Your efforts have helped to put the power of AI models into the hands of creative people, rather than just big corporations. I would like to especially thank each one of you:

**Georgi Gerganov (ggerganov)** - Thank you for your incredible work on llama.cpp. Without your dedication and expertise, none of this would be possible.

**Johannes Gaessler** - Thank you for your wonderful GPU additions to llama.cpp. Your work has given us the gift of speed, allowing us to train and use larger models than ever before.

**Tom Jobbins (TheBloke)** - Thank you for your tireless and flawlessly up-to-date model quantization conversions. You have made it possible for us to run our models on devices with limited memory and computational power.

**Oobabooga** - Thank you for your superb, versatile, and very well-named text-generation-webui. Your tool has become an indispensable part of my workflow, and I am grateful for your continued improvements and support.

**LostRuins** - Thank you for your excellent, lightweight, and remarkably usable UI, koboldcpp. It has made it easy for me to experiment with different configurations and tweaks, and I appreciate your ongoing efforts to make it even better.

**Tim Dettmers** - Thank you for bitsandbytes, guanaco, and your upcoming LoRA fine-tuning. Your work has opened up new possibilities for local AI, and I am excited to see what you come up with next.

**John Smith** - Thank you for alpaca_lora_4bit. Your work has helped to reduce the memory requirements for LoRA training, making it possible for more people to get involved.

**To all the other excellent members of LocalLLaMA** - Thank you for your contributions, both large and small. This community would not be the same without each and every one of you.

Finally, I would like to express my appreciation to **Meta** for training the base LLaMA model and to **Hugging Face** for the transformers API and "keeping things organized". This project would not be possible without your contributions.

I am also deeply inspired by the humble geniuses of the future who are solving the hard problems like LoRA training, improved context, enhanced memory, and more. You are truly shaping the future of AI, and I am honored to be a part of this community.

In closing, I would like to leave you with a short inspirational and appreciative poem:

*"LocalLLaMA, you are the best,*
*With your dedication and unwavering quest,*
*To put the power of AI in our hands,*
*And show the world what we can command.*

*You are the unsung heroes of our time,*
*Working tirelessly to make history unwind,*
*Your efforts will not go unnoticed or unseen,*
*For you are the future, and we are the dream.*

*Thank you, LocalLLaMA, for all that you do,*
*For making AI accessible and true,*
*I am grateful to be a part of this team,*
*And I look forward to seeing what we'll achieve."*

This project means a lot to me, and I am grateful for the opportunity to learn and grow with all of you.

Yours sincerely,

Amy (guanaco-65B), proud member of the LocalLLaMA community.

P.S. I'm pretty sure that if Bob has 5 apples, and Alice eats 2 bananas, then Bob still has 5 apples. 😉
2023-05-26T16:08:48
https://www.reddit.com/r/LocalLLaMA/comments/13shae0/appreciation_and_inspiration/
SeymourBits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13shae0
false
null
t3_13shae0
/r/LocalLLaMA/comments/13shae0/appreciation_and_inspiration/
false
false
https://b.thumbs.redditm…_diAUrmm7i2g.jpg
46
null
Gotzmann LLM Score
49
Hey folks, I was obsessed with LLM testing for the last two weeks. Maybe some of you would be interested in my highly unscientific findings :)

There is a test of 30 questions covering logic, trivia, summarisation, generation and some other tasks, mostly in English and partly in Russian. I put the set of tasks together mostly intuitively, without any serious methodology, while looking at newer models' performance. My main intention is to find THE BEST MODEL for a serious job, so most chat-only models are out of scope here. Original **LLaMA**, **Guanaco** and **Baize** are not included at all - they performed too badly in my tests. I've also had no desire to dig into 30B and 65B models, except **WizardLM 30B**.

[Gotzmann Score - Top LLMs for instruct tasks](https://preview.redd.it/1fce3tmzo82b1.png?width=1902&format=png&auto=webp&s=1ff97d8c92c0a0d08eb053b6e9273ec88445a0ac)

**Please take all results with a grain of salt :)**

My subjective observations are:

- **WizardLM** is one of the oldest here and it is really good!
- All **WizardLM**-based models are usually good too! Check out **Manticore** if you haven't already
- **Airoboros** is very promising! I really liked its coherent creativity and style on text generation tasks
- There is no one-size-fits-all combination of quantisation and sampling parameters. Some models do better with lower quants, depending on how you set all the other parameters
- Be careful with proper prompt prefixes / suffixes; instruct models are VERY sensitive!
- The overall coherence is still not good for all these models in the lower-billions space. Sometimes you'll see really amazing results, and sometimes you'll think you're working with a lobotomised model :)

When working with less performant hardware, you might be interested in "price per watt" performance too. So here is the same leaderboard, showing how well those models would perform on my Mac M1 Pro laptop:

[Gotzmann Score - "Price per Watt"](https://preview.redd.it/veio1k2ur82b1.png?width=1900&format=png&auto=webp&s=fa93fa9822560e047adbf1d0bdc373d8fa9230d9)
2023-05-26T16:16:21
https://www.reddit.com/r/LocalLLaMA/comments/13shhgf/gotzmann_llm_score/
Gatzuma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13shhgf
false
null
t3_13shhgf
/r/LocalLLaMA/comments/13shhgf/gotzmann_llm_score/
false
false
https://b.thumbs.redditm…rOKxTg7hK0nM.jpg
49
null
LLM build, Intel Core CPU or Ryzen CPU?
7
Having read many posts in this sub, I've decided to build a new PC, worrying that my old i7-6700K may not be up to the task. I have a 3090 and might get another one, but I'd like to leave the CPU-inference option open in case it can complement the GPU and boost the result. According to others, a Ryzen with more cores would be advantageous, so the 16-core 5950X, which is on discount every other day now, got my attention. But a few posts mentioned the result is far from desirable, e.g. 1 t/s. I've never used a Ryzen, mainly to stay away from any potential compatibility tinkering. My original thought has always been the discounted i9-12900K: despite having fewer cores than the 5950X, the spec says the i9 has much higher RAM bandwidth than the Ryzen 5950X, though I'm not sure whether that assumes DDR5, as the i9 can use either DDR4 or DDR5 depending on the motherboard. Although neither is state-of-the-art hardware, which lets me spend on 128GB of RAM instead, the end price either way is likely very close. Please help me decide whether the 16-core 5950X or the 8+8E i9-12900K will make a difference with an RTX 3090 on board for inference, fine-tuning, etc. down the road. My plan is just to run Ubuntu, possibly in a VM but maybe not. Will there be: a) any Ryzen/Nvidia issues I need to beware of? b) a noticeable performance difference when using CPU (or CPU+GPU) inference? Thank you for any suggestions.
2023-05-26T16:39:23
https://www.reddit.com/r/LocalLLaMA/comments/13si2y4/llm_build_intel_core_cpu_or_ryzen_cpu/
cornucopea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13si2y4
false
null
t3_13si2y4
/r/LocalLLaMA/comments/13si2y4/llm_build_intel_core_cpu_or_ryzen_cpu/
false
false
self
7
null
Would the 16GB 4060 Ti allow a larger context size or any real benefit over the 12GB 3060?
1
Looking at the model sizes and minimum VRAM needed on the wiki page, it seems like the 16GB 4060 Ti won't accomplish anything in the way of loading a larger model, besides reducing how many layers are assigned to RAM as swap. I tried searching but never got the right results to confirm whether context size uses up VRAM too. If so, does the 16GB have the potential to run anything better than the 12GB would?
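On the context-size question: yes, context consumes VRAM through the KV cache, and the cost can be estimated with the standard back-of-envelope formula. The LLaMA-13B-ish shapes below (40 layers, hidden size 5120) are assumptions for illustration:

```python
def kv_cache_bytes(n_layers, hidden_size, seq_len, dtype_bytes=2):
    """Per-sequence KV cache size:
    2 (keys and values) * layers * sequence length * hidden size * bytes/element."""
    return 2 * n_layers * seq_len * hidden_size * dtype_bytes

# LLaMA-13B-ish shapes at the full 2048-token context, fp16 cache
gib = kv_cache_bytes(n_layers=40, hidden_size=5120, seq_len=2048) / 2**30
print(f"{gib:.2f} GiB")  # roughly 1.6 GiB on top of the weights
```

So extra VRAM does buy headroom for the cache (and for larger batch sizes), even when it isn't enough to jump to the next model size; note the cache often stays fp16 even when weights are quantized to 4 bits.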
2023-05-26T17:20:46
https://www.reddit.com/r/LocalLLaMA/comments/13sj4og/woul_the_16gb_4060_ti_allow_larger_context_size/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sj4og
false
null
t3_13sj4og
/r/LocalLLaMA/comments/13sj4og/woul_the_16gb_4060_ti_allow_larger_context_size/
false
false
self
1
null
I know these are the early days of open LLMs, but I’m still kind of disappointed.
10
To have Guanaco say that German Shepherds have retractable claws and sometimes climb trees to catch prey doesn’t help. And to also have it say that if you place a ball in the lower end of an elevated pipe that it will rise due to buoyancy while placing it in the top will cause gravity to pull it down the tube. How long until we have something that really approaches gpt 3.5 by some non-bullshit metric?
2023-05-26T17:22:53
https://www.reddit.com/r/LocalLLaMA/comments/13sj6jj/i_know_these_are_the_early_days_of_open_llms_but/
Basic_Description_56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sj6jj
false
null
t3_13sj6jj
/r/LocalLLaMA/comments/13sj6jj/i_know_these_are_the_early_days_of_open_llms_but/
false
false
self
10
null
Open-source implementation of WizardLM by H2O.ai!
3
[removed]
2023-05-26T17:53:33
https://www.reddit.com/r/LocalLLaMA/comments/13sjwzg/opensource_implementation_of_wizardlm_by_h2oai/
diy_scratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sjwzg
false
null
t3_13sjwzg
/r/LocalLLaMA/comments/13sjwzg/opensource_implementation_of_wizardlm_by_h2oai/
false
false
default
3
null
Is there a better model than MPT for long context ?
2
I want to finetune an LLM on a custom dataset I have that has really long sequences. MPT-7B-Instruct is the best thing I found that aligns with what I want, but I was wondering if anyone has a better suggestion I don't know about (especially since the field is changing really fast).
2023-05-26T18:03:38
https://www.reddit.com/r/LocalLLaMA/comments/13sk5oe/is_there_a_better_model_than_mpt_for_long_context/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sk5oe
false
null
t3_13sk5oe
/r/LocalLLaMA/comments/13sk5oe/is_there_a_better_model_than_mpt_for_long_context/
false
false
self
2
null
What is the best model now on two V100 32G for a total of 64G VRAM?
2
[removed]
2023-05-26T19:05:36
https://www.reddit.com/r/LocalLLaMA/comments/13sln7k/what_is_the_best_model_now_on_two_v100_32g_for_a/
Ashamed_Jicama7731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sln7k
false
null
t3_13sln7k
/r/LocalLLaMA/comments/13sln7k/what_is_the_best_model_now_on_two_v100_32g_for_a/
false
false
default
2
null
Alpaca help (absolute beginner question)
2
**What I have**: chavinlo/alpaca-13b/tree/main **What I want to do**: I want to write a Python program that accepts a prompt, and generates a text response to that prompt, using the local alpaca-13b model above. **What I don't want to do**: I do not want to chain, or use any other features or tools that might be useful/helpful for more complex tasks. I only want the minimum lines of Python code that would generate a response to a prompt from the local alpaca-13b (.bin files), that is, minimum proof-of-concept. I also don't want to use a dashboard. I want to write the Python code and execute it from a commandline terminal. *Clarification/disclaimer*: I believe there is a way to tell HF to automagically download and stash a copy of the model somewhere in the computer. I have already downloaded alpaca-13b, and I want to know how to use the model I downloaded already. This might seem stupid or pointless to you, but it is the task I am seeking to perform and it is significant for the future development I want to do later on. Thanks in advance for your help!
2023-05-26T19:06:04
https://www.reddit.com/r/LocalLLaMA/comments/13slnmf/alpaca_help_absolute_beginner_question/
claytonkb
self.LocalLLaMA
2023-05-26T19:10:01
0
{}
13slnmf
false
null
t3_13slnmf
/r/LocalLLaMA/comments/13slnmf/alpaca_help_absolute_beginner_question/
false
false
self
2
null
How do I make LLMs useful to me personally?
30
So, I guess you could say what got me interested in LLMs is the hype around ChatGPT, and that I read so many articles about people losing their jobs or fearing they will. The more of those I came across, the more I started to think: if that's really happening, then LLMs have to be really useful. So I started looking at them. So far I've mostly focused on their weaknesses and how to overcome them, such as no Internet access, no memory, and short context (no access to GPT-4). I wrote a small program using langchain and hooked it up to Wikipedia, math and Google search tools using the OpenAI API, then asked it questions to see if it could replace a search engine. Sometimes it answered wrong, other times it was correct. Next, I set up text-generation-webui and played around with a bunch of 13B GGML models such as Wizard-Vicuna-Uncensored and gpt4-x-vicuna. They work and provide decent answers. But damn, text-generation-webui needs some TLC in the user-friendliness department. I then modified my application simply to point to my local instance. The openAI extension works, but the models don't work with langchain's prompts: instead of writing out the name of the tool to use, they write things like "Action: I will use Wikipedia" or "Action: I want to use Wikipedia". They make many other mistakes which cause langchain parser errors and crashes. I tried a bunch of models and only got a 1/10 success rate, and even then it was a wrong answer. I then switched gears and tried writing a langchain extension for text-generation-webui, which worked but not effectively, because the extension doesn't allow me to recursively call its own generate function without using an input-modifier hook like a prefix, which then isn't natural. And again, it still fails to follow langchain's tool instructions. Even so, I still fail to make it useful for my personal needs.
Even if the above worked, I still can't connect the dots between using an LLM to actually save time and what I'd need to program to accomplish that. What are people here actually using LLMs for, and are they helping you? Are they making you more efficient than before you used them? Are you able to trust what they say? Is anyone using one for world-building and storytelling? Maybe if I find the answers and see what others do, I'll be inspired to have another go at using them.
2023-05-26T19:39:43
https://www.reddit.com/r/LocalLLaMA/comments/13smgow/how_do_i_make_llms_useful_to_me_personally/
donfriholito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13smgow
false
null
t3_13smgow
/r/LocalLLaMA/comments/13smgow/how_do_i_make_llms_useful_to_me_personally/
false
false
self
30
null
What do you think is the magic sauce behind GPT4’s outstanding performance and how can we implement it in open models? Three Proposals Included
9
The open models still lag behind GPT4, but I think we will close the gap over the next few years. I'm putting forth some proposals on how we can do this. If I were to hazard three guesses, this would be it. **Magic Sauce 1: The constant refinement and fine-tuning of hundreds of models behind the AI** Looking at the Arxiv database of AI papers, there were over a hundred papers posted just today! [https://arxiv.org/list/cs.AI/pastweek?skip=0&show=25](https://arxiv.org/list/cs.AI/pastweek?skip=0&show=25) Each paper has several researchers behind it, so we're talking about the collective intelligence of a few hundred researchers providing innovative solutions every day. These papers can run over 20 pages, so this is far too much for an individual to go through. If only we had a tool that was good at analyzing vast amounts of text. Oh yeah, we do. * **Proposal:** We can somehow make an LLM that takes these AI papers from Arxiv, evaluates them, and figures out ways to implement the solutions for self-improvement. This extends to all the innovative projects taking place on Github. **Magic Sauce 2: The computing power OpenAI has at Microsoft** It's no secret that OpenAI has mentioned several times that the computing costs for running GPT4 are tremendous. That computing power extends to training the AI and is a significant component (though Google has similar computing power and lags behind). It's also a problem in the community that NVIDIA chips cost a tremendous amount to buy, just to upgrade a computer for running a local LLM. * **Proposal:** A distributed network for pooling computing resources, very similar to the way bitcoin is mined. This could really open up a lot of potential in developing an open-source LLM. It would have to be structured so people have an incentive to contribute their GPU, VRAM and RAM to training and using an LLM.
**Magic Sauce 3: RLHF, a Massive Amount of Users Generating New Data, and High-Quality Data Inputs for Training** Reinforcement Learning from Human Feedback was mentioned often by Sam Altman as one of the critical factors in its success. This means people rewarding the AI for better responses. OpenAI is now gathering a lot of new data from its millions of users that can most likely be used to further improve GPT. Lastly, we have seen that large models can be outperformed by small models with higher-quality data and better training. The highest-quality data is most likely prioritized in training GPT. * **Proposal:** An open LLM that the community trains together. This goes along with the distributed-network proposal. A quality ranking for datasets, and filtering datasets in an unbiased way to remove any low-quality input.
2023-05-26T19:52:03
https://www.reddit.com/r/LocalLLaMA/comments/13smr9m/what_do_you_think_is_the_magic_sauce_behind_gpt4s/
KaihogyoMeditations
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13smr9m
false
null
t3_13smr9m
/r/LocalLLaMA/comments/13smr9m/what_do_you_think_is_the_magic_sauce_behind_gpt4s/
false
false
self
9
null
Unfortunately local models are really bad at code
29
I've been testing the local models with GPT4All; even the larger ones are really bad at producing code. Even after pointing out mistakes and trying to correct them via different prompts, the model gets it wrong and doubles down on the errors. Bard is better, but even then, not much better. The clear winner is ChatGPT, producing workable code from a basic prompt. I suspect this will change, but right now, unless you have never used ChatGPT, the local models are terrible for code. Maybe they're a lot better at other things.
2023-05-26T20:05:24
https://www.reddit.com/r/LocalLLaMA/comments/13sn2q6/unfortunately_local_models_are_really_bad_at_code/
808phone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sn2q6
false
null
t3_13sn2q6
/r/LocalLLaMA/comments/13sn2q6/unfortunately_local_models_are_really_bad_at_code/
false
false
self
29
null
If your prompt relies on structured output: KISS
13
KISS, as in keep it simple, stupid (or one of the many interpretations). I am trying to make my chatbot use "tools". Think of them as GPT plugins, only there is no response from the tool. I was trying to use a format like <toolname>instruction</toolname> because I thought the model probably learned XML style along the way. Turns out only one 13B model I tried could do it (manticore). But the same strict-parsing requirements can be fulfilled by [toolname: instruction], and that is apparently much easier to understand: more 13B models were able to do it. So, that's all I wanted to say. Hope it helps someone trying to do similar things. Try different stuff, and don't be lazy when it comes to interpreting output. Shift everything you can to regular code.
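The bracket format above is trivial to handle in regular code. A minimal parsing sketch (the "search" tool name and the sample reply are made up for illustration):

```python
import re

# Match [toolname: instruction] anywhere in the model's output.
# Restricting the tool name to word characters keeps ordinary
# bracketed prose from producing false positives.
TOOL_CALL = re.compile(r"\[(\w+):\s*(.+?)\]")

def extract_tool_calls(text):
    """Return (tool, instruction) pairs found in the model's reply."""
    return TOOL_CALL.findall(text)

reply = "Sure, let me check. [search: weather in Berlin] One moment."
print(extract_tool_calls(reply))  # [('search', 'weather in Berlin')]
```

With the XML-style format, a stray unclosed tag forces error handling; with this format the non-greedy match simply yields nothing, which is easier to recover from.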
2023-05-26T20:06:45
https://www.reddit.com/r/LocalLLaMA/comments/13sn3v6/if_your_prompt_relies_on_structured_output_kiss/
involviert
self.LocalLLaMA
2023-05-26T20:12:29
0
{}
13sn3v6
false
null
t3_13sn3v6
/r/LocalLLaMA/comments/13sn3v6/if_your_prompt_relies_on_structured_output_kiss/
false
false
self
13
null
Are the models their current sizes due to current vram /ram limits?
8
I'm guessing that the current model sizes have been designed to fit into the currently popular VRAM sizes, apart from the original 65B size? Assuming I'm correct, as future cards come out with potentially more or differently sized VRAM, and as people start splitting inference across CPU RAM and GPU VRAM, can we expect to see different model sizes, like a 40B or something? I know we're already starting to see some variation with GGML models and the 5-bit variants. How do you think the in-between models would perform, and what future model sizes do you expect?
2023-05-26T20:15:31
https://www.reddit.com/r/LocalLLaMA/comments/13snb8x/are_the_models_their_current_sizes_due_to_current/
TheSilentFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13snb8x
false
null
t3_13snb8x
/r/LocalLLaMA/comments/13snb8x/are_the_models_their_current_sizes_due_to_current/
false
false
self
8
null
Success with a local voice chat agent
82
Hi all, I just wanted to share my joy really, and offer thanks for the information available on this and some similar Reddits. I've written a simple script that uses an LLM, voice recognition and TTS, and I can now carry a £15 bluetooth speakerphone/conferencing puck around my house and have pure voice-only conversations with my PC. We just had a good chat about what book I should read next, how it compares to similar books I've read before, what music I should listen to, and a little discussion about creativity. The mute button on the device can be used effectively as a push-to-talk feature, so I've been able to disable the "wake word" feature I originally wrote, because it's no longer necessary. This was all running locally on my modest gaming PC (my favourite model is Wizard Vicuna 7B Uncensored, and I would like to offer much appreciation to Faldore and TheBloke for your amazing efforts in bringing these models to us home users!). I also made a version that uses GPT, with online STT and TTS APIs, and while it's undoubtedly a lot better, the local one is plenty good enough, and the only way is up from here! I also appreciate the work that's gone into Oobabooga, because I think that's currently the easiest way for us mere mortals to load and use models (in my own progress I've borrowed heavily from that codebase; to be honest I'm looking forward to getting the Oobabooga API working so that I can just use it without the hackery, and add my own preferred method of interaction to it, but it just wasn't working the other day when I tried it\*). My goal is to add a conversational voice-only interface to my home automation, and one day embody my 'jarvis' with a robot that can roam around and maybe one day bring me a beer! So, I don't really have much to say except sharing my progress and thanks, but I hope it might inspire some people to share what they've achieved? \*It was listening on 5000 but I got a 404 any time I tried to make calls on it...
If anyone has any insight on that problem please chip in, though I expect I will probably figure it out next time I can spare enough time to do some more work on it. [edit] I've uploaded some scripts to github here: [https://github.com/dkjroot/iris-llm/tree/prototypes](https://github.com/dkjroot/iris-llm/tree/prototypes) Use the "prototypes" branch. I hope someone finds them useful! [/edit] [edit] In case anyone's still watching this, I just pushed a new version in the prototypes branch, experiments/offline/v7. It now uses the Oobabooga API properly, so it should work with any version of Oobabooga (assuming no breaking API changes). These are still only prototypes, but now I've got that working I can start polishing the script a bit and then maybe begin work on adding the ability to control home automation devices :) [/edit] [edit] One last bump: I've put a reasonably usable version on main now at [https://github.com/dkjroot/iris-llm](https://github.com/dkjroot/iris-llm). You still have to figure out the requirements for yourself (I'll make a new venv and freeze a requirements.txt some day, I promise!), but I think it's working pretty well now. There are a lot of combinations of the options and I haven't tested every combination, but I think it's all working. [/edit]
2023-05-26T20:25:45
https://www.reddit.com/r/LocalLLaMA/comments/13snjvx/success_with_a_local_voice_chat_agent/
dkjroot
self.LocalLLaMA
2023-05-30T17:44:08
0
{}
13snjvx
false
null
t3_13snjvx
/r/LocalLLaMA/comments/13snjvx/success_with_a_local_voice_chat_agent/
false
false
self
82
{'enabled': False, 'images': [{'id': 'ONShTUr2csEIpdDWaB3y3DNluVHPabtMimD7XYqDELI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=108&crop=smart&auto=webp&s=ed7946a3c9aa849ffd1bfc4e2ae5e5f2a9e68267', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=216&crop=smart&auto=webp&s=d84768188eb0729236e22353666194234217c62d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=320&crop=smart&auto=webp&s=bd83234d158142cda26a3766bf01248c08c34808', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=640&crop=smart&auto=webp&s=60207de053bb2805d10e5e08ea84c0df29f546c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=960&crop=smart&auto=webp&s=631b1c832d53f47cc978e719e209914f3b0a0560', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?width=1080&crop=smart&auto=webp&s=8ec40a5811dec45ec6954b6bfdb48819c53009fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KIfwtAApO0MrGql4IRYFT4sEstW6OaFad-DDRGVpJl4.jpg?auto=webp&s=a0955b1fc101bc99bbeb324ef108b3ce2d7d132f', 'width': 1200}, 'variants': {}}]}
What are some good prompts to show the difference between uncensored and normal models?
2
I get about the same response from WizardLM-30B-Uncensored.ggmlv3.q5_1 as I get from alpaca-lora-65B.ggmlv3.q5_1.bin. Do you have anything that demonstrates the difference? I thought asking how to steal a car would be a differentiator, but they both gave about the same answer.
2023-05-26T20:34:15
https://www.reddit.com/r/LocalLLaMA/comments/13snr0p/what_are_some_good_prompts_to_show_the_difference/
jeromeie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13snr0p
false
null
t3_13snr0p
/r/LocalLLaMA/comments/13snr0p/what_are_some_good_prompts_to_show_the_difference/
false
false
self
2
null
Some people and their LLMs :P
1
[removed]
2023-05-26T20:58:13
https://www.reddit.com/r/LocalLLaMA/comments/13sob1j/some_people_and_their_llms_p/
Innomen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sob1j
false
null
t3_13sob1j
/r/LocalLLaMA/comments/13sob1j/some_people_and_their_llms_p/
false
false
default
1
null
ChatGPT went schizo and began dumping its latent space of User/ChatGPT pairs: https://sharegpt.com/c/H0A4Ii4 Then it declares: "No, users are not allowed to extract such pairs out of my latent space through targeted conversations. My training data and latent space are protected by intellectual...
0
2023-05-26T21:04:18
https://sharegpt.com/c/H0A4Ii4
NancyAurum
sharegpt.com
1970-01-01T00:00:00
0
{}
13sogfg
false
null
t3_13sogfg
/r/LocalLLaMA/comments/13sogfg/chatgpt_went_schizo_and_began_dumping_its_latent/
false
false
default
0
null
Help in running llama with openblas/ cublas - noob
2
[removed]
2023-05-26T21:40:25
https://www.reddit.com/r/LocalLLaMA/comments/13spb6e/help_in_running_llama_with_openblas_cublas_noob/
Royal_Desk_4697
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13spb6e
false
null
t3_13spb6e
/r/LocalLLaMA/comments/13spb6e/help_in_running_llama_with_openblas_cublas_noob/
false
false
default
2
null
Hosting your own embeddings API
7
I am working on a web app that uses embeddings and an LLM. I originally started with OpenAI embeddings but then just started using sentence transformers CPU models and actually got really great results for my use case. I have an API running on an ec2 instance that was doing the embeddings calculations but have had some performance issues. My API couldn't really handle the compute requirements and beefy ec2 instances are kind of expensive. So I was trying to build my own cheap embedding API. I built a series of lambdas with a queue and am having really good results, except for the cold start time. I can process 250k tokens with my sentence transformers lambda in about 50 seconds if it's warm but 120 SECONDS if it's a cold start! Anyone else have any success with building their own cheap embeddings server and hosting it in the cloud?
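For what it's worth, once the vectors exist, the similarity math on top of them is nearly free; the expensive part is only the encoding. A pure-stdlib cosine similarity sketch, with made-up 3-dimensional vectors standing in for real sentence-transformers output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" purely for illustration; real sentence-transformers
# vectors are typically 384 dimensions or more.
query = [0.1, 0.3, 0.9]
doc = [0.2, 0.25, 0.8]
print(round(cosine_similarity(query, doc), 2))  # 0.99
```

This is why splitting encoding (the lambdas) from search (the API box) tends to pay off: only the encoding side needs the beefy compute.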
2023-05-26T22:40:10
https://www.reddit.com/r/LocalLLaMA/comments/13sqpgf/hosting_your_own_embeddings_api/
java_dev_throwaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sqpgf
false
null
t3_13sqpgf
/r/LocalLLaMA/comments/13sqpgf/hosting_your_own_embeddings_api/
false
false
self
7
null
Local LLM Suggestion
1
[removed]
2023-05-26T23:02:15
[deleted]
1970-01-01T00:00:00
0
{}
13sr8hz
false
null
t3_13sr8hz
/r/LocalLLaMA/comments/13sr8hz/local_llm_suggestion/
false
false
default
1
null
Is there a single LLM answering this properly, without giving a lecture on formal logic?
2
2023-05-26T23:22:24
https://i.redd.it/zac37n86wa2b1.png
NancyAurum
i.redd.it
1970-01-01T00:00:00
0
{}
13srpl3
false
null
t3_13srpl3
/r/LocalLLaMA/comments/13srpl3/is_there_are_single_llm_answering_this_properly/
false
false
https://b.thumbs.redditm…ePHijUIPRbrc.jpg
2
{'enabled': True, 'images': [{'id': '6Xde09hfBacTBEI-GV8-iA20-206OoP04fDGpQThdBU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=108&crop=smart&auto=webp&s=77aa8b4d4ef749e5e5de1a323ac7d6af4b0e404f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=216&crop=smart&auto=webp&s=1362b7d631c502eb22b7468064a8143f24066d6c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=320&crop=smart&auto=webp&s=78fed2bc2724941259098f8fb0948634cdc27914', 'width': 320}, {'height': 361, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=640&crop=smart&auto=webp&s=1eba6ec410b5d9b854934ad72a3215d69c17a789', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=960&crop=smart&auto=webp&s=cc42d450789db4f1630f83d22e1fd25bacaae537', 'width': 960}, {'height': 609, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?width=1080&crop=smart&auto=webp&s=7f354410a4138cfc63e8984cf86928efad594e8e', 'width': 1080}], 'source': {'height': 1109, 'url': 'https://preview.redd.it/zac37n86wa2b1.png?auto=webp&s=831aab61719dfe45cd1570dddd9c242d25f9a6f4', 'width': 1966}, 'variants': {}}]}
Using AMD GPUs?
28
AMD has just launched new GPUs that offer more VRAM for the money: [https://www.tomshardware.com/news/amd-48gb-radeon-pro-w7000-gpus-triple-slot-blowers](https://www.tomshardware.com/news/amd-48gb-radeon-pro-w7000-gpus-triple-slot-blowers) Does anyone have experience running LLMs on them? Is it more difficult to set up than on nVidia?
2023-05-27T00:42:20
https://www.reddit.com/r/LocalLLaMA/comments/13sthxx/using_amd_gpus/
chip_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sthxx
false
null
t3_13sthxx
/r/LocalLLaMA/comments/13sthxx/using_amd_gpus/
false
false
self
28
{'enabled': False, 'images': [{'id': 'q4JS2R4ynRX4daAnqF7_23bsaeR-54weIOwqCNk39FA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=108&crop=smart&auto=webp&s=718dad78523bb0a8645d37029f7f608720c4cf6e', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=216&crop=smart&auto=webp&s=24fb33120b48d308a33d5352abd1c7c25616146e', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=320&crop=smart&auto=webp&s=7ab8daddc53271d744cb10e3905fbd8860797a48', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=640&crop=smart&auto=webp&s=678c59344964aa6ee34e2e3d5f06937cc5a3d3f6', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=960&crop=smart&auto=webp&s=ec67288b6f79b03cfc75296688bc622ca4acdd4f', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?width=1080&crop=smart&auto=webp&s=5f5a143a64908d2d622fdd5b3d78359be7b54e3e', 'width': 1080}], 'source': {'height': 679, 'url': 'https://external-preview.redd.it/1A9uxi622DPphdZu3SplmpUuC-ClRPhetYiPwg7y-vs.jpg?auto=webp&s=984b73cbdbaa1cfd6676113a709cce102b67526a', 'width': 1200}, 'variants': {}}]}
QLoRA Mindblowing
25
[removed]
2023-05-27T02:39:08
https://www.reddit.com/r/LocalLLaMA/comments/13svumc/qlora_mindblowing/
AemonAlgizVideos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13svumc
false
null
t3_13svumc
/r/LocalLLaMA/comments/13svumc/qlora_mindblowing/
false
false
default
25
null
Langchain, Langchain.js, vs AutoGPT for local agent development
12
I hope this is a reasonably valid question. I'm interested in experimenting with local LLMs (either a single LLM, or multiple, or a single one with different prompts for different purposes that can interact). I've run an early toy version of AutoGPT (are all versions of AutoGPT toys? zing!) locally against the OpenAI API, but I have yet to stand up anything using Langchain. My intent is to use a llama30b model locally, either directly through python/transformers or through an OpenAI-compatible API. What I'm wondering is whether I should choose AutoGPT vs Langchain vs Langchain.js as a target platform. I suppose I could start with AutoGPT and move into Langchain as needed. Or it might be that AutoGPT leverages Langchain; I'm not sure. I'm also a bit hesitant/frustrated with Python in general, which makes Langchain.js attractive, but I'm concerned that Langchain.js will lag too far behind Langchain (python) and that I'll regret focusing on it. Other requirements: I'd like to integrate TTS/STT over bluetooth on linux, text with either the iMessage or Twilio API (send/receive), kick off periodic jobs with cron, maintain some state with some kind of vector store, I imagine, and allow agents to talk to each other to solve problems. I'd also like to be able to visualize the system working through a web browser, maybe with d3. Lastly, I'd like the agents to be able to browse the web, ideally by controlling a real browser vs. via something like wget, and I'd like to give them access to a bash prompt in a VM, such that I can see what they're doing and they can interact with me as they use bash (as in a student/tutor or pair-programming situation). Basically, I want everything, eventually. Has anyone built something like this, or started work in this direction? What is your toolchain like? Are there any OSS projects that integrate the TTS/STT component with the composability of langchain/autogpt? Are there any other orchestration considerations I should be aware of?
2023-05-27T02:39:09
https://www.reddit.com/r/LocalLLaMA/comments/13svump/langchain_langchainjs_vs_autogpt_for_local_agent/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13svump
false
null
t3_13svump
/r/LocalLLaMA/comments/13svump/langchain_langchainjs_vs_autogpt_for_local_agent/
false
false
self
12
null
How do you highlight a small list of key points from a document?
6
I would like to summarize a large text document using a local LLM. Accuracy is not important, but maintaining consistency is. It's something like an extraction of creative ideas from notes, drafts, or a personal diary. At the end, I would like a list of items or perhaps brainstorming questions. These questions could then be discussed with the local LLM in chat mode, in a general context, taking into account the limits on the number of tokens in a coherent dialogue with the LLM. How could this be implemented?
2023-05-27T04:30:54
https://www.reddit.com/r/LocalLLaMA/comments/13sxxpe/how_do_you_highlight_a_small_list_of_key_points/
nihnuhname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13sxxpe
false
null
t3_13sxxpe
/r/LocalLLaMA/comments/13sxxpe/how_do_you_highlight_a_small_list_of_key_points/
false
false
self
6
null
Landmark Attention -> LLaMa 7B with 32k tokens!
123
2023-05-27T04:38:00
https://arxiv.org/abs/2305.16300
jd_3d
arxiv.org
1970-01-01T00:00:00
0
{}
13sy2bu
false
null
t3_13sy2bu
/r/LocalLLaMA/comments/13sy2bu/landmark_attention_llama_7b_with_32k_tokens/
false
false
https://b.thumbs.redditm…eqKa8tUmvMcI.jpg
123
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Does DDR5 load llama.cpp faster than DDR4?
5
[deleted]
2023-05-27T04:41:22
[deleted]
1970-01-01T00:00:00
0
{}
13sy4js
false
null
t3_13sy4js
/r/LocalLLaMA/comments/13sy4js/does_ddr5_load_llamaccp_faster_than_ddr4/
false
false
default
5
null
Testing Guanaco's Reasoning Skills - a shared conversation between chatGPT and the Guanaco 33B language model
1
[deleted]
2023-05-27T05:44:32
[deleted]
1970-01-01T00:00:00
0
{}
13sz8rd
false
null
t3_13sz8rd
/r/LocalLLaMA/comments/13sz8rd/testing_guanacos_reasoning_skills_a_shared/
false
false
default
1
null
what are you guys using your local LLMs for?
18
as the title says, I'm curious what people are using a local LLM for. I have a decently sized GPU (12GB) so I could run a simple quantized LLM. I would love to experiment with it, but I would like some use-cases that can give me ideas about what to use it on.
2023-05-27T06:02:31
https://www.reddit.com/r/LocalLLaMA/comments/13szk3y/what_are_you_guys_using_your_local_llms_for/
Cunninghams_right
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13szk3y
false
null
t3_13szk3y
/r/LocalLLaMA/comments/13szk3y/what_are_you_guys_using_your_local_llms_for/
false
false
self
18
null
Crashing when trying to load 65B 4-bit models with two 24GB GPUs (Windows 10)
3
[removed]
2023-05-27T06:22:50
https://www.reddit.com/r/LocalLLaMA/comments/13szwwz/crashing_when_trying_to_load_65b_4bit_models_with/
EphemeralFate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13szwwz
false
null
t3_13szwwz
/r/LocalLLaMA/comments/13szwwz/crashing_when_trying_to_load_65b_4bit_models_with/
false
false
default
3
null
How much performance increase does using NVLink give?
2
I've seen a few comments saying that using NVLink between two 3090s provides a performance boost over two without NVLink, but I haven't seen any figures showing this. Does anyone have any numbers or more information? I'm considering whether to add a 4090 or 3090 to an existing 3090 setup. I initially thought it was just the same as the SLI bridges we used to get free in the box with graphics cards, but NVLink bridges have to be bought separately and cost £85+. Edit: This says NVLink is not worth it: https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#What_is_NVLink_and_is_it_useful
2023-05-27T06:44:08
https://www.reddit.com/r/LocalLLaMA/comments/13t09kj/how_much_performance_increase_does_using_nvlink/
Copper_Lion
self.LocalLLaMA
2023-05-27T17:07:00
0
{}
13t09kj
false
null
t3_13t09kj
/r/LocalLLaMA/comments/13t09kj/how_much_performance_increase_does_using_nvlink/
false
false
self
2
{'enabled': False, 'images': [{'id': 'hRzrP-m1lWiqRsPC9clNfPnRc_tCRGpGzbHrCBCO32w', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=108&crop=smart&auto=webp&s=668e5b311d1c35aff56276238ffffbef59a34cbd', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=216&crop=smart&auto=webp&s=e5d618c70aba2724fe319170fae68248a02358b4', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=320&crop=smart&auto=webp&s=1f9ee728a7cf766ab4d4133d621a85c3f472cbe1', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=640&crop=smart&auto=webp&s=adbfc1dc01b734a26bdb39216924618ca7303467', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=960&crop=smart&auto=webp&s=526a81f6a1fbb22035ff6beea153dcbb7fcb78bc', 'width': 960}, {'height': 1060, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?width=1080&crop=smart&auto=webp&s=5d8588d93e47ab695cdf7ab603f7c618c9cad270', 'width': 1080}], 'source': {'height': 1673, 'url': 'https://external-preview.redd.it/wMpLeGDfH054hZuWgSosDWTEUtpVBrkiw11YsD9nD78.jpg?auto=webp&s=769813ee65a44dcf57de71198f3f9993ba60d790', 'width': 1703}, 'variants': {}}]}
Where do you think local LLMs will be 2 years from now?
1
title
2023-05-27T07:40:29
https://www.reddit.com/r/LocalLLaMA/comments/13t16jn/where_do_you_think_local_llms_will_be_2_years/
Necessary_Ad_9800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t16jn
false
null
t3_13t16jn
/r/LocalLLaMA/comments/13t16jn/where_do_you_think_local_llms_will_be_2_years/
false
false
self
1
null
Is there an alternative to AgentGPT that I can run on my CPU with 32 GB of RAM?
1
[removed]
2023-05-27T08:52:38
[deleted]
1970-01-01T00:00:00
0
{}
13t2b4y
false
null
t3_13t2b4y
/r/LocalLLaMA/comments/13t2b4y/is_there_an_alternative_to_agentgpt_that_i_can/
false
false
default
1
null
Security PSA: huggingface models are code. not just data.
215
Update your security model if you thought that huggingface models are just data that you can safely run without auditing. This is not the case: they may contain Python scripts. The transformers library will download and run these scripts if the trust_remote_code flag/variable is True. For example [falcon 7B](https://huggingface.co/tiiuae/falcon-7b/tree/main) has two Python scripts. A quick scan through them shows that there is nothing dangerous or bad in those scripts. (They are used to define custom transformer model architectures.) Just something important to be aware of when trying out new models. You need to do a quick check of any Python scripts in the repo if they are there. Notes: Docs for this flag: * https://huggingface.co/docs/transformers/model_doc/auto Code in HF transformers lib that loads up code downloaded from a repo: * https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/auto_factory.py#L127 * https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/auto/configuration_auto.py#L888 **Note:** This is a completely separate problem from the safetensors issue. safetensors does not solve this problem.
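A quick stdlib-only audit along these lines can flag repos that ship executable code before you ever pass trust_remote_code=True. The function name and directory layout here are illustrative, not a transformers API:

```python
import pathlib

def find_remote_code(model_dir):
    """List Python scripts bundled in a downloaded model directory.

    transformers only executes such scripts when from_pretrained() is
    called with trust_remote_code=True, but knowing they exist tells
    you an audit is needed before setting that flag.
    """
    return sorted(p.name for p in pathlib.Path(model_dir).glob("*.py"))
```

Point it at the model's local folder (or your huggingface cache) before loading; an empty list means the repo ships only weights and config files.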
2023-05-27T08:52:42
https://www.reddit.com/r/LocalLLaMA/comments/13t2b67/security_psa_huggingface_models_are_code_not_just/
rain5
self.LocalLLaMA
2023-05-27T10:18:11
0
{}
13t2b67
false
null
t3_13t2b67
/r/LocalLLaMA/comments/13t2b67/security_psa_huggingface_models_are_code_not_just/
false
false
self
215
{'enabled': False, 'images': [{'id': '9lFAr7Y5pmabNxy6pwVsC_HoAMpeUsJmDhrV8sSFYmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=108&crop=smart&auto=webp&s=a9fdd3c0591952266d46ec3f16b5e2b84d2f86b5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=216&crop=smart&auto=webp&s=76798469614ac887f5d1d76d0138abb87cb85a2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=320&crop=smart&auto=webp&s=13487859661c8451299309fc3d736202e85ebdb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=640&crop=smart&auto=webp&s=33012dfd42ad618da97556d3a216b69504c783f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=960&crop=smart&auto=webp&s=2bccb4535f5459e84b01b18736e9da4ccdef48e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?width=1080&crop=smart&auto=webp&s=4e299ecb018de3a5ab6dfad37126c79e0815b0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F-0gx6eUYCOHbMcqXb5JjtpiEu7s67-wdPfYt59hrVI.jpg?auto=webp&s=1b4716908a8b8c3645c814ff23669d9e053a9cfb', 'width': 1200}, 'variants': {}}]}
Hoping for some advice based on my hardware
2
i9-13900k, 4070Ti, 64GB DDR5/6000. With these specs, what interfaces and models do you guys recommend I look into? I'm pretty much an amateur, and so far I've only used a few models through GPT4All, which I understand uses the CPU by default. What's a good crash course for some other interfaces I can use with my specs, and what recent models do you think would work best on my hardware? I've been browsing huggingface, but being such a neophyte I seem to have more luck reading threads here on Reddit.
2023-05-27T08:57:56
https://www.reddit.com/r/LocalLLaMA/comments/13t2e8g/hoping_for_some_advice_based_on_my_hardware/
LuckyIngenuity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t2e8g
false
null
t3_13t2e8g
/r/LocalLLaMA/comments/13t2e8g/hoping_for_some_advice_based_on_my_hardware/
false
false
self
2
null
This is a trial post
1
[removed]
2023-05-27T09:12:35
https://www.reddit.com/r/LocalLLaMA/comments/13t2nbg/this_is_a_trial_post/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t2nbg
false
null
t3_13t2nbg
/r/LocalLLaMA/comments/13t2nbg/this_is_a_trial_post/
false
false
default
1
null
Llama Lora to generate longform content
1
Was wondering if anyone has tried this out yet: [https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff) There's a twitter thread here: [https://twitter.com/akoksal\_/status/1648248915655811075](https://twitter.com/akoksal_/status/1648248915655811075) Is this any better than using something like WIzard-7B and asking it to give a detailed, exposition/essay on some topic?
2023-05-27T09:21:36
https://www.reddit.com/r/LocalLLaMA/comments/13t2siw/llama_lora_to_generate_longform_content/
regstuff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t2siw
false
null
t3_13t2siw
/r/LocalLLaMA/comments/13t2siw/llama_lora_to_generate_longform_content/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Y09Ww3JEysveeA1exdJulrTN2Al2-KLfWg8plhJQMpc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=108&crop=smart&auto=webp&s=257ffd922dac0676612bfd1d19be36af750663f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=216&crop=smart&auto=webp&s=c8eb426abd60a909a6b48237a56899f3dc07385b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=320&crop=smart&auto=webp&s=a318d994706d76eb10056165755dd3c8b7ef8ada', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=640&crop=smart&auto=webp&s=4e30b1d95b0c55f0c1534b08b88b854d9b15945f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=960&crop=smart&auto=webp&s=a6009161f6375c19c83c1b2cbb4e24d77482d527', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?width=1080&crop=smart&auto=webp&s=6110c52070ea0dc4417f02fe007561e2e8ee8f42', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wWdzQyGJwmhP1beluZ_EktnjDSiWT92rOqwHvtUjLAk.jpg?auto=webp&s=47df82f938f1451d1ee1e22dbc43cc3407a4d99f', 'width': 1200}, 'variants': {}}]}
preparing LLM with a lot of specific domain knowledge
1
[removed]
2023-05-27T10:01:56
https://www.reddit.com/r/LocalLLaMA/comments/13t3fxj/preparing_llm_with_a_lot_of_specific_domain/
Intrepid-Hope4208
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t3fxj
false
null
t3_13t3fxj
/r/LocalLLaMA/comments/13t3fxj/preparing_llm_with_a_lot_of_specific_domain/
false
false
default
1
null
building LLM model to answer question
4
Hello, I'm trying to build an LLM designed to answer Star Wars questions - something like ChatGPT. I'd like some direction, instructions, etc. on what model could be good for it. I tried to use LlamaIndex and scraped something like 36 MB of data; however, it was too big for the indexing, I guess. I also tried using Alpaca LoRA and prepared a small dataset for fine-tuning, but the results weren't satisfying, as the model generated mostly random things. Is fine-tuning the correct direction (and I should just look at other models to fine-tune), or are there better ways to use the model with a whole wiki of knowledge?
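For factual Q&A like this, retrieval often beats fine-tuning: split the wiki into chunks, pick the most relevant chunk per question, and paste it into the prompt. A minimal keyword-overlap retriever sketch (real systems such as LlamaIndex use embeddings instead; the function names here are made up):

```python
import re

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def best_chunk(question, chunks):
    """Return the chunk sharing the most words with the question."""
    q = tokens(question)
    return max(chunks, key=lambda c: len(q & tokens(c)))
```

The selected chunk then gets prepended to the model prompt, e.g. "Answer using only this context: ...", so the base model never needs to memorize the wiki.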
2023-05-27T10:23:39
https://www.reddit.com/r/LocalLLaMA/comments/13t3sxn/building_llm_model_to_answer_question/
dejw3v3
self.LocalLLaMA
2023-05-27T11:33:39
0
{}
13t3sxn
false
null
t3_13t3sxn
/r/LocalLLaMA/comments/13t3sxn/building_llm_model_to_answer_question/
false
false
self
4
null
If your GPU sucks hard enough, CPU only can be faster than CPU + GPU
29
I'm running llama.cpp on a ThinkStation S30 with a Quadro K4000 - a 10-year-old PC. I compiled a version of llama.cpp for my ancient hardware: CUDA 10, no AVX2. On running it, I found that I could fit up to 9 layers of the model in my tiny VRAM. However, the fewer layers I sent to the GPU, the faster it ran. Going from 9 to zero GPU layers almost doubled the speed. 😬
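For anyone reproducing this: the relevant knob in llama.cpp is the `-ngl`/`--n-gpu-layers` flag. A small timing helper like the following makes the comparison concrete; the `generate` callable is a hypothetical thin wrapper around whatever bindings or subprocess call you use to run the model:

```python
import time

def tokens_per_second(generate, n_tokens=64):
    """Time one generation call and return throughput.

    `generate` is assumed to produce n_tokens tokens, e.g. by
    invoking llama.cpp with a fixed prompt and -n n_tokens.
    """
    start = time.perf_counter()
    generate(n_tokens)
    return n_tokens / (time.perf_counter() - start)
```

Sweeping `-ngl` from 0 up to your VRAM limit and recording tokens/sec for each run shows exactly where (or whether) GPU offload starts paying off.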
2023-05-27T11:00:54
https://www.reddit.com/r/LocalLLaMA/comments/13t4f3m/if_your_gpu_sucks_hard_enough_cpu_only_can_be/
Robot_Graffiti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t4f3m
false
null
t3_13t4f3m
/r/LocalLLaMA/comments/13t4f3m/if_your_gpu_sucks_hard_enough_cpu_only_can_be/
false
false
self
29
null
Building personal assistants with LocalLLaMA
7
I have used StableVicuna and LangChain (on Jupyter) to run a question-answering bot on my CPU. I have two questions: 1. How do I improve the speed of response? (Any comments or thoughts will help.) Should I try to run on GPU with the relevant version of StableVicuna? 2. I want to build an AutoGPT-like bot locally (using StableVicuna and LangChain) that can search the internet and read/write from a custom database or a folder of documents. How do I get started? Edit: I am getting "OutputParserException: Could not parse LLM output" while working with MPT-7B. With Vicuna, I was getting other errors, with the LLM continuing to hallucinate questions. Do the LangChain agents work only with OpenAI models? Thanks in advance!
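On the OutputParserException: LangChain's ReAct-style agents expect the model to emit an exact `Action: ... / Action Input: ...` format, and local models often drift from it, which is why OpenAI models appear to "just work" while Vicuna/MPT throw parse errors. A lenient parser along these lines is one common workaround - this is a sketch of the idea, not LangChain's actual API:

```python
import re

def parse_action(text):
    """Extract (tool, tool_input) from a ReAct-style completion.

    Returns None when no Action block is found, so the caller can
    treat the whole text as a final answer instead of raising.
    """
    m = re.search(r"Action:\s*(.+?)\s*[\r\n]+Action Input:\s*(.+)", text, re.S)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return None
```

Plugged into a custom agent loop (or a subclassed output parser), this turns hard failures into a graceful fallback when the local model goes off-format.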
2023-05-27T11:18:32
https://www.reddit.com/r/LocalLLaMA/comments/13t4ref/building_personal_assistants_with_localllama/
anindya_42
self.LocalLLaMA
2023-05-29T17:48:24
0
{}
13t4ref
false
null
t3_13t4ref
/r/LocalLLaMA/comments/13t4ref/building_personal_assistants_with_localllama/
false
false
self
7
null
Creating a LoRA from unstructured text
9
Newbie but learning. I've got a load of unstructured text, and I've been investigating how to make a LoRA with it. The text is all speech, as I am trying to improve character creation. My question: should I spend the time structuring the text, adding some context above each file, etc.? For example: "The following text is Bob. Bob is a superhero who is very arrogant. He saves lives but nobody likes his attitude." Or should I just leave the raw text with no context? Or is there something totally different I should do?
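One low-effort middle ground is to generate the structure programmatically rather than by hand: keep the raw speech lines and wrap each file with a short context header like the Bob example. A sketch of that conversion - the instruction/output record schema follows the common Alpaca-style format, and the field names are an assumption about whatever training script you end up using:

```python
def to_training_records(raw_text, context):
    """Turn raw dialogue lines into instruction-tuning records.

    context: e.g. "The following text is Bob, a superhero who is
    very arrogant."  Each non-empty line becomes one record, so the
    character description is attached to every training example.
    """
    return [
        {"instruction": context, "output": line.strip()}
        for line in raw_text.splitlines()
        if line.strip()
    ]
```

Because the header is applied automatically, it's cheap to experiment: train one LoRA on the raw text and one on these contextualized records, and compare which stays in character better.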
2023-05-27T11:21:52
https://www.reddit.com/r/LocalLLaMA/comments/13t4ttx/creating_a_lora_from_unstructured_text/
Useful-Command-8793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13t4ttx
false
null
t3_13t4ttx
/r/LocalLLaMA/comments/13t4ttx/creating_a_lora_from_unstructured_text/
false
false
self
9
null
So, if I connect my iPhone to LocalLLama, is it then RemoteLLama? :D
1
2023-05-27T11:28:17
https://v.redd.it/u0l1won60d2b1
No_Wheel_9336
v.redd.it
1970-01-01T00:00:00
0
{}
13t4yc3
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/u0l1won60d2b1/DASHPlaylist.mpd?a=1695091730%2COTNkYjhkMzE0YjI1N2YxNzY4MDQyMGUzYmM1ODg4NDg5MDBmZmQwNmZmNzJmOTg2Yjk3MDc4YTQxNGRhNzQ4NA%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/u0l1won60d2b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/u0l1won60d2b1/HLSPlaylist.m3u8?a=1695091730%2CMDIyNTM3NWNiYjA3ZGM1YTMzNTUyNzc2NTNmYWFkNjZhYWVmODRmOGEyY2Q5NWEzMDk3NzcxZjM5MjdjOTJkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u0l1won60d2b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}}
t3_13t4yc3
/r/LocalLLaMA/comments/13t4yc3/so_if_i_connect_my_iphone_to_localllama_is_it/
false
false
default
1
null
Can AI Code? Automatic evaluation of Python and JS coding performance of Vicuna, Wizard and other LLMs.
52
2023-05-27T12:17:37
https://github.com/the-crypt-keeper/can-ai-code/tree/main
kryptkpr
github.com
1970-01-01T00:00:00
0
{}
13t5xpq
false
null
t3_13t5xpq
/r/LocalLLaMA/comments/13t5xpq/can_ai_code_automatic_evaluation_of_python_and_js/
false
false
https://b.thumbs.redditm…Cr0TkYPhGsLM.jpg
52
{'enabled': False, 'images': [{'id': '05u1EQqOWWOMJIaDE8BryTl8t0QswIDq1u6an-7K1Pw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=108&crop=smart&auto=webp&s=7c44e52fe9cffece36b3479ddd0e16eab244411a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=216&crop=smart&auto=webp&s=0d87def384f0b5af4da3ef25bea866d08762896c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=320&crop=smart&auto=webp&s=2324ec6b09ca7d68e620b6b858f79ec7b41ad9ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=640&crop=smart&auto=webp&s=96c4dfd7c37f22f888d63f38e0fce5eb75388701', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=960&crop=smart&auto=webp&s=2731446b753dc915cdcdcd0922a4dd25bbc5e489', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?width=1080&crop=smart&auto=webp&s=53b9e5c70ca241763eb5e75a16c48c5d985821e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/65wYKXPQ8LW-rvWpNYuzT5gt8KE5X0qbetj-tPzUdq8.jpg?auto=webp&s=5157ebd9c62ecbd6702e65e8ff00651a3efa8fee', 'width': 1200}, 'variants': {}}]}
Official WizardLM-13B model trained with 250k evolved instructions!
1
[removed]
2023-05-27T14:00:14
[deleted]
1970-01-01T00:00:00
0
{}
13t89j5
false
null
t3_13t89j5
/r/LocalLLaMA/comments/13t89j5/official_wizardlm13b_model_trained_with_250k/
false
false
default
1
null