Dataset columns (name: type, observed range):
- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-40k
- created: timestamp[ns]
- url: string, length 0-780
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns]
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k

Rows follow, one field per line, in this column order.
Api for WizardML and family
1
2023-05-16T14:12:09
https://github.com/aratan/ApiCloudLLaMA
system-developer
github.com
1970-01-01T00:00:00
0
{}
13j6di6
false
null
t3_13j6di6
/r/LocalLLaMA/comments/13j6di6/api_for_wizardml_and_family/
false
false
default
1
null
How do I load a gptq LLaMA model (Vicuna) in .safetensors format?
3
This question is not regarding text-generation-webui; there are plenty of tutorials for that. My question is about loading the model with Hugging Face transformers, or whatever library is needed, to actually use the model in a Python script with other tools (such as langchain or transformer agents). GPTQ-for-LLaMA has no documentation regarding this, and scouring its source code for how it loads the model has been a pain. Any help appreciated. EDIT: SOLVED! After some time getting my head into GPTQ-for-LLaMa, I figured out how it loads the models. If you're in the directory directly above the repo, just do the following:

```
import sys
sys.path.append("GPTQ-for-LLaMa/")
import importlib

import torch  # needed for torch.device below

llama = importlib.import_module("llama_inference")

DEV = torch.device('cuda:0')
model = llama.load_quant(repo, model_path, 4, 128, 0)
model.to(DEV)
```

The DEV var is for loading it onto the GPU. Some notes on this: I've found inference with this to be slow. I'm trying to get a Triton inference server going, but the solutions I've found run from a Dockerfile; do any of you have a solution?
2023-05-16T14:14:41
https://www.reddit.com/r/LocalLLaMA/comments/13j6fsy/how_do_i_load_a_gptq_llama_model_vicuna_in/
KillerX629
self.LocalLLaMA
2023-05-22T14:56:52
0
{}
13j6fsy
false
null
t3_13j6fsy
/r/LocalLLaMA/comments/13j6fsy/how_do_i_load_a_gptq_llama_model_vicuna_in/
false
false
self
3
null
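A minimal sketch, following the loading flow described in the post above, of how one might actually generate text from the loaded GPTQ checkpoint. The repo path, model path, prompt format, and generation settings are placeholders/assumptions (not the project's documented API), and it assumes a transformers version that ships `LlamaTokenizer`.

```python
import sys
sys.path.append("GPTQ-for-LLaMa/")   # repo checked out next to this script, as in the post
import importlib

import torch
from transformers import LlamaTokenizer   # needs a recent transformers (>= 4.28)

llama = importlib.import_module("llama_inference")

DEV = torch.device("cuda:0")

# Placeholder paths: point these at your HF-format model dir and the .safetensors file.
repo = "path/to/vicuna-13b-HF"
model_path = "path/to/vicuna-13b-4bit-128g.safetensors"

model = llama.load_quant(repo, model_path, 4, 128, 0)   # same call as in the post above
model.to(DEV)

tokenizer = LlamaTokenizer.from_pretrained(repo)

prompt = "### Human: What is the capital of France?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(DEV)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```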
Open LLM Server - Run local LLMs via HTTP API in a single command (Linux, Mac, Windows)
8
2023-05-16T14:31:13
https://github.com/dcSpark-AI/open-LLM-server
robkorn
github.com
1970-01-01T00:00:00
0
{}
13j6vby
false
null
t3_13j6vby
/r/LocalLLaMA/comments/13j6vby/open_llm_server_run_local_llms_via_http_api_in_a/
false
false
default
8
null
Long term memory for LLM based assistants? Would DeepMind Retro be a solution?
3
I apologize if any of this sounds stupid. As an "outsider", I've been thinking of how much of a difference and game changer long term memory would be for assistants. I'm mostly interested in programming tasks and the size of the context window is a serious limitation there. After asking around I've been pointed to this paper: [https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens](https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens) Is anyone experimenting with something like that for LLaMA or other open source models? Are there any other potentially better techniques? Thanks in advance. :)
2023-05-16T14:54:28
https://www.reddit.com/r/LocalLLaMA/comments/13j7hil/long_term_memory_for_llm_based_assistants_would/
giesse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13j7hil
false
null
t3_13j7hil
/r/LocalLLaMA/comments/13j7hil/long_term_memory_for_llm_based_assistants_would/
false
false
self
3
null
The Milgram experiment as prompt injection in humans.
21
It occurred to me that the concept of prompt injection as a hack predates LLMs, in a sense. The famous [Milgram experiment](https://en.wikipedia.org/wiki/Milgram_experiment):

> The experimenter told them that they were taking part in "a scientific study of memory and learning", to see what the effect of punishment is on a subject's ability to memorize content. Also, he always clarified that the payment for their participation in the experiment was secured regardless of its development. The subject and actor drew slips of paper to determine their roles. Unknown to the subject, both slips said "teacher". The actor would always claim to have drawn the slip that read "learner", thus guaranteeing that the subject would always be the "teacher".

The clever use of context switching for the actions of the subject led the subject to be divorced from the consequences of their actions. Are we seeing a basic principle at work, rather than a clever hack specific to LLMs?
2023-05-16T15:07:41
https://www.reddit.com/r/LocalLLaMA/comments/13j7uag/the_milgram_experiment_as_prompt_injection_in/
_supert_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13j7uag
false
null
t3_13j7uag
/r/LocalLLaMA/comments/13j7uag/the_milgram_experiment_as_prompt_injection_in/
false
false
self
21
{'enabled': False, 'images': [{'id': 'mCKJKkEKUTezQAoL71o4GL9mWFmFzUUdGav9qhICP0Y', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=108&crop=smart&auto=webp&s=a0b8ccb2bc50bb7aca18367252944c34d67259f0', 'width': 108}, {'height': 274, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=216&crop=smart&auto=webp&s=b6ff85d8968a0052d5e84cb6d1280b4acff2215e', 'width': 216}, {'height': 406, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=320&crop=smart&auto=webp&s=b25840246df98c0e9c2fb8e0b8c5ed29ef959027', 'width': 320}, {'height': 812, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=640&crop=smart&auto=webp&s=ae3f9c9420112badd4378812ab7b7e0ec23346ed', 'width': 640}, {'height': 1219, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=960&crop=smart&auto=webp&s=05b3425253d3c2db19fb91fc4c76655f74c93242', 'width': 960}, {'height': 1371, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?width=1080&crop=smart&auto=webp&s=b4c0343b213c159d52225e0507df3c8303f5ab39', 'width': 1080}], 'source': {'height': 1524, 'url': 'https://external-preview.redd.it/JrzNLONctiagfnvfsnF2M3VErHPfJddIc3PhX3q3nKE.jpg?auto=webp&s=028e978ad4eecb973948fd652f57451042a4ba50', 'width': 1200}, 'variants': {}}]}
Can someone recommend any fundamental books for textgen ai?
11
[deleted]
2023-05-16T15:44:35
[deleted]
1970-01-01T00:00:00
0
{}
13j8sml
false
null
t3_13j8sml
/r/LocalLLaMA/comments/13j8sml/can_someone_recommend_any_fundamental_books_for/
false
false
default
11
null
How can I use an LLM as an ecommerce recommendation engine? Can I blend private and public data to return relevant products from a catalog?
0
I'd like to build a web application that can sit in the front end and return chat results with the most similar products from the backend catalog. What would be the best way to put this together from the currently available applications?
2023-05-16T16:41:30
https://www.reddit.com/r/LocalLLaMA/comments/13jaaym/how_can_i_use_llm_as_a_ecommerce_recommendation/
rturtle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jaaym
false
null
t3_13jaaym
/r/LocalLLaMA/comments/13jaaym/how_can_i_use_llm_as_a_ecommerce_recommendation/
false
false
self
0
null
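One common way to approach the question in the preceding post is retrieval: embed the catalog, embed the user's chat message, and return the most similar products, then let the LLM phrase the reply around them. A minimal sketch using sentence-transformers; the model name and product data are illustrative assumptions, not something taken from the post.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Hypothetical catalog -- in practice this comes from the backend database.
catalog = [
    {"sku": "A1", "text": "Waterproof hiking boots, ankle support, vibram sole"},
    {"sku": "B2", "text": "Trail running shoes, lightweight mesh upper"},
    {"sku": "C3", "text": "Insulated winter parka with detachable hood"},
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder (assumption)
catalog_emb = model.encode([p["text"] for p in catalog], convert_to_tensor=True)

def recommend(query: str, top_k: int = 2):
    """Return the top_k catalog entries most similar to the user's message."""
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, catalog_emb, top_k=top_k)[0]
    return [catalog[h["corpus_id"]] for h in hits]

print(recommend("I need shoes for muddy mountain trails"))
```

The retrieved product texts can then be passed to a local LLM as context so the chat reply references actual catalog items instead of hallucinated ones.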
Best current tutorial for training your own LoRA? Also I've got a 24GB 3090, so which models would you recommend fine tuning on?
46
I'm assuming 4bit, but correct me if I'm wrong there. I'm trying to get these working, but with the current oobabooga pull I keep getting memory limit issues or it won't train at all. Which models and sizes of .txt files have you all found work for fine tuning? What was your memory?
2023-05-16T16:56:36
https://www.reddit.com/r/LocalLLaMA/comments/13japh6/best_current_tutorial_for_training_your_own_lora/
theredknight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13japh6
false
null
t3_13japh6
/r/LocalLLaMA/comments/13japh6/best_current_tutorial_for_training_your_own_lora/
false
false
self
46
null
[deleted by user]
0
[removed]
2023-05-16T17:28:25
[deleted]
1970-01-01T00:00:00
0
{}
13jbjgo
false
null
t3_13jbjgo
/r/LocalLLaMA/comments/13jbjgo/deleted_by_user/
false
false
default
0
null
What exactly is an agent?
2
Is an agent nothing more than a fancy prompt? Any help would be appreciated.
2023-05-16T17:39:11
https://www.reddit.com/r/LocalLLaMA/comments/13jbti5/what_exactly_is_an_agent/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jbti5
false
null
t3_13jbti5
/r/LocalLLaMA/comments/13jbti5/what_exactly_is_an_agent/
false
false
self
2
null
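A rough answer to the question above is that an agent is a prompt plus a loop: the model's output is parsed for an action, the action is executed against a tool, and the observation is fed back into the prompt until the model produces a final answer. A minimal sketch of that loop; `llm()` is a hypothetical stand-in for whatever local model call you actually use.

```python
import datetime

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a local model call (e.g. llama.cpp or an HTTP API)."""
    # A real model would decide what to do; this dummy asks for the clock once, then answers.
    if "OBSERVATION:" not in prompt:
        return "ACTION: clock"
    observation = prompt.rsplit("OBSERVATION:", 1)[-1].strip().splitlines()[0]
    return f"FINAL: It is {observation}"

TOOLS = {"clock": lambda: datetime.datetime.now().isoformat(timespec="seconds")}

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}\nReply with 'ACTION: <tool>' or 'FINAL: <answer>'.\n"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        tool = reply.split(":", 1)[1].strip()               # parse the requested tool
        observation = TOOLS[tool]()                          # execute it
        prompt += f"{reply}\nOBSERVATION: {observation}\n"   # feed the result back to the model
    return "gave up"

print(run_agent("What time is it?"))
```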
How do Character Settings in oobabooga's Text Generation UI work behind the scenes? Seeking advice on utilizing it with the --api option. No coding required, just guidance.
6
Hello everyone! I'm currently utilizing oobabooga's Text Generation UI with the --api flag, and I have a few questions regarding the functionality of the UI. Specifically, I'm interested in understanding how the UI incorporates the character's **name**, **context**, and **greeting** within the Chat Settings tab.

Currently, I am able to send text prompts to the API from my React app using a sample request that I found while browsing the web. I am receiving responses successfully. Here's an example of the request:

```yaml
{
  "prompt": "What is your name?",
  "max_new_tokens": 200,
  "do_sample": true,
  "temperature": 0.7,
  "top_p": 0.5,
  "typical_p": 1,
  "repetition_penalty": 1.2,
  "top_k": 40,
  "min_length": 0,
  "no_repeat_ngram_size": 0,
  "num_beams": 1,
  "penalty_alpha": 0,
  "length_penalty": 1,
  "early_stopping": false,
  "seed": -1,
  "add_bos_token": true,
  "truncation_length": 2048,
  "ban_eos_token": false,
  "skip_special_tokens": true,
  "stopping_strings": []
}
```

However, I'm uncertain about the parameters for the character's **name**, **context**, and **greeting** within this request. I'm also unsure whether these parameters can be utilized with this endpoint. I have a couple of theories on how this might work:

1) The UI possibly appends an additional string to the user's prompt before sending the request, consistently reminding the model about the character's name, context, and any other relevant information or instructions for each request.

2) There might be a method to load the character's YAML file to ensure that all replies adhere to the character settings. However, I'm unsure how to accomplish this using the --api flag.

I'm also curious to know if there are any special characters or keywords that allow me to provide instructions and subsequently use a specific word like "BEGIN," so that anything preceding "BEGIN" is solely utilized as context. Although Silly Tavern was recommended to me, I'm genuinely interested in understanding how this process works. I would greatly appreciate any suggestions, tips, or insights. Thank you in advance!
2023-05-16T18:04:35
https://www.reddit.com/r/LocalLLaMA/comments/13jchhj/how_do_character_settings_in_oobaboogas_text/
masteryoyogi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jchhj
false
null
t3_13jchhj
/r/LocalLLaMA/comments/13jchhj/how_do_character_settings_in_oobaboogas_text/
false
false
self
6
null
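For theory (1) in the post above, the usual approach is to assemble the character's context, name, and greeting into the prompt string yourself before calling the API. A minimal sketch; the endpoint URL, the response shape, and the character fields below are assumptions based on common examples from the time, not confirmed details of the webui's API.

```python
import requests

API_URL = "http://localhost:5000/api/v1/generate"  # assumed blocking-API endpoint; adjust to your setup

# Hypothetical character definition (normally read from the character's YAML file).
character = {
    "name": "Aria",
    "context": "Aria is a cheerful assistant who answers concisely.",
    "greeting": "Hi! How can I help you today?",
}

def chat(user_message: str) -> str:
    # Theory (1): prepend the character info to every request so the model is reminded of it.
    prompt = (
        f"{character['context']}\n"
        f"{character['name']}: {character['greeting']}\n"
        f"You: {user_message}\n"
        f"{character['name']}:"
    )
    payload = {
        "prompt": prompt,
        "max_new_tokens": 200,
        "temperature": 0.7,
        "stopping_strings": ["You:"],  # stop when the model starts speaking for the user
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"].strip()

print(chat("What is your name?"))
```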
Dev Pattern Recognition to algo assumptions?
1
[removed]
2023-05-16T18:28:33
https://www.reddit.com/r/LocalLLaMA/comments/13jd3pm/dev_pattern_recognition_to_algo_assumptions/
TH3NUD3DUD3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jd3pm
false
null
t3_13jd3pm
/r/LocalLLaMA/comments/13jd3pm/dev_pattern_recognition_to_algo_assumptions/
false
false
default
1
null
Tutorial: Run PrivateGPT model locally
1
[removed]
2023-05-16T18:44:10
https://youtu.be/G7iLllmx4qc
zeroninezerotow
youtu.be
1970-01-01T00:00:00
0
{}
13jdiub
false
{'oembed': {'author_name': 'Prompt Engineering', 'author_url': 'https://www.youtube.com/@engineerprompt', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/G7iLllmx4qc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="PrivateGPT: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/G7iLllmx4qc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'PrivateGPT: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_13jdiub
/r/LocalLLaMA/comments/13jdiub/tutorial_run_privategpt_model_locally/
false
false
default
1
null
[deleted by user]
0
[removed]
2023-05-16T19:23:47
[deleted]
1970-01-01T00:00:00
0
{}
13jeknb
false
null
t3_13jeknb
/r/LocalLLaMA/comments/13jeknb/deleted_by_user/
false
false
default
0
null
Effective specialized light models
15
Don't you think it would be nice, for example, to create specialized but lightweight models? For example, a model that programs very well, but does everything else worse. Or a wonderful author of uncensored texts who is very poorly versed in code. We could use one efficient model for one type of task, and then switch to another if necessary.
2023-05-16T20:14:49
https://www.reddit.com/r/LocalLLaMA/comments/13jfwip/effective_specialized_light_models/
dimaff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jfwip
false
null
t3_13jfwip
/r/LocalLLaMA/comments/13jfwip/effective_specialized_light_models/
false
false
self
15
null
What is and isn't possible at various tiers of VRAM? And not just in LLMs?
13
Title. I've been using 13b 4/5bit ggml models at 1600Mhz DDR3 ram. About 700ms/token. I've also run Stable Diffusion in CPU only mode, at about 18 secs/iteration. My GPU's Kepler, it's too old to be supported in anything. Now that you can get massive speedups in GGML through utilizing GPU, I'm thinking of getting a 3060 12gb. Far as I can tell, that should let me run 33b 4/5bit, and maybe relatively fast too. Heck if I max out my DDR3 ram to 32gb I might run 65b 4/5bit too. SD should be in its/sec too I think, instead of secs/its.

But I'm interested in stuff besides just basic inference. I don't know how much VRAM you need to do these things, and how much you can optimize these things to run with lower VRAM, like how SD can be run with 4 or even 2gb VRAM. So I got a few questions:

1. For LLMs, what can and can't you do with various levels of VRAM? I'm a little unfamiliar on what exactly everything is, but there's finetuning, training, merging/mixing, Loras, langchain, vector databases, agents, extensions (SuperBIG looks cool), etc.

2. Also, my dream LLM would be what I think something like LLaVa, MiniGPT-4, ImageBind, or Ask-Anything are? A multimodal/combined text and image (or more) LLM, so I can converse about and generate images with it too, not just text. What's needed for that? MiniGPT's page is saying 12gb VRAM for it, anything lower/better? I've also seen people combine local Stable Diffusion with their LLM model instead, is that better? Can the recent GPU implementation speedup breakthrough be applied to these multimodal models? Can these multimodal models be in 4/5bit too? And can they be GGML? Can you turn any LLM into a multimodal model, since I think MiniGPT was Vicuna? What's the best local multimodal model?

3. For Stable Diffusion, what can and can't you do with various levels of VRAM? Again I'm a little unfamiliar with things. There's stuff like Lora, ControlNet, DreamBooth, HyperNetworks, etc. Also there was a research paper by Google recently where they managed to generate images in 11 seconds on a phone, I think. Was that clickbait, or will speedups also reach everyone else soon, making even fast GPUs even faster?

4. Is there anything else I should keep in mind or know about if I'm wanting to try all these things? Like, which OS is best to do all this (I see people saying they get 40 tokens/sec in Linux, is that the best option? Is it better for SD, multimodals, etc. too? Is dual boot worse?)? Are there bottlenecks from anything, like my older DDR3 ram and PCIE gen 3 lanes? Is having an integrated GPU-type CPU better? Is AIO cooling needed, or can air cooling suffice? Etc. etc.
2023-05-16T20:23:52
https://www.reddit.com/r/LocalLLaMA/comments/13jg504/what_is_and_isnt_possible_at_various_tiers_of/
ThrowawayProgress99
self.LocalLLaMA
2023-05-16T20:31:28
0
{}
13jg504
false
null
t3_13jg504
/r/LocalLLaMA/comments/13jg504/what_is_and_isnt_possible_at_various_tiers_of/
false
false
self
13
null
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
10
Hello, I'm looking for a very simple API to match my very simple usecase. I'm using a fixed prompt passed as `-f file.txt` to llama.cpp, and I would like to pass the instruction as part of the URL with an HTTP GET, then collect the results with curl or wget, so that when I do something like `curl http://127.0.0.1:8080/something/the%20instructions%20I%20sent` I simply get the result - nothing more, nothing less.

I just want to use that in my bash prompt, and maybe vim too with a different prompt, so on a different port, and I'd prefer to use HTTP because I want to eventually move one or both to my desktop. I would prefer to avoid reinventing the wheel, so I wonder if there's already anything that simple, ideally in C or Perl? It would just need to:

- bind to the port
- fork llama, keeping the input FD opened
- then wait for HTTP requests
- loop on requests, feeding the URL to the input FD, and sending back the result that was read from the output FD
- optionally, if it's not too hard: after 2 minutes without activity, stop llama

Can anyone offer a suggestion? I don't need a GUI or anything fancy like JSON or REST, but if the simplest existing option, say, keeps a queue of requests and returns the output in JSON, that's not a dealbreaker: I'll just use jq on curl output :)

Thanks for any help!
2023-05-16T20:50:27
https://www.reddit.com/r/LocalLLaMA/comments/13jgtvz/could_i_get_a_suggestion_for_a_simple_http_api/
csdvrx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jgtvz
false
null
t3_13jgtvz
/r/LocalLLaMA/comments/13jgtvz/could_i_get_a_suggestion_for_a_simple_http_api/
false
false
self
10
null
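A minimal sketch of the shape of such a wrapper, in Python rather than the requested C or Perl, which simply re-runs the llama.cpp binary per GET request instead of keeping one process alive on open FDs; the binary path, model path, and flags are assumptions about a typical llama.cpp checkout, not a recommendation of an existing tool.

```python
# Minimal GET-only wrapper: curl http://127.0.0.1:8080/ask/the%20instructions%20I%20sent
import subprocess
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

LLAMA_BIN = "./main"                       # path to the llama.cpp binary (assumption)
MODEL = "models/7B/ggml-model-q4_0.bin"    # placeholder model path
PROMPT_FILE = "file.txt"                   # the fixed prompt mentioned in the post

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        instruction = urllib.parse.unquote(self.path.split("/", 2)[-1])
        with open(PROMPT_FILE) as fh:
            fixed_prompt = fh.read()
        # Re-launch llama.cpp for every request: simple, but pays the model-load time each call.
        result = subprocess.run(
            [LLAMA_BIN, "-m", MODEL, "-p", fixed_prompt + instruction, "-n", "256"],
            capture_output=True, text=True, timeout=600,
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(result.stdout.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

Keeping a single long-lived llama process and feeding its stdin/stdout, as the post describes, avoids the per-request model load but needs careful handling of output boundaries.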
We’re Gonna Need a Bigger Moat - by Steve Yegge
17
Original: [https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2](https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2)

1. **Emergence of Low Rank Adaptation (LoRA):** LoRA has made large language models (LLMs) composable, allowing them to converge on having the same knowledge, potentially making LLMs more powerful and dangerous.
2. **Rapid Evolution of LLMs:** LLMs are evolving rapidly, with potential uses in various fields, including potentially dangerous applications.
3. **Leak of GPT-class LLMs:** The recent leak of GPT-class LLMs has led to the development of many open-source software (OSS) LLMs.
4. **Meta as the Surprise Winner:** Meta has emerged as the surprise winner due to their architecture being best suited for scaling up OSS LLMs.
5. **Predictions for Smaller LLMs and LLaMA:** The article predicts that smaller LLMs will soon perform as well as more advanced models, with LLaMA potentially becoming the standard architecture.
6. **Significant Social Consequences:** The leak of LLMs may have significant social consequences, although these are difficult to predict.
7. **Impact on the AI Industry:** The LLM-as-Moat model is disappearing, and AI is being commoditized quickly.
8. **Pluggable Platforms and Standardization:** The author believes that pluggable platforms have a way of standardizing, with LLaMA possibly becoming the standard architecture.
9. **SaaS Builders and Data Moats:** SaaS builders may benefit from the commoditization of AI, but relying on LLMs for a moat is risky; having a data moat is recommended.
10. **Sourcegraph’s Moat-Building Capabilities:** The author discusses the moat-building capabilities of Sourcegraph’s platform.
2023-05-16T21:06:57
https://www.reddit.com/r/LocalLLaMA/comments/13jh8ud/were_gonna_need_a_bigger_moat_by_steve_yegge/
goproai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jh8ud
false
null
t3_13jh8ud
/r/LocalLLaMA/comments/13jh8ud/were_gonna_need_a_bigger_moat_by_steve_yegge/
false
false
self
17
{'enabled': False, 'images': [{'id': '2oy5U649B1efZ-4MxaDS3SnxBwvWaX_M68KtADh7Ngg', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=108&crop=smart&auto=webp&s=4d8ccfa16f8aa571f05e0c5f8c37accea8e9225b', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=216&crop=smart&auto=webp&s=4647688ea3f97cd832f64895639b2383ffd918a9', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=320&crop=smart&auto=webp&s=1b90c56aee4f811d2e42bf59f857ddef70e97faa', 'width': 320}, {'height': 275, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=640&crop=smart&auto=webp&s=39154cebab59d992e7e36d9f3b292ca583021145', 'width': 640}, {'height': 413, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=960&crop=smart&auto=webp&s=4520c1f9eddedaf9431799950dbb7aa1d15705b4', 'width': 960}, {'height': 465, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?width=1080&crop=smart&auto=webp&s=45eb3bf1d140a958588ab966fa4e511041c4f4bf', 'width': 1080}], 'source': {'height': 517, 'url': 'https://external-preview.redd.it/6oAGL_W5MYLV_ZZv3YQC3eDtnGa6nzTtFF3kRuMyugY.jpg?auto=webp&s=845accb65a06ee257b126f6ccdcd7abcf30d9cfb', 'width': 1200}, 'variants': {}}]}
OpenAI wants to crack down on open source LLMs, force through a government licensing system, and create a regulatory moat for themselves
523
2023-05-16T21:13:52
https://www.nasdaq.com/articles/openai-chief-goes-before-us-congress-to-propose-licenses-for-building-ai
donthaveacao
nasdaq.com
1970-01-01T00:00:00
0
{}
13jhf44
false
null
t3_13jhf44
/r/LocalLLaMA/comments/13jhf44/openai_wants_to_crack_down_on_open_source_llms/
false
false
default
523
null
Can two A6000s using NVLink pool their VRAM to use the full 96GB for LLMs?
6
Given that models like quantized 65B 4bit are/will be expected to need more than 65GB of memory, would it be possible to connect two A6000 via NVlink to have a working memory of 96GB?
2023-05-16T21:48:58
https://www.reddit.com/r/LocalLLaMA/comments/13jibri/can_two_a6000_using_nvlink_pool_their_vram_memory/
Caffdy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jibri
false
null
t3_13jibri
/r/LocalLLaMA/comments/13jibri/can_two_a6000_using_nvlink_pool_their_vram_memory/
false
false
self
6
null
Noticed TavernAI characters rarely emote when running on Wizard Vicuna uncensored 13B. Is this due to the model itself?
10
So I finally got TavernAI to work with the 13B model via using the new koboldcpp with a GGML model, and although I saw a huge increase in coherency compared to Pygmalion 7B, characters very rarely emote anymore, instead only speaking. After hours of testing, only once did the model generate text with an emote in it. Is this because Pygmalion 7B has been trained specifically for roleplaying in mind? And if so, when might we expect a Pygmalion 13B now that everyone, including those of us with low vram, can finally load 13B models? It feels like we're getting new models every few days, so surely Pygmalion 13B isn't that far off?
2023-05-16T23:17:43
https://www.reddit.com/r/LocalLLaMA/comments/13jkh19/noticed_tavernai_characters_rarely_emote_when/
Megneous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jkh19
false
null
t3_13jkh19
/r/LocalLLaMA/comments/13jkh19/noticed_tavernai_characters_rarely_emote_when/
false
false
self
10
null
Llama CPP and GPT4all Error... Anyone have any idea why?
2
Hello, I am getting this error when trying to run LLaMA or GPT4All. Does anyone know how to fix it? I looked at Hugging Face and GitHub; others have had the same issue, but have not found a resolution.

if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
2023-05-17T00:01:33
https://www.reddit.com/r/LocalLLaMA/comments/13jlh6z/llama_cpp_and_gpt4all_error_anyone_have_any_idea/
Lord_Crypto13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jlh6z
false
null
t3_13jlh6z
/r/LocalLLaMA/comments/13jlh6z/llama_cpp_and_gpt4all_error_anyone_have_any_idea/
false
false
self
2
null
[deleted by user]
1
[removed]
2023-05-17T01:59:22
[deleted]
1970-01-01T00:00:00
0
{}
13jo463
false
null
t3_13jo463
/r/LocalLLaMA/comments/13jo463/deleted_by_user/
false
false
default
1
null
Can’t get my characters prompts to work when using Oobabooga over API.
1
So I'm making a chat bot that can read and respond to Twitch chat. The problem is it won't really use my prompt/character setup. I see the pre-prompt load into the command line, but its response doesn't really match the prompt or the character I set up. I'm using the WizardLM 7B uncensored. When I set up the character in the web UI, is that used when it's generating responses, or does it just use the model and the input prompt?
2023-05-17T03:05:02
https://www.reddit.com/r/LocalLLaMA/comments/13jpjx0/cant_get_my_characters_prompts_to_work_when_using/
opi098514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jpjx0
false
null
t3_13jpjx0
/r/LocalLLaMA/comments/13jpjx0/cant_get_my_characters_prompts_to_work_when_using/
false
false
self
1
null
Need advice on prebuilt PC for running AI apps (RTX 3090, Ryzen 9 5900x, 32GB RAM)
1
Hi everyone, I'm interested in running some AI apps locally on my PC, such as Whisper, Vicuna, Stable Diffusion, etc. I found this prebuilt PC, and I'm wondering if it's good enough for my needs. Here are the specs:

- CPU: Ryzen 9 5900x
- GPU: RTX 3090
- RAM: 32GB DDR4 3200 MHz
- SSD: 1TB NVMe PCIe Gen3.0x4
- Mainboard: ASRock B450M Pro4 R2.0

I'm uncertain whether the CPU is overkill or not, and if the RAM size and speed are sufficient. I also heard that PCIe Gen 4 is better for NVMe SSDs, but this mainboard only supports Gen 3. Will that make a big difference in performance? I would appreciate any opinions or suggestions from you guys. Thanks in advance!
2023-05-17T05:09:21
https://www.reddit.com/r/LocalLLaMA/comments/13js563/need_advice_on_prebuilt_pc_for_running_ai_apps/
Prince-of-Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13js563
false
null
t3_13js563
/r/LocalLLaMA/comments/13js563/need_advice_on_prebuilt_pc_for_running_ai_apps/
false
false
self
1
null
Collaborative renting server for LLM
9
With the emergence of new large language models, there is a need for more computational resources to train and develop these models. However, high-end hardware can be costly and out of reach for many enthusiasts and small teams. I propose an idea for a platform called RentLLAMA where people can come together to share the cost of renting cloud or dedicated AI servers. Here's how it would work: A user proposes a project on RentLLAMA along with a description and required hardware specs. Other interested users join the project, up to a maximum of 10 for example. The members then vote to select a "lead user" who is responsible for setting up the server. RentLLAMA collects payment from each member and handles paying the server bills. The cost is divided equally among all members. For example, if a $300/month server is needed and 10 members join, each pays $30/month. This approach would allow more collaboration and wider access to advanced AI hardware for research and experimentation. The platform would be managed in a decentralized way through the vote of each project's members.
2023-05-17T07:24:39
https://www.reddit.com/r/LocalLLaMA/comments/13juml5/collaborative_renting_server_for_llm/
docloulou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13juml5
false
null
t3_13juml5
/r/LocalLLaMA/comments/13juml5/collaborative_renting_server_for_llm/
false
false
self
9
null
Effects of long term use on hardware
15
I am helping develop a plugin for AutoGPT to interface with Text Gen WebUI and I'll be conducting experiments on how effective the plugin works. AutoGPT can be very heavy on the OpenAI API. And to get it to work with the typical reduced context size of LLMs, multiple chunks of data will need to be sent. I am confident it will work but I'm also confident it will peg video hardware and CPU hardware to max while it is hammering at the API in TGWUI. What is your experience for continuous use? How hard do you think AutoGPT could hammer the video hardware before bad things happen? I am thinking of building in artificial throttling to give the hardware a break between API calls. Thank you for your insight. Edit 1: Thank you all for your information! The Linus Tech Tips reference was particularly useful! I'll implement an optional throttle that is off by default so if people have a concern, they can turn it on.
2023-05-17T07:39:45
https://www.reddit.com/r/LocalLLaMA/comments/13juvr5/effects_of_long_term_use_on_hardware/
cddelgado
self.LocalLLaMA
2023-05-17T15:33:44
0
{}
13juvr5
false
null
t3_13juvr5
/r/LocalLLaMA/comments/13juvr5/effects_of_long_term_use_on_hardware/
false
false
self
15
null
Recursively grab all the text from a website for an LLM
0
Is there a way to scrape all of the text from an entire website to later train an LLM on? I'm not looking to build it myself and reinvent the wheel unless I have to. Edit: why the downvotes? The question was answered below. I didn't have to make anything custom. Imagine if LLMs negatively rated us based on us asking questions - what would our score be?
2023-05-17T08:26:13
https://www.reddit.com/r/LocalLLaMA/comments/13jvoks/recursively_grab_all_the_text_from_a_website_for/
somethedaring
self.LocalLLaMA
2023-05-17T17:09:55
0
{}
13jvoks
false
null
t3_13jvoks
/r/LocalLLaMA/comments/13jvoks/recursively_grab_all_the_text_from_a_website_for/
false
false
self
0
null
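A minimal sketch of a same-domain crawler that collects visible page text, assuming `requests` and `beautifulsoup4` are installed; the starting URL is a placeholder, and existing tools (wget --recursive, Scrapy, trafilatura, etc.) may well be the non-reinvented wheel the post is asking about.

```python
# pip install requests beautifulsoup4
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_text(start_url: str, max_pages: int = 50) -> dict[str, str]:
    """Breadth-first crawl of one domain, returning {url: extracted_text}."""
    domain = urlparse(start_url).netloc
    seen, texts = {start_url}, {}
    queue = deque([start_url])
    while queue and len(texts) < max_pages:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for tag in soup(["script", "style"]):    # drop non-visible content
            tag.decompose()
        texts[url] = soup.get_text(separator="\n", strip=True)
        for a in soup.find_all("a", href=True):  # follow same-domain links only
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return texts

pages = crawl_text("https://example.com")  # placeholder URL
print(f"collected {len(pages)} pages")
```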
How can I scrape text only from Facebook posts?
1
[removed]
2023-05-17T08:44:08
https://www.reddit.com/r/LocalLLaMA/comments/13jvz4i/how_can_i_scrape_text_only_from_facebook_posts/
AlfaidWalid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jvz4i
false
null
t3_13jvz4i
/r/LocalLLaMA/comments/13jvz4i/how_can_i_scrape_text_only_from_facebook_posts/
false
false
default
1
null
LLM with Apple M2 vs Intel 12th Gen
9
I'm looking to buy another machine to work with LLaMA models. Ultimately what is the faster CPU for running general-purpose LLMs before GPU acceleration? M2 or Intel 12th gen? I'll limit it to the best-released processor on both sides.
2023-05-17T09:25:29
https://www.reddit.com/r/LocalLLaMA/comments/13jwonl/llm_with_apple_m2_vs_intel_12th_gen/
somethedaring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jwonl
false
null
t3_13jwonl
/r/LocalLLaMA/comments/13jwonl/llm_with_apple_m2_vs_intel_12th_gen/
false
false
self
9
null
LLaMA and AutoAPI?
2
Does anybody know if we can use AutoGPT with LLaMA (e.g.: via oobabooga APIs)? If so, where can I find the integration instruction or tutorial? Thanks 🙇‍♂️🙏
2023-05-17T10:48:07
https://www.reddit.com/r/LocalLLaMA/comments/13jy71k/llama_and_autoapi/
MichaelBui2812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jy71k
false
null
t3_13jy71k
/r/LocalLLaMA/comments/13jy71k/llama_and_autoapi/
false
false
self
2
null
Using LLaMA as a "real personal assistant"?
26
What I really mean by "real personal assistant" is an AI that:

* Is given all of my personal details: personality, hobbies, lifestyle description, work experience, writing style,...
* Persists all of the given information in a database so that those data will be reloaded when we re-launch/re-install the AI
* Based on that information, provides personalised responses that match my characteristics, are relevant to me & really feel like I created them

Possible?
2023-05-17T10:54:04
https://www.reddit.com/r/LocalLLaMA/comments/13jyb4u/using_llama_as_a_real_personal_assistant/
MichaelBui2812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jyb4u
false
null
t3_13jyb4u
/r/LocalLLaMA/comments/13jyb4u/using_llama_as_a_real_personal_assistant/
false
false
self
26
null
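A minimal sketch of the persistence half of the idea above: keep the personal details in a small local store and prepend them to every prompt so they survive restarts. The file name, profile fields, and the idea of printing the assembled prompt are illustrative assumptions, not a specific tool from the post.

```python
import json
from pathlib import Path

PROFILE_PATH = Path("profile.json")  # hypothetical local store

DEFAULT_PROFILE = {
    "personality": "curious, direct, a bit sarcastic",
    "hobbies": ["hiking", "retro games"],
    "work": "backend developer, mostly Python",
    "writing_style": "short sentences, minimal emojis",
}

def load_profile() -> dict:
    """Reload the saved details on every launch; create the file on first run."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    PROFILE_PATH.write_text(json.dumps(DEFAULT_PROFILE, indent=2))
    return DEFAULT_PROFILE

def build_prompt(profile: dict, user_message: str) -> str:
    details = "\n".join(f"- {k}: {v}" for k, v in profile.items())
    return (
        "You are a personal assistant. Tailor every reply to this user:\n"
        f"{details}\n\n"
        f"User: {user_message}\nAssistant:"
    )

prompt = build_prompt(load_profile(), "Draft a short bio for my website.")
print(prompt)  # feed this string to whatever local LLaMA runner you use
```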
Ok, I’m just curious of the security risks?
7
Let’s just say you are running a model on your server, but this model was trained by someone else. Imagine they trained their model with an innocuous password that allows the prompter to fully utilize a set of embedded hacking capabilities, or even just have it check system time or do a series of checks every few hundred times it gets used before it accesses resources and elevates its privileges to call home. How can anyone here assure that wasn’t done to the model they’re running?
2023-05-17T10:55:21
https://www.reddit.com/r/LocalLLaMA/comments/13jyc0m/ok_im_just_curious_of_the_security_risks/
lordlysparrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jyc0m
false
null
t3_13jyc0m
/r/LocalLLaMA/comments/13jyc0m/ok_im_just_curious_of_the_security_risks/
false
false
self
7
null
"Guidance" a prompting language by Microsoft.
68
Recently released by Microsoft - a template language for "guiding" sampling from LLMs: [https://github.com/microsoft/guidance](https://github.com/microsoft/guidance)

It is interesting that open models like LLaMA are not only supported:

llama = guidance.llms.Transformers("your_path/llama-7b", device=0)

but there is even a "[Guidance acceleration](https://github.com/microsoft/guidance/blob/main/notebooks/guidance_acceleration.ipynb)" mode that improves sampling performance by means of "maintaining the session state" - I guess what is meant there is that they maintain the attention cache.
2023-05-17T11:02:10
https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/
QFTornotQFT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jyh3m
false
null
t3_13jyh3m
/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/
false
false
self
68
{'enabled': False, 'images': [{'id': 'HOYYp67xOlOtV3bRY2ZPsCoUJYYPW6lykIpadrXWViE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=108&crop=smart&auto=webp&s=b0ce880810ffaff85ba1776fb0b58d7b5ffc714f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=216&crop=smart&auto=webp&s=3e28cd94d5c7a49f802b8ee208e92c0095cc1e34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=320&crop=smart&auto=webp&s=1540bd0e3ab2fec91a01021a7a3c4a0a71ca99d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=640&crop=smart&auto=webp&s=d69d6698d276c7df536bacf1fc45a14670eaaab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=960&crop=smart&auto=webp&s=58911b2123b270021f2aba78898d73fa0577275f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?width=1080&crop=smart&auto=webp&s=56c90d28238a269dc271d5dce6058ebea42b265e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PaDHf00RIEw7HN2VKdaR1bExckWmhaOsEyEjWOtJR2s.jpg?auto=webp&s=a3f73e9f4b8bc14a342a13f5b3aa9b00a1da8473', 'width': 1200}, 'variants': {}}]}
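A short sketch of what a Guidance template for a local LLaMA might look like, based on the README examples of the 0.0.x releases current at the time; the template syntax and call pattern here are best-effort assumptions, so check the repo for the exact API.

```python
import guidance

# Point Guidance at a local Hugging Face checkpoint, as in the snippet above.
guidance.llm = guidance.llms.Transformers("your_path/llama-7b", device=0)

# A template mixes fixed text with {{gen}} slots the model fills in,
# which is what constrains/"guides" the sampling.
program = guidance("""Answer the question in one short sentence.
Question: {{question}}
Answer: {{gen 'answer'}}""")

result = program(question="What is the tallest mountain on Earth?")
print(result["answer"])
```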
Next best LLM model?
314
Almost 48 hours have passed since Wizard Mega 13B was released, and yet I can't see any new breakthrough LLM model released in the subreddit? Who is responsible for this mistake? Will there be compensation? How many more hours will we need to wait? Is training a language model that will run entirely and only on the power of my PC, in ways beyond my understanding and comprehension, that mimics a function of the human brain, using methods and software that no university book has yet seriously mentioned, just days/weeks after the previous model was released, too much to ask? Jesus, I feel like this subreddit is way past its golden days.
2023-05-17T12:00:07
https://www.reddit.com/r/LocalLLaMA/comments/13jzosu/next_best_llm_model/
elektroB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13jzosu
false
null
t3_13jzosu
/r/LocalLLaMA/comments/13jzosu/next_best_llm_model/
false
false
self
314
null
Looking for good, up-to-date ratings of models and a timeline of performance for open models and the OpenAI ones.
6
Does someone have a good resource to point me to? I'm curious about the current performance of current models vs GPT-3.5/-4, and want to get a vague idea of when we can expect the open source models to reach the performance of GPT-3.5. I've been playing with models locally, and while impressive, they are not quite at a level where I find them useful to integrate into a real workflow.
2023-05-17T12:52:27
https://www.reddit.com/r/LocalLLaMA/comments/13k0war/looking_to_find_a_good_up_to_date_ratings_of/
dgermain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k0war
false
null
t3_13k0war
/r/LocalLLaMA/comments/13k0war/looking_to_find_a_good_up_to_date_ratings_of/
false
false
self
6
null
LLM@home
43
I think the open source community should create software like [Folding@home](https://en.wikipedia.org/wiki/Folding@home) to collaboratively train a LLM. If we can get enough people to donate their GPU power, then we could build an extremely powerful open source model. One that may even surpass anything big tech can create. Is there any ongoing work similar to this?
2023-05-17T14:20:55
https://www.reddit.com/r/LocalLLaMA/comments/13k35on/llmhome/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k35on
false
null
t3_13k35on
/r/LocalLLaMA/comments/13k35on/llmhome/
false
false
self
43
{'enabled': False, 'images': [{'id': 'i2SMdWEgExesNcnaOqr8a4MGt6MPp3-sn4Z341kYkr4', 'resolutions': [{'height': 116, 'url': 'https://external-preview.redd.it/8nRfSq7QV4ZE5YzM_o6_t9MoeVQsF3KTjZlp7P6qLp0.jpg?width=108&crop=smart&auto=webp&s=15368fc391e412351907dd816346194a9fbc1667', 'width': 108}], 'source': {'height': 216, 'url': 'https://external-preview.redd.it/8nRfSq7QV4ZE5YzM_o6_t9MoeVQsF3KTjZlp7P6qLp0.jpg?auto=webp&s=5e39f09cb195d1d2e9f9ebb7a2ac774d39e18425', 'width': 200}, 'variants': {}}]}
Does 24GB RAM for CPU-only match any usable models?
3
I upgraded (!) my PC to its max 32GB ... only to find that the very cheap 32GB DDR4 memory I had bought was unusable server RAM. One refund later, I am thinking of adding just 16GB to my current 8GB to make 24GB. Does this size match any useful CPU-only models out there? Thanks!
2023-05-17T15:09:22
https://www.reddit.com/r/LocalLLaMA/comments/13k4gl4/does_24gb_ram_for_cpuonly_match_any_usable_models/
MrEloi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k4gl4
false
null
t3_13k4gl4
/r/LocalLLaMA/comments/13k4gl4/does_24gb_ram_for_cpuonly_match_any_usable_models/
false
false
self
3
null
Researching resources for Model Compatibility
1
[removed]
2023-05-17T15:23:27
https://www.reddit.com/r/LocalLLaMA/comments/13k4ur0/rearching_resources_for_model_compatibility/
VucaBAT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k4ur0
false
null
t3_13k4ur0
/r/LocalLLaMA/comments/13k4ur0/rearching_resources_for_model_compatibility/
false
false
default
1
null
OpenLLaMA has released its 400B token checkpoint.
149
Progress is happening, albeit slowly. Someone needs to lend these guys some GPU hours. [GitHub - openlm-research/open_llama](https://github.com/openlm-research/open_llama)
2023-05-17T15:46:36
https://www.reddit.com/r/LocalLLaMA/comments/13k5hvc/openllama_has_released_its_400b_token_checkpoint/
jetro30087
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k5hvc
false
null
t3_13k5hvc
/r/LocalLLaMA/comments/13k5hvc/openllama_has_released_its_400b_token_checkpoint/
false
false
self
149
{'enabled': False, 'images': [{'id': 'pm_lNdI36D02TxMXQt75NXCTdzbr2EMmnXkPOAnkzfQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=108&crop=smart&auto=webp&s=dfc0af441a1b65619a75659da4ea48df3765e795', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=216&crop=smart&auto=webp&s=fed84704bded964534deabc5f0e15b4da3991494', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=320&crop=smart&auto=webp&s=b3d64ee4784424545dff66dc1ed9f88a963d0764', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=640&crop=smart&auto=webp&s=66bab7f6f80b933f5e991b7a34390c5e7a7678e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=960&crop=smart&auto=webp&s=2673979be06b2a8df71e4f68e4fab7ea34513662', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?width=1080&crop=smart&auto=webp&s=ffe2ff73673de10c35ec79aa657121b962ab4f87', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d2hVVHR6hJSaF6Vdauu5Eesz3kGFuTh80Derzzowaw8.jpg?auto=webp&s=eae42ff7b9978b8e46fd8526c5b205d3fd927d5e', 'width': 1200}, 'variants': {}}]}
llama-cpp-python not using GPU
6
Hello, I have llama-cpp-python running but it’s not using my GPU. I have passed in the ngl option but it’s not working. I also tried a cuda devices environment variable (forget which one) but it’s only using CPU. I also had to up the ulimit memory lock limit but still nothing.
2023-05-17T16:26:59
https://www.reddit.com/r/LocalLLaMA/comments/13k6mk3/llamacpppython_not_using_gpu/
Artistic_Okra7288
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k6mk3
false
null
t3_13k6mk3
/r/LocalLLaMA/comments/13k6mk3/llamacpppython_not_using_gpu/
false
false
self
6
null
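A short sketch of the two things that usually matter for the problem described above: the package has to be built with a GPU backend, and `n_gpu_layers` has to be passed when constructing the model. The build command, model path, and prompt are typical examples rather than anything taken from the post.

```python
# Reinstall with a GPU backend first, e.g. a cuBLAS build (adjust to your toolchain):
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/wizard-vicuna-13B.ggml.q5_1.bin",  # placeholder path
    n_gpu_layers=32,   # equivalent of llama.cpp's -ngl; 0 means CPU only
    n_ctx=2048,
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

If the startup log does not mention offloaded layers or the GPU backend, the wheel was built CPU-only and the flag is silently ignored.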
Is it just me, or can Ooba UI not run Perplexity evaluations on GGML models?
2
I'm wondering if this is a problem with my setup, or if this was an oversight that didn't get worked in when GGML and GPU acceleration became a big thing like... 2 days ago haha. Maybe I'm expecting too much considering it's only been a few days, but I was looking forward to running some GGML models through their paces and ranking them. Anyone have any feedback?
2023-05-17T16:35:42
https://www.reddit.com/r/LocalLLaMA/comments/13k6ve6/is_it_just_me_or_can_ooba_ui_not_run_perplexity/
Megneous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k6ve6
false
null
t3_13k6ve6
/r/LocalLLaMA/comments/13k6ve6/is_it_just_me_or_can_ooba_ui_not_run_perplexity/
false
false
self
2
null
What are the best performing local models both for GPTQ and llama.cpp for 8GB VRAM and 16GB RAM?
22
I've been trying to try different ones, and the speed of GPTQ models are pretty good since they're loaded on GPU, however I'm not sure which one would be the best option for what purpose. According to open leaderboard on HF, Vicuna 7B 1.1 GPTQ 4bit runs well and fast, but some GGML models with 13B 4bit/5bit quantization are also good. What do you guys think? Can we create such a list for everyone to see?
2023-05-17T16:48:11
https://www.reddit.com/r/LocalLLaMA/comments/13k77tg/what_are_the_best_performing_local_models_both/
marleen01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k77tg
false
null
t3_13k77tg
/r/LocalLLaMA/comments/13k77tg/what_are_the_best_performing_local_models_both/
false
false
self
22
null
Antilibrary - talk to your documents
23
[removed]
2023-05-17T17:02:03
https://www.reddit.com/r/LocalLLaMA/comments/13k7luv/antilibrary_talk_to_your_documents/
Icaruswept
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13k7luv
false
null
t3_13k7luv
/r/LocalLLaMA/comments/13k7luv/antilibrary_talk_to_your_documents/
false
false
default
23
null
[deleted by user]
0
[removed]
2023-05-17T17:18:49
[deleted]
1970-01-01T00:00:00
0
{}
13k825v
false
null
t3_13k825v
/r/LocalLLaMA/comments/13k825v/deleted_by_user/
false
false
default
0
null
Riddle/Reasoning GGML model tests update + Koboldcpp 1.23 beta is out with OpenCL GPU support!
51
First of all, look at this crazy mofo: [Koboldcpp 1.23 beta](https://github.com/LostRuins/koboldcpp/releases)

This thing is a beast, it works faster than the 1.22 CUDA version for me. I did some testing (2 tests each just in case). I used the max gpulayers I could before I ran out of VRAM. My GPU is a mobile RTX 2070 with 8gb VRAM.

GPT4-X-Vicuna 13b q5_1
- Kobold 1.21.3: 488 ms/t, 468 ms/t
- Kobold 1.22 CUDA, gpulayers 26: 278 ms/t, 283 ms/t
- Kobold 1.23: 375 ms/t, 371 ms/t
- Kobold 1.23, gpulayers 22: 275 ms/t, 273 ms/t

VicUnlocked 30b q5_0
- Kobold 1.21.3: 1092 ms/t, 1094 ms/t
- Kobold 1.22 CUDA, gpulayers 16: 957 ms/t, 944 ms/t
- Kobold 1.23: 863 ms/t, 861 ms/t
- Kobold 1.23, gpulayers 12: 823 ms/t, 797 ms/t

First I noticed that 1.23 is faster than 1.21.3 even on CPU only. For the 30b model, it was faster on CPU than the CUDA one even, not sure why. Also I noticed that the OpenCL version can't use the same amount of gpulayers as the CUDA version, but it doesn't matter, it seems to not affect the performance. If anything, it's faster. This is not the "is Pepsi ok?" version. This is the "Our Coke has free refills" version!

I WILL TEST ALL THE MODELS NOW. No really, I only have 65b models left to go - I got everything else scored on riddles/reasoning so far in my [**spreadsheet**](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit?usp=sharing&ouid=102314596465921370523&rtpof=true&sd=true). Make sure to check the Scores (Draft) and Responses (Draft) tabs for the latest. I will update the FINAL tabs once I got the 65b models tested as well. All models have their responses recorded (yay), and I've been keeping up with the latest models as they come out as well. Also I'll be adding ChatGPT 3.5/4.0, the New Bard, and Claude for reference in there as well.

Oh I should mention, in case anyone didn't notice: [These guys](https://lmsys.org/blog/) are letting you test models side by side and assigning the winners ELO scores just like chess. This is great, but don't be fooled by how close some of the local LLM's are to ChatGPT and the like. And huggingface now has the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) which does multiple tests. They need to catch up though, there's tons of models in the queue and it seems to be stuck at the moment. Also don't forget to hit the **REFRESH** button below all the benchmarks. For some reason when the page loads, it is missing a bunch of stuff if you don't click that.

Both of these sources are awesome, but I'll continue with my riddle/reasoning tests for several reasons:

1) I provide the model responses for you to evaluate your own scores if you disagree with my score
2) I test it on questions/problems that I personally find valuable for myself. I am especially interested in the model's reasoning ability and cleverness.
3) I like to see individual question/answers/score, not just the overall score of the model.
4) I can control my thing better - if a new model comes out, I don't have to wait days/weeks for the other sources to catch up and benchmark it. I can just do it right away.
5) It's fun!

But more variety of testing methodologies is a good thing. This space is blowing up and in a few months we will be looking at.. dozens? hundreds? BILLIONS of local LLM's and we need resources that can organize what's out there, how it performs, and make it easier for you to select which ones you want to play with.

That's all for now!
2023-05-17T17:33:59
https://www.reddit.com/r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/
YearZero
self.LocalLLaMA
2023-05-17T17:59:35
0
{}
13k8h0r
false
null
t3_13k8h0r
/r/LocalLLaMA/comments/13k8h0r/riddlereasoning_ggml_model_tests_update_koboldcpp/
false
false
self
51
{'enabled': False, 'images': [{'id': 'm9KaapXjs2n5MSsVvxZHn_EFREL-HB-nWde3as-mioc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=108&crop=smart&auto=webp&s=fb330bdc2eee4f706524c990eef25371caf258bf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=216&crop=smart&auto=webp&s=dd1999b363478f52ca948177dffbdf51b4a3c91c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=320&crop=smart&auto=webp&s=b29b1557103c0b64c4bc49ee867a9f2bb3a4cb53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=640&crop=smart&auto=webp&s=5e9090f5aa6ecba38fa71943566f42cd7d2e4aff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=960&crop=smart&auto=webp&s=2da6c3e296edb21c5ac8518afc2f3ef7c21f11c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?width=1080&crop=smart&auto=webp&s=310063846758d260458704aa9d5839eb8e7eab43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WUQnE2zmawtDH-EVSaWif6WUMH6KJzdEf2qZ0Cp97IQ.jpg?auto=webp&s=35ede8fa25e519c8e9d2af0e75f53e44f065fe17', 'width': 1200}, 'variants': {}}]}
do i need to learn python to fine tune an LLM with the traditional methods, or create a lora for an LLM?
8
I used to know some C++ and some Java, but have since forgotten it to the point that I am largely code illiterate. I want to try to fine tune a model, or I may stop with just a LoRA if the results from that are adequate. That being said, I had previously assumed I would need to learn Python to do this, but it kind of seems like the tools have reached a point where I could do this with just pre-existing, relatively user-friendly tools like alpaca-lora/alpaca-lora-4b/peft. Can anyone provide some insight on other such tools? Additionally, are there tools designed to create LoRAs for WizardLM or MPT? Most of the research I have done has pointed to people using WizardLM/MPT to create LoRAs for LLaMA.
2023-05-17T17:44:06
https://www.reddit.com/r/LocalLLaMA/comments/13k8qzk/do_i_need_to_learn_python_to_fine_tune_an_llm/
im_disappointed_n_u
self.LocalLLaMA
2023-05-17T20:28:21
0
{}
13k8qzk
false
null
t3_13k8qzk
/r/LocalLLaMA/comments/13k8qzk/do_i_need_to_learn_python_to_fine_tune_an_llm/
false
false
self
8
null
Problem with finetuning model
1
[removed]
2023-05-17T19:03:19
https://www.reddit.com/r/LocalLLaMA/comments/13kauyf/problem_with_finetuning_model/
GooD404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kauyf
false
null
t3_13kauyf
/r/LocalLLaMA/comments/13kauyf/problem_with_finetuning_model/
false
false
default
1
null
Hardware benchmarking
2
[deleted]
2023-05-17T19:22:16
[deleted]
1970-01-01T00:00:00
0
{}
13kbcyj
false
null
t3_13kbcyj
/r/LocalLLaMA/comments/13kbcyj/hardware_benchmarking/
false
false
default
2
null
Noob here. How to activate BLAS for llama in oobabooga? PLEASE help!
1
[removed]
2023-05-17T19:38:21
https://www.reddit.com/r/LocalLLaMA/comments/13kbs7c/noob_here_how_to_activate_blas_for_llama_in/
OobaboogaHelp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kbs7c
false
null
t3_13kbs7c
/r/LocalLLaMA/comments/13kbs7c/noob_here_how_to_activate_blas_for_llama_in/
false
false
default
1
null
Looking for a UI similar to KoboldCPP for llamacpp
4
I'm very new to all this. I find that KoboldCPP continues lines when it doesn't need to, so I'm trying to find something with the same kind of speed/processing as llama.cpp but with a UI, basically using the latest GGML models. Help a noobie out?
2023-05-17T20:31:54
https://www.reddit.com/r/LocalLLaMA/comments/13kd7j7/looking_for_a_ui_similar_to_koboldcpp_for_llamacpp/
Deformator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kd7j7
false
null
t3_13kd7j7
/r/LocalLLaMA/comments/13kd7j7/looking_for_a_ui_similar_to_koboldcpp_for_llamacpp/
false
false
self
4
null
LLM for synology chat, yes I did
4
I created a Python script for running LLMs that uses Synology Chat as the interface. Would love some feedback and/or help: HTTPS://GitHub.com/CaptJaybles/synologyLLM
2023-05-17T20:32:56
https://www.reddit.com/r/LocalLLaMA/comments/13kd8f2/llm_for_synology_chat_yes_i_did/
ProfessionalGuitar32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kd8f2
false
null
t3_13kd8f2
/r/LocalLLaMA/comments/13kd8f2/llm_for_synology_chat_yes_i_did/
false
false
self
4
null
Please explain to a 5-year-old the LoRA concept and how to fine-tune
31
Okay, to be honest, it's been 30 years since I was 5 - but when it comes to LoRA, that's how I feel. Would someone please be so kind as to explain to me in simple terms firstly **roughly the concept** behind it, and secondly give a step-by-step explanation of how I **create a LoRA** ~~fine tuning~~ and how I **apply it** to a ggml model? I only use CPU-llama.cpp, so I do not have a powerful GPU.

So let's say I have a book as a raw text file and I want to have such a ~~fine tuning~~ low-rank adapter so that an LLM can respond more adequately to the content of the book.

* How, what, where do I need to do this? I assume I would need to upload the text file and a model to some cloud GPU, right?
* How long will such a training/fine-tuning take, or what costs should I expect? I would be very grateful for very **specific** advice on what cloud services are available for this.
* And what comes then? Will a new file be created by the ~~fine-tuning~~ ... process (what is the correct term, actually?)
* How do I apply it in llama.cpp? On which model?

EDIT: I must add briefly: for some reason I thought fine-tuning and LoRA were roughly the same thing - sorry. What I actually mean is specifically LoRA (I recently saw a kind of comic or meme about this: a character wears a different headgear every time and thus becomes sometimes a fireman, sometimes a policeman, sometimes a surgeon, etc. But the figure has not changed 'from the inside', so no *fine tuning*). And the example with the raw text of a book is only a fictional example. I know that for something like that a vector embedding and search is better suited, or even just a normal text search. The reason I want to know about LoRA is only educational for now. I would really like to apply learning by doing here ;D

* Does anything have to be processed or converted beforehand?

I hope that not only I could benefit from this, but also other newcomers to this topic, because the documentation is either very difficult to find or too complicated to be understood by laymen.
2023-05-17T20:59:09
https://www.reddit.com/r/LocalLLaMA/comments/13kdwl2/please_explain_to_a_5_years_old_lora_concept_and/
Evening_Ad6637
self.LocalLLaMA
2023-05-17T22:27:28
0
{}
13kdwl2
false
null
t3_13kdwl2
/r/LocalLLaMA/comments/13kdwl2/please_explain_to_a_5_years_old_lora_concept_and/
false
false
self
31
null
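A minimal sketch of what creating a LoRA looks like in code with Hugging Face peft, since the post above asks for the concrete steps; the base model path, dataset handling, and hyperparameters are illustrative assumptions, and the result is a small adapter saved alongside (not inside) the base model. Converting or merging such an adapter for use with a ggml model in llama.cpp is a separate step not shown here.

```python
# pip install transformers peft datasets accelerate
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "path/to/llama-7b-hf"                 # placeholder: a local LLaMA checkpoint in HF format
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: freeze the base weights and train small low-rank matrices added to a few layers.
lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                      lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # typically well under 1% of the full model

# The "book" from the post as a plain text file, chunked by the tokenizer.
data = load_dataset("text", data_files={"train": "book.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")            # writes only the small adapter, not a new full model
```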
A little demo integrating the alpaca model w/ my open-source search app
52
2023-05-17T21:38:28
https://v.redd.it/oenr7spzng0b1
andyndino
v.redd.it
1970-01-01T00:00:00
0
{}
13key7p
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/oenr7spzng0b1/DASHPlaylist.mpd?a=1694755110%2CMGI4NGQwYmJkOGI5MjM4NWFkNTFmZjJlMDY2ZjFhNzBiZmY1YTVkMmIxYTc1Yjk0OTIxODlmNzFlNzhjYmU2ZQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/oenr7spzng0b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/oenr7spzng0b1/HLSPlaylist.m3u8?a=1694755110%2CZGRmMTdmMjNiMDJjODI4MDQ1OGIzMDA5NDEyM2EyYTM0MmY1ZmE2ZmNjZTg1Mzg4NzMxMjg2MmFiNjljOTRmNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oenr7spzng0b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_13key7p
/r/LocalLLaMA/comments/13key7p/a_little_demo_integration_the_alpaca_model_w_my/
false
false
https://b.thumbs.redditm…-iTjBGpfIuas.jpg
52
{'enabled': False, 'images': [{'id': '7Kkei14aNiTOOPEK7y04ZA4gvPHlesiBi7mM4m_nFQ8', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a6d2966f4ecd6a09e8d12110bbf9867af1d48a9', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=216&crop=smart&format=pjpg&auto=webp&s=748c7c1c4d6a91265e5f7ac05bacdfe644b2f42f', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=320&crop=smart&format=pjpg&auto=webp&s=d9988678437453606271218590e7aff308b22eef', 'width': 320}, {'height': 507, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=640&crop=smart&format=pjpg&auto=webp&s=4aebcdd8544da6964ccb506312b2c971661365fe', 'width': 640}, {'height': 761, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=960&crop=smart&format=pjpg&auto=webp&s=a564fe02a31dcee01674f5a3c9df9628e6ad8555', 'width': 960}, {'height': 856, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=052f2a31db04dafa4990b476aeea0117af196187', 'width': 1080}], 'source': {'height': 1522, 'url': 'https://external-preview.redd.it/nDZuwPb2Hxg8KzGQSWuaQyUNgiwmlR9DwZ_g_WdyIeE.png?format=pjpg&auto=webp&s=ef2d329dee9d43df48c71ad084df2f8ab8503394', 'width': 1920}, 'variants': {}}]}
Looking to add a conversational model to a web app
1
Currently looking at https://huggingface.co/ehartford/WizardLM-13B-Uncensored?text=My+name+is+Thomas+and+my+main with the deploy option for javascript. However, I am curious about the API here and why I would choose to go this route vs. running the model locally so that it doesn't have a rate limit. Further, how simple is it to add the actual model locally so that it isn't using an API? I'm not familiar with that or where the documentation is to explore that option more
2023-05-17T23:52:39
https://www.reddit.com/r/LocalLLaMA/comments/13kicqd/looking_to_add_a_conversational_model_to_a_web_app/
UpDown
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kicqd
false
null
t3_13kicqd
/r/LocalLLaMA/comments/13kicqd/looking_to_add_a_conversational_model_to_a_web_app/
false
false
self
1
{'enabled': False, 'images': [{'id': 'G1nl_IUI_4T90MWS7hPfvajkGrGVtVlBe7-hikDbCJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=108&crop=smart&auto=webp&s=3723e81c3dda45706b3275533d688762ed693e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=216&crop=smart&auto=webp&s=aa30800fed77ed23fa00ad0117127ddab537da13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=320&crop=smart&auto=webp&s=8648f8481c1a71b34628337380bbd5ab61ae4889', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=640&crop=smart&auto=webp&s=054a654f2e90b527e2a0e5c2c3fc47ead397dc54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=960&crop=smart&auto=webp&s=a370540936d82b5eaf105c12a79a90e8ab63a611', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?width=1080&crop=smart&auto=webp&s=58723b62d389654b8095985808adaacd4beacb29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nUYJCJ4Wd48yWucK-iCTSDQmUfqKekJFWJI-qowoq9M.jpg?auto=webp&s=9ab2642fcca96ebdd40b5775ff2ea4403da23752', 'width': 1200}, 'variants': {}}]}
Koboldcpp with dual GPUs of different makes
3
Does Koboldcpp use multiple GPUs? If so, with the latest version that uses OpenCL, could I use an AMD 6700 12GB and an Intel 770 16GB to have 28GB of VRAM? It's my understanding that with Nvidia cards you don't need NVLink to take advantage of both cards, so I was wondering if the same may be true for OpenCL-based cards.
2023-05-17T23:58:32
https://www.reddit.com/r/LocalLLaMA/comments/13kihgi/koboldcpp_with_dual_gpus_of_different_makes/
ccbadd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kihgi
false
null
t3_13kihgi
/r/LocalLLaMA/comments/13kihgi/koboldcpp_with_dual_gpus_of_different_makes/
false
false
self
3
null
Are the LLM's being designed to generate datasets to train other LLMs?
10
I was thinking about making a LoRA with my own dataset, but the most challenging part of making a good model is having a good dataset. Are there any models that have been made for the purpose of generating datasets? Or are people just using the best LLMs available (like ChatGPT) for now to generate datasets? I want to train on my own knowledge base; this is what I am interested in doing:

- Generate a list of questions: split my documents into chunks and feed them into a "dataset LLM" which comes up with questions about the provided text.

- Create the Q/A pairs: ask the LLM each of the questions together with the provided text and have it give me an answer.

- Train the LoRA on the resulting dataset.

Using smaller models like TheBloke_wizardLM-7B-HF, it doesn't always come up with relevant questions. I was wondering if we are always going to have to use larger models to make datasets for smaller models, or if we could make a smaller model that's specifically designed for generating datasets to train new small models.
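A rough sketch of the chunk → questions → answers loop described above; `generate` is a placeholder for whatever backend is used (a llama.cpp server, a Hugging Face pipeline, or a hosted API), and the prompt wording, chunk size and output format are arbitrary choices.

```python
# Illustrative pipeline: chunk documents, ask a model for questions, then answers.
# `generate(prompt)` is a placeholder for any text-generation backend.
import json, textwrap

def chunk(text, size=1500):
    return textwrap.wrap(text, size, break_long_words=False, replace_whitespace=False)

def build_pairs(document, generate):
    pairs = []
    for passage in chunk(document):
        q_prompt = ("Read the passage and write three questions it can answer.\n\n"
                    f"Passage:\n{passage}\n\nQuestions:\n")
        questions = [q.strip("- ").strip()
                     for q in generate(q_prompt).splitlines() if q.strip()]
        for q in questions:
            a_prompt = f"Passage:\n{passage}\n\nQuestion: {q}\nAnswer:"
            pairs.append({"instruction": q, "input": "",
                          "output": generate(a_prompt).strip()})
    return pairs

# Example usage (writes an alpaca-style JSON file):
# with open("dataset.json", "w") as f:
#     json.dump(build_pairs(open("doc.txt").read(), generate), f, indent=2)
```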
2023-05-18T00:26:52
https://www.reddit.com/r/LocalLLaMA/comments/13kj5hp/are_the_llms_being_designed_to_generate_datasets/
NeverEndingToast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kj5hp
false
null
t3_13kj5hp
/r/LocalLLaMA/comments/13kj5hp/are_the_llms_being_designed_to_generate_datasets/
false
false
self
10
null
Error While Finetuning
2
[deleted]
2023-05-18T01:43:16
[deleted]
1970-01-01T00:00:00
0
{}
13kkv8v
false
null
t3_13kkv8v
/r/LocalLLaMA/comments/13kkv8v/error_while_finetuning/
false
false
default
2
null
Struggling with settings for different models
2
I have TavernAI set up and am loading various models, but they’re always dog slow. I have a 3090 w/24 gigs and 32 gigs of ram, and I’m assuming I’m doing something wrong since I get similar results with everything, like 1 word every 5-10 seconds. My issue is, on the Model page/tab in TavernAI, there are many different fields, bars, checkboxes, and drop down menus which all probably relate to my performance, but I cannot for the life of me find documentation on these anywhere. I don’t know where else to look that I haven’t already checked. Plus, if they don’t list these requirements on the relevant model huggingface page, how are you supposed to know or calculate these settings? I don’t see any of this info on any model pages I’ve tested, or on any of the myriad git projects I have loaded, on rentry, discord, etc.. so I figured I’d ask as I’ve stalled on this project after weeks of beating my head against it. Say I have a WizardLM13b or 7b for example, as these are the current ones I’m messing with. What should wbits, group size, pre_layer, threads and so on be set to? I understand what 4bit and CPU means in this context, as the model names indicate this, but I’m not using these at the moment. I should be able to load these models via gpu. Leaving everything blank or default doesn’t seem to work. What else should I be doing to get this info? Guessing doesn’t seem right either. Sorry I’m so confused, but I’ve seriously sunk many hours into this already and I’m usually good at figuring new tech out. It seems like everywhere assumes you know what you’re doing in this regard. Thanks!
2023-05-18T01:51:33
https://www.reddit.com/r/LocalLLaMA/comments/13kl1yc/struggling_with_settings_for_different_models/
StriveForMediocrity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kl1yc
false
null
t3_13kl1yc
/r/LocalLLaMA/comments/13kl1yc/struggling_with_settings_for_different_models/
false
false
self
2
null
Wizard-Vicuna-7B-Uncensored
251
Due to popular demand, today I released 7B version of Wizard Vicuna Uncensored, which also includes the fixes I made to the Wizard-Vicuna dataset for the 13B version. [https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored) u/The-Bloke
2023-05-18T01:56:02
https://www.reddit.com/r/LocalLLaMA/comments/13kl5hn/wizardvicuna7buncensored/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
13kl5hn
false
null
t3_13kl5hn
/r/LocalLLaMA/comments/13kl5hn/wizardvicuna7buncensored/
false
false
self
251
{'enabled': False, 'images': [{'id': 'Zvu7MMbJfuNi9sVEJ9fhsfi0hvuH9mGjp9PPWuqsbp4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=108&crop=smart&auto=webp&s=3e00d4c6312f4141e97a1237a464fda6d5b7401d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=216&crop=smart&auto=webp&s=278f650c999b78b0735b106d5a5e220deb8e305e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=320&crop=smart&auto=webp&s=82275d55769db839c04c657067f9462dc9794eb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=640&crop=smart&auto=webp&s=c9c3303c662530bd49e42a1fe70427827a0a613a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=960&crop=smart&auto=webp&s=b057d69ae884ad30eadc8c18a567a5dc2da47f9d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?width=1080&crop=smart&auto=webp&s=0de860dda19901a5b6bc924bb07f061b11d05aad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tQrj8o6usB_UYlKrxY1SQP1IyOh_a4M-mw3rtiCt050.jpg?auto=webp&s=41785821f2f5133f9ccb0a7c1db18cd46b250c3f', 'width': 1200}, 'variants': {}}]}
Context size explanation?
1
[removed]
2023-05-18T03:42:27
https://www.reddit.com/r/LocalLLaMA/comments/13kngyl/context_size_explanation/
entered_apprentice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kngyl
false
null
t3_13kngyl
/r/LocalLLaMA/comments/13kngyl/context_size_explanation/
false
false
default
1
null
Spent the entire day playing around with Local Llms (mainly wizard vicuna 13b) then compared it against chatgpt
68
and then it hit me: the local models ran roughly as fast as ChatGPT did a few months ago, which says a lot for the progress of llama.cpp and friends, seeing as OpenAI have a multimillion-dollar setup while I'm running it on a 7-year-old laptop with 16GB of RAM that I paid $1,500 for. It's amazing, the work that the developers are doing - hats off to you guys. 👏👏
2023-05-18T04:34:07
https://www.reddit.com/r/LocalLLaMA/comments/13koi04/spent_the_entire_day_playing_around_with_local/
fresh_n_clean
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13koi04
false
null
t3_13koi04
/r/LocalLLaMA/comments/13koi04/spent_the_entire_day_playing_around_with_local/
false
false
self
68
null
Has anyone published a dataset of prompt text?
4
I have been looking for a collection of prompt texts which people have posed to LLMs, but can't find anything. Does anyone know if such a dataset is available anywhere for personal use? **Edited to add:** Thanks for the input, all! I finally found what I needed in https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/HTML_cleaned_raw_dataset/ thanks to u/ruryrury -- hundreds of thousands of human-generated prompt texts. I apologize for not making it clear that I was looking specifically for prompt texts, not training data, not models, not interfaces for prompting models, and appreciate all of your suggestions.
2023-05-18T05:22:29
https://www.reddit.com/r/LocalLLaMA/comments/13kpehh/has_anyone_published_a_dataset_of_prompt_text/
ttkciar
self.LocalLLaMA
2023-05-18T19:46:57
0
{}
13kpehh
false
null
t3_13kpehh
/r/LocalLLaMA/comments/13kpehh/has_anyone_published_a_dataset_of_prompt_text/
false
false
self
4
{'enabled': False, 'images': [{'id': 'hCJm1WvoukTm8o3iKxx6PgypOTukUiQ9MSNgq1s3NQE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=108&crop=smart&auto=webp&s=53cfd5649ccabc02caf81c85c0ef6fd93c0d6753', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=216&crop=smart&auto=webp&s=4b2776e4ab9a0394aada31f03054955a7242c6b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=320&crop=smart&auto=webp&s=5fa1a900b723e80f7b65e561e5028867be4b58c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=640&crop=smart&auto=webp&s=13412c8d161e4a13edf3f7ad8b8750684a005536', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=960&crop=smart&auto=webp&s=f73fac0c06956e47104c1b3c606a3edaf1b1d98f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=1080&crop=smart&auto=webp&s=200773d04c8debe3865bdc395a318126791fffde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?auto=webp&s=6130b1031b11bc2639db3f24677561e5a4e73b10', 'width': 1200}, 'variants': {}}]}
Other subreddits?
1
[removed]
2023-05-18T05:53:09
https://www.reddit.com/r/LocalLLaMA/comments/13kpy41/other_subreddits/
entered_apprentice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kpy41
false
null
t3_13kpy41
/r/LocalLLaMA/comments/13kpy41/other_subreddits/
false
false
default
1
null
Local LLMs for coding?
3
[removed]
2023-05-18T06:07:22
https://www.reddit.com/r/LocalLLaMA/comments/13kq79d/local_llms_for_coding/
entered_apprentice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kq79d
false
null
t3_13kq79d
/r/LocalLLaMA/comments/13kq79d/local_llms_for_coding/
false
false
default
3
null
How do LLMs “think”?
0
[removed]
2023-05-18T06:08:35
https://www.reddit.com/r/LocalLLaMA/comments/13kq800/how_do_llms_think/
entered_apprentice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kq800
false
null
t3_13kq800
/r/LocalLLaMA/comments/13kq800/how_do_llms_think/
false
false
default
0
null
What is quantisizing mean?
14
I see 4 bits, 8 bits, etc. Is it just not storing the weights as doubles (which are 8 bytes, i.e. 64 bits)? ELI5 please.
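A toy illustration of the idea - not the exact GGML/GPTQ schemes, which work block-wise and round more cleverly - showing how a handful of float weights can be stored as 4-bit integers plus one scale factor:

```python
import numpy as np

w = np.array([-0.62, 0.031, 0.48, -0.11, 0.27], dtype=np.float32)

# Map the block's range onto 4-bit integer levels; store only the scale + small ints.
scale = np.abs(w).max() / 7                       # symmetric levels -7..7
q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
w_hat = q.astype(np.float32) * scale              # dequantized values used at inference

print(q)       # [-7  0  5 -1  3]
print(w_hat)   # close to w but not exact -> small quality loss, big memory saving
```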
2023-05-18T06:14:00
https://www.reddit.com/r/LocalLLaMA/comments/13kqbci/what_is_quantisizing_mean/
entered_apprentice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kqbci
false
null
t3_13kqbci
/r/LocalLLaMA/comments/13kqbci/what_is_quantisizing_mean/
false
false
self
14
null
Do you need a good CPU for training models if you have a good GPU?
6
Asking for a friend: we are getting into training models and are thinking about putting all of the budget into the GPU (a 3090 or such) and getting a cheap CPU.
2023-05-18T06:14:01
https://www.reddit.com/r/LocalLLaMA/comments/13kqbct/do_you_need_a_good_cpu_for_training_models_if_you/
Impossible_Belt_7757
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kqbct
false
null
t3_13kqbct
/r/LocalLLaMA/comments/13kqbct/do_you_need_a_good_cpu_for_training_models_if_you/
false
false
self
6
null
*update* Completely restructured the repo. One of the most in-depth collections of all things LLM. ~500 Stars and counting
99
2023-05-18T08:13:19
https://github.com/underlines/awesome-marketing-datascience
_underlines_
github.com
1970-01-01T00:00:00
0
{}
13ksfcr
false
null
t3_13ksfcr
/r/LocalLLaMA/comments/13ksfcr/update_completely_restructured_the_repo_one_of/
false
false
https://b.thumbs.redditm…JejJO441q8CM.jpg
99
{'enabled': False, 'images': [{'id': 'rPJXDTXvadx5BA_jrYzZNm1GLb6uTxg97tNKy9txPcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=108&crop=smart&auto=webp&s=ac6fe89bcd0dc67925c293c1093de1d4b6e7f50e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=216&crop=smart&auto=webp&s=0c2474e81631b202e002ffa0872b5ec9d02ab020', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=320&crop=smart&auto=webp&s=fb82b97652b1dbfe0f8939a5227599776ec4eb3f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=640&crop=smart&auto=webp&s=7918fe0a5821351f1853b8b717b3df1c6b00fcd9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=960&crop=smart&auto=webp&s=7e0491c6e15342d4aec3cca65a511b36bf5b141f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?width=1080&crop=smart&auto=webp&s=2b89b4410aa4f943d53244dac104d112524e22fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t49q26HVsUejGSZ7mQaKyWdSH8lFvhxT_soUP7EdO2E.jpg?auto=webp&s=e6d17305167872aa55a988a189e7be1060994c5b', 'width': 1200}, 'variants': {}}]}
Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :)
54
I've been working on an API-first inference server for fast inference of GPTQ-quantized LLaMA models, including multi-GPU. The idea is to provide a server which runs in the background and which can be queried much like OpenAI models can be queried using their API library. This may happen from the same machine or via the network. The core functionality is working. It can load the 65B model onto two 4090s and produce inference at 10 to 12 tokens per second, depending on different variables. Single-GPU and other model/GPU configurations are a matter of changing some configs and minor code adjustments, but should be doable quite easily. The (for me) heavy lifting of making the Triton kernel work on multi-GPU is done. Additionally, one can send requests to the model via POST and get streaming or non-streaming output as the reply. Furthermore, an additional control flow is available, which makes it possible to stop text generation in a clean and non-buggy way via HTTP request. Concepts for a pause/continue control flow as well as a "stop-on-specific-string" flow are ready to be implemented. The repo can be found [here](https://github.com/Dhaladom/TALIS); the readme is not up to date and the code is a bit messy. If anybody wants to continue (or use) this project, feel free to contact me. I'd happily hand it over and assist with questions. For personal reasons, I cannot continue. Thanks for your attention.
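For context, talking to a server like this usually boils down to a plain HTTP call. The route and field names below are invented for illustration (check the repo for the real ones); the pattern is what matters.

```python
import requests

# Hypothetical endpoint and payload names -- consult the TALIS repo for the actual API.
URL = "http://localhost:8080/generate"
payload = {"prompt": "Write a haiku about llamas.", "max_new_tokens": 128, "stream": True}

with requests.post(URL, json=payload, stream=True) as r:
    for chunk in r.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)

# A separate control request could stop generation mid-stream, e.g.:
# requests.post("http://localhost:8080/stop")
```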
2023-05-18T10:02:43
https://www.reddit.com/r/LocalLLaMA/comments/13kued5/have_to_abandon_my_almost_finished/
MasterH0rnet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kued5
false
null
t3_13kued5
/r/LocalLLaMA/comments/13kued5/have_to_abandon_my_almost_finished/
false
false
self
54
{'enabled': False, 'images': [{'id': 'UVAzNyepFDDzqT3dzunV4tOEVdns17i0IuW98PQT8Ag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=108&crop=smart&auto=webp&s=8a10e747885093006d644407b4b14443c075e81b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=216&crop=smart&auto=webp&s=ab218714567c243d0b1094204ea228352fa296aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=320&crop=smart&auto=webp&s=4109fdfb66ac1b8da95721395d166547af7813eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=640&crop=smart&auto=webp&s=7dfb284fb6acdd7b10d3b11b494968e9aa661f2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=960&crop=smart&auto=webp&s=612169214b6c4defca684f8cc7c9c8b7823ca4ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?width=1080&crop=smart&auto=webp&s=8dac1326acea8756dda2ef13139c2b80d17f1f0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-X_m6iz9IKMboNzWu7yg1X06CY4Iax1WY3OpRr2K-AQ.jpg?auto=webp&s=ca59d2ec3b9ac4424926c69552ea6840b384b8e6', 'width': 1200}, 'variants': {}}]}
Issue starting with Azure server
1
[deleted]
2023-05-18T11:30:04
[deleted]
1970-01-01T00:00:00
0
{}
13kwbya
false
null
t3_13kwbya
/r/LocalLLaMA/comments/13kwbya/issue_starting_with_azure_server/
false
false
default
1
null
GPU quota requests
1
Has anyone else had trouble getting GPUs on GCP/AWS/Azure? The most I've gotten is in GCP, where they gave me 1 gpu. Every other quota request I make gets denied without explanation. Anyone have any advice on how to rent GPUs? Thanks!
2023-05-18T12:53:16
https://www.reddit.com/r/LocalLLaMA/comments/13kydvh/gpu_quota_requests/
maiclazyuncle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kydvh
false
null
t3_13kydvh
/r/LocalLLaMA/comments/13kydvh/gpu_quota_requests/
false
false
self
1
null
I made a simple agent demo with Guidance and wizard-mega-13B-GPTQ, feel quite promising.
85
Hi, I just discovered Guidance last night through this post: [https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/](https://www.reddit.com/r/LocalLLaMA/comments/13jyh3m/guidance_a_prompting_language_by_microsoft/) It looks interesting and I think it can solve my problem with the previous ReAct agent I built with Langchain. The Langchain agent often doesn't follow the instructions, especially when working with small LLMs (3B-7B). It is painful for me to handle this (optimizing the prompt, manually setting some stop conditions). The ReAct framework seems to work well with Guidance, which forces the LLM to follow my instructions strictly. GitHub: [https://github.com/QuangBK/localLLM_guidance](https://github.com/QuangBK/localLLM_guidance) My post: [https://medium.com/better-programming/a-simple-agent-with-guidance-and-local-llm-c0865c97eaa9](https://medium.com/better-programming/a-simple-agent-with-guidance-and-local-llm-c0865c97eaa9) Hope it helps :)
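The core trick can be sketched without the library: the program emits the Thought/Action/Action Input scaffolding itself and the model only fills constrained slots, so even a small model cannot wander off-format. The snippet below is a conceptual, library-free illustration (not the Guidance API), with `generate` and `choose` standing in for real model calls.

```python
# Template-constrained ReAct, in miniature. `generate(prompt, stop)` returns model text
# up to `stop`; `choose(prompt, options)` returns one of the allowed strings. Both are
# placeholders for a real backend. The Calculator's eval() is for the toy example only.

TOOLS = {"Calculator": lambda x: str(eval(x)), "Echo": lambda x: x}

def react_step(question, history, generate, choose):
    prompt = f"Question: {question}\n{history}Thought:"
    thought = generate(prompt, stop="\n")
    # The tool name is *selected*, not free-form -- the model cannot invent a tool.
    action = choose(prompt + thought + "\nAction:", options=list(TOOLS))
    arg = generate(prompt + thought + f"\nAction: {action}\nAction Input:", stop="\n")
    observation = TOOLS[action](arg.strip())
    return history + (f"Thought:{thought}\nAction: {action}\n"
                      f"Action Input:{arg}\nObservation: {observation}\n")
```

Dispatching from the selected action name to a Python function is then an ordinary dictionary lookup; nothing depends on the model formatting its output correctly on its own.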
2023-05-18T13:53:52
https://www.reddit.com/r/LocalLLaMA/comments/13kzubz/i_made_a_simple_agent_demo_with_guidance_and/
Unhappy-Reaction2054
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kzubz
false
null
t3_13kzubz
/r/LocalLLaMA/comments/13kzubz/i_made_a_simple_agent_demo_with_guidance_and/
false
false
self
85
{'enabled': False, 'images': [{'id': 'C2daJj-mtKtomZBtF4uzGILi1nHKFU4Pgmd3mqn3DLg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=108&crop=smart&auto=webp&s=f89e667c03a084103f7329b3b365c058d503be95', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=216&crop=smart&auto=webp&s=b90e9115b2aa301de6896d42010afd3ae4650bf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=320&crop=smart&auto=webp&s=9cb79380d2755a6e4ff3f2db3a25ec8f6392efc6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=640&crop=smart&auto=webp&s=cecf4e5d2cc52a1b5c00df8e1a582b1ac6c66708', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=960&crop=smart&auto=webp&s=9b22dda8e966abd87f6531fed7db661ac9a4c51f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?width=1080&crop=smart&auto=webp&s=51bcb7520c9bafd950790927e7b9df73c6dc3e06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r87qhdWE9obf5DJlXnP_c2gpCWfcsni8G7U2VOOdal4.jpg?auto=webp&s=c3a4513b559dc7e44948bec68496de376de75266', 'width': 1200}, 'variants': {}}]}
Created yet another tool to execute generated code, with or without Langchain, with a self managed virtualenv
11
So one of the things that proved to be a problem in my previous iteration of fine-tuning a LoRA for code generation for the Langchain Python REPL was that most of the time the errors were about a missing package. To attempt to fix this, I created a Python package that manages a virtualenv, stores the source code in a local file and allows the code to be executed through the virtualenv interpreter. As a bonus, we can also apply pylint and see the linting score of the code. This is the result: [https://github.com/paolorechia/code-it](https://github.com/paolorechia/code-it) As usual, I also wrote up an explanation of the inner workings of the package / prompts etc.: [https://medium.com/@paolorechia/building-a-custom-langchain-tool-for-generating-executing-code-fa20a3c89cfd](https://medium.com/@paolorechia/building-a-custom-langchain-tool-for-generating-executing-code-fa20a3c89cfd) Nothing too shiny, but it was super fun to develop - I think I might try to get something done with Microsoft's guidance library next though, as it seems like I was partially reinventing the wheel here.
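The virtualenv-management part can be approximated with nothing but the standard library. The sketch below is illustrative, not the actual code-it implementation: it creates a venv once, installs packages into it, and runs generated source files through that interpreter while capturing output for feedback.

```python
# Self-managed virtualenv sketch: one venv per project, run generated code through it.
import subprocess, sys, venv
from pathlib import Path

class VenvRunner:
    def __init__(self, root="./sandbox_env"):
        self.root = Path(root)
        if not self.root.exists():
            venv.EnvBuilder(with_pip=True).create(self.root)
        bin_dir = "Scripts" if sys.platform == "win32" else "bin"
        self.python = self.root / bin_dir / "python"

    def install(self, *packages):
        subprocess.run([str(self.python), "-m", "pip", "install", *packages], check=True)

    def run(self, source_code, filename="generated.py"):
        path = self.root / filename
        path.write_text(source_code)
        # Capture stdout/stderr so missing-package errors can be fed back to the LLM.
        return subprocess.run([str(self.python), str(path)],
                              capture_output=True, text=True)

# Example usage:
# runner = VenvRunner()
# runner.install("requests")
# print(runner.run("import requests; print(requests.__version__)").stdout)
```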
2023-05-18T13:55:15
https://www.reddit.com/r/LocalLLaMA/comments/13kzvgu/created_yet_another_tool_to_execute_generated/
rustedbits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13kzvgu
false
null
t3_13kzvgu
/r/LocalLLaMA/comments/13kzvgu/created_yet_another_tool_to_execute_generated/
false
false
self
11
{'enabled': False, 'images': [{'id': 'zV4LWqetRQbk3_AZUEmDnNel2nxb3c2FMGfhkX_9FK0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=108&crop=smart&auto=webp&s=fd9d124c41e6080e095d484f4ff30e1c2d4e5e4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=216&crop=smart&auto=webp&s=e19a90c4f7c08ae353f590bdeb35565e9e52c4c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=320&crop=smart&auto=webp&s=ffad5899baead79360be1b49b660825d1ba28fb2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=640&crop=smart&auto=webp&s=b2d3cdc1c7403976e43d9303a2c4959fc7921a78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=960&crop=smart&auto=webp&s=00dbf3fc1dfd997a6206f3a8bb475dce0e9b31e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?width=1080&crop=smart&auto=webp&s=d22965c81b3d442f0afb0208a86983a222428266', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dOu7eJcmNwX_5oyXGggP8KIq7ZhArwEEsbll4dLAE40.jpg?auto=webp&s=1005d7a2a8015e9e9968db21e0e482952dcc5885', 'width': 1200}, 'variants': {}}]}
A comparative look at (GGML) quantization and parameter size
58
## Preamble/credits Based on: [the llama.cpp repo README](https://github.com/ggerganov/llama.cpp/blob/dc271c52ed65e7c8dfcbaaf84dabb1f788e4f3d0/README.md#quantization) section on quantization. Looking at that, it's a little hard to assess the how different levels of quantization actually affect the quality, and what choices would actually cause a perceptible change. Hopefully this post will shed a little light. While this post is about GGML, the general idea/trends should be applicable to other types of quantization and models, for example GPTQ. First, perplexity isn't the be-all-end-all of assessing a the quality of a model. However, as far as I know given a specific full-precision model, if you process that data in a way that increases perplexity, the result is never an improvement in quality. So this is useful for comparing quantization formats for one exact version of a model, but not necessarily as useful comparing different models (or even different versions of the same model like Vicuna 1.0 vs Vicuna 1.1). ## Parameter size and perplexity A good starting point for assessing quality is 7b vs 13b models. Most people would agree there is a significant improvement between a 7b model (LLaMA will be used as the reference) and a 13b model. According to the chart in the llama.cpp repo, the difference in perplexity between a 16 bit (essentially full precision) 7b model and the 13b variant is 0.6523 (7b at 5.9066, 13b at 5.2543). For percentage calculations below, we'll consider the difference between the 13b and 7b to be 100%. So something that causes perplexity to increase by `0.6523 / 2` = ` 0.3261` would be 50% and so on. ### 7b from|to|ppl diff|pct diff -|-|-|- 16bit|Q8_0|0.0003|0.04% Q8_0|Q5_1|0.4150|6.32% Q5_1|Q5_0|0.0381|5.84% Q5_0|Q4_1|0.1048|16.06% Q4_1|Q4_0|0.1703|26.10% &nbsp;|&nbsp;|&nbsp;| Q5_1|Q4_0|0.2084|31.94% Q5_1|Q4_1|0.1429|21.90% |16bit|Q4_0|0.2450|37.55% ### 13b from|to|ppl diff|pct diff -|-|-|- 16bit|Q8_0|0.0005|0.07% Q8_0|Q5_1|0.0158|2.42% Q5_1|Q5_0|0.0150|2.29% Q5_0|Q4_1|0.0751|11.51% Q4_1|Q4_0|0.0253|3.87% &nbsp;|&nbsp;|&nbsp;| Q5_1|Q4_0|0.1154|17.69% Q5_1|Q4_1|0.0900|13.79% 16bit|Q4_0|0.1317|20.20% ## 13b to 7b from (13b)|to (7b)|ppl diff|pct diff -|-|-|- 16bit|16bit|0.6523|100% Q5_1|Q5_1|0.6775|103.86% Q4_0|Q4_0|0.7705|118.12% Q4_0|Q5_1|0.5621|80.65% Q4_0|16bit|0.5206|79.80% ## Comments From this, we can see you get ~80% of the improvement of going from a 7b to a 13b model even if you're going from a full precision 7b to the worst/most heavily quantized Q4_0 13b variant. So running the model with more parameters is basically always going to be better, even if it's heavily quantized. (This may not apply for other quantization levels like 3bit, 2bit, 1bit.) It's already pretty well known, but this also shows that larger models tolerate quantization better. There are no figures for 33b, 65b models here but one would expect the trend to continue. From looking at this, there's probably a pretty good chance a 3bit (maybe even 2bit) 65b model would be better than a full precision 13b. It's also pretty clear there's a large difference between Q5_1 and Q4_0. Q4_0 should be avoided if at all possible, especially for smaller models. (Unless it lets you go up to the next sized model.)
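For anyone reproducing the percentage columns, the arithmetic is simply each perplexity delta expressed as a fraction of the 7B→13B full-precision gap:

```python
# Reference gap: 7B f16 vs 13B f16 perplexity (values quoted above).
GAP = 5.9066 - 5.2543          # = 0.6523

def pct_of_gap(ppl_increase):
    """Express a perplexity increase as a percentage of the 7B -> 13B gap."""
    return ppl_increase / GAP * 100

print(round(pct_of_gap(0.2450), 2))   # ~37.56, the 7B "16bit -> Q4_0" row
```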
2023-05-18T14:22:50
https://www.reddit.com/r/LocalLLaMA/comments/13l0j7m/a_comparative_look_at_ggml_quantization_and/
KerfuffleV2
self.LocalLLaMA
2023-05-18T17:17:48
0
{}
13l0j7m
false
null
t3_13l0j7m
/r/LocalLLaMA/comments/13l0j7m/a_comparative_look_at_ggml_quantization_and/
false
false
self
58
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Error while finetuning
2
I was working with johnsmith0031 repo for 4 bit training, and I'm getting following error in the finetuning stage. Can anyone suggest how I can resolve the issue? LOG: ```python ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /content/alpaca_lora_4bit/finetune.py:65 in <module> │ │ │ │ 62 │ raise Exception('batch_size need to be larger than mbatch_size.') │ │ 63 │ │ 64 # Load Basic Model │ │ ❱ 65 model, tokenizer = load_llama_model_4bit_low_ram(ft_config.llama_q4_co │ │ 66 │ │ │ │ │ │ │ │ │ │ │ │ ft_config.llama_q4_m │ │ 67 │ │ │ │ │ │ │ │ │ │ │ │ device_map=ft_config │ │ 68 │ │ │ │ │ │ │ │ │ │ │ │ groupsize=ft_config. │ │ │ │ /content/alpaca_lora_4bit/autograd_4bit.py:216 in │ │ load_llama_model_4bit_low_ram │ │ │ │ 213 │ if half: │ │ 214 │ │ model_to_half(model) │ │ 215 │ │ │ ❱ 216 │ tokenizer = LlamaTokenizer.from_pretrained(config_path) │ │ 217 │ tokenizer.truncation_side = 'left' │ │ 218 │ │ │ 219 │ print(Style.BRIGHT + Fore.GREEN + f"Loaded the model in {(time.tim │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base │ │ .py:1812 in from_pretrained │ │ │ │ 1809 │ │ │ else: │ │ 1810 │ │ │ │ logger.info(f"loading file {file_path} from cache at │ │ 1811 │ │ │ │ ❱ 1812 │ │ return cls._from_pretrained( │ │ 1813 │ │ │ resolved_vocab_files, │ │ 1814 │ │ │ pretrained_model_name_or_path, │ │ 1815 │ │ │ init_configuration, │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base │ │ .py:1975 in _from_pretrained │ │ │ │ 1972 │ │ │ │ 1973 │ │ # Instantiate tokenizer. │ │ 1974 │ │ try: │ │ ❱ 1975 │ │ │ tokenizer = cls(*init_inputs, **init_kwargs) │ │ 1976 │ │ except OSError: │ │ 1977 │ │ │ raise OSError( │ │ 1978 │ │ │ │ "Unable to load vocabulary from file. " │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/models/llama/tokenizati │ │ on_llama.py:96 in __init__ │ │ │ │ 93 │ │ self.add_bos_token = add_bos_token │ │ 94 │ │ self.add_eos_token = add_eos_token │ │ 95 │ │ self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwa │ │ ❱ 96 │ │ self.sp_model.Load(vocab_file) │ │ 97 │ │ │ 98 │ def __getstate__(self): │ │ 99 │ │ state = self.__dict__.copy() │ │ │ │ /usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py:905 in │ │ Load │ │ │ │ 902 │ │ raise RuntimeError('model_file and model_proto must be exclus │ │ 903 │ if model_proto: │ │ 904 │ │ return self.LoadFromSerializedProto(model_proto) │ │ ❱ 905 │ return self.LoadFromFile(model_file) │ │ 906 │ │ 907 │ │ 908 # Register SentencePieceProcessor in _sentencepiece: │ │ │ │ /usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py:310 in │ │ LoadFromFile │ │ │ │ 307 │ │ return _sentencepiece.SentencePieceProcessor_serialized_model │ │ 308 │ │ │ 309 │ def LoadFromFile(self, arg): │ │ ❱ 310 │ │ return _sentencepiece.SentencePieceProcessor_LoadFromFile(sel │ │ 311 │ │ │ 312 │ def _EncodeAsIds(self, text, enable_sampling, nbest_size, alpha, │ │ 313 │ │ return _sentencepiece.SentencePieceProcessor__EncodeAsIds(sel │ ╰──────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ``` and training script goes as follows: ```python !python finetune.py "/content/data.json" \ --ds_type=alpaca \ --lora_out_dir=./test/ \ --llama_q4_config_dir="/content/text-generation-webui/models/wcde_llama-7b-4bit-gr128/config.json" \ 
--llama_q4_model="/content/text-generation-webui/models/wcde_llama-7b-4bit-gr128/llama-7b-4bit-gr128.pt" \ --mbatch_size=1 \ --batch_size=4 \ --epochs=3 \ --lr=3e-4 \ --cutoff_len=128 \ --lora_r=8 \ --lora_alpha=16 \ --lora_dropout=0.05 \ --warmup_steps=5 \ --save_steps=50 \ --save_total_limit=3 \ --logging_steps=5 \ --groupsize=128 \ --xformers \ --backend=cuda ```
2023-05-18T14:22:51
https://www.reddit.com/r/LocalLLaMA/comments/13l0j89/error_while_finetuning/
1azytux
self.LocalLLaMA
2023-05-19T13:30:06
0
{}
13l0j89
false
null
t3_13l0j89
/r/LocalLLaMA/comments/13l0j89/error_while_finetuning/
false
false
self
2
null
Any kind of LLM for OCR?
8
Having a lot of trouble searching for this info (relatedly, where do people find their LLM news besides here?). I'm trying to figure out if I can combine OCR with an LLM to improve the output, but I'm finding zero info... any hints from anyone?
2023-05-18T14:24:32
https://www.reddit.com/r/LocalLLaMA/comments/13l0kos/any_kind_of_llm_for_ocr/
noneabove1182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13l0kos
false
null
t3_13l0kos
/r/LocalLLaMA/comments/13l0kos/any_kind_of_llm_for_ocr/
false
false
self
8
null
I made a hitlerbot so I can mock him from time to time, and I think I may have interfered with the past. You don’t need to thank me.
0
[removed]
2023-05-18T16:37:13
[deleted]
1970-01-01T00:00:00
0
{}
13l3vhy
false
null
t3_13l3vhy
/r/LocalLLaMA/comments/13l3vhy/i_made_a_hitlerbot_so_i_can_mock_him_from_time_to/
false
false
default
0
null
Any tools available to use llama in text editors?
5
I use copilot a lot when programming and I think the tab autocomplete is really neat. I was wondering if anyone knows if there are plugins or apps (for word, docs, etc) that allow you to connect it to a local server for text completions.
2023-05-18T17:15:46
https://www.reddit.com/r/LocalLLaMA/comments/13l4usk/any_tools_available_to_use_llama_in_text_editors/
-General-Zero-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13l4usk
false
null
t3_13l4usk
/r/LocalLLaMA/comments/13l4usk/any_tools_available_to_use_llama_in_text_editors/
false
false
self
5
null
Introducing TokenHawk: Local Llama Inference in WebGPU
44
2023-05-18T17:20:36
https://github.com/kayvr/token-hawk
kayvr
github.com
1970-01-01T00:00:00
0
{}
13l4z2e
false
null
t3_13l4z2e
/r/LocalLLaMA/comments/13l4z2e/introducing_tokenhawk_local_llama_inference_in/
false
false
https://a.thumbs.redditm…58I0MiErQgH0.jpg
44
{'enabled': False, 'images': [{'id': '8CZtAnMJkXfTRQHPKKSrVsh6Mmb5cAAzsVRnNi9Zs38', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=108&crop=smart&auto=webp&s=ccdcccd54e15125e5a437d711585fb0dba3d707d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=216&crop=smart&auto=webp&s=04a15008e9c532ad02acf443786138e065cc4a40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=320&crop=smart&auto=webp&s=2f032f2dd28d338049450c21f5d62a5bb3144af0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=640&crop=smart&auto=webp&s=14a6c350e33c6efa70f87ed9e0fe755d56806eff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=960&crop=smart&auto=webp&s=610c3f4c704a8954ce143c98e617e7cb12e0fc99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=1080&crop=smart&auto=webp&s=38e729b9b6ceeaaa1a7fc808472ef719783e7456', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?auto=webp&s=466e4ec2dc926182acc7b5fc6f5a1634f4b7e7b8', 'width': 1200}, 'variants': {}}]}
Help with Random Characters and Words on Output
1
[deleted]
2023-05-18T18:55:33
[deleted]
1970-01-01T00:00:00
0
{}
13l7hk8
false
null
t3_13l7hk8
/r/LocalLLaMA/comments/13l7hk8/help_with_random_characters_and_words_on_output/
false
false
default
1
null
How to use MetaIX/GPT4-X-Alpasta-30b-4bit with oobabooga ?
2
[removed]
2023-05-18T19:48:46
https://www.reddit.com/r/LocalLLaMA/comments/13l8vni/how_to_use_metaixgpt4xalpasta30b4bit_with/
karljoaquin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13l8vni
false
null
t3_13l8vni
/r/LocalLLaMA/comments/13l8vni/how_to_use_metaixgpt4xalpasta30b4bit_with/
false
false
default
2
null
Pygmalion has released the new Pygmalion 13B and Metharme 13B! These are LLaMA based models for chat and instruction.
185
[removed]
2023-05-18T20:57:07
https://www.reddit.com/r/LocalLLaMA/comments/13lan4q/pygmalion_has_released_the_new_pygmalion_13b_and/
Creative-Rest-2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lan4q
false
null
t3_13lan4q
/r/LocalLLaMA/comments/13lan4q/pygmalion_has_released_the_new_pygmalion_13b_and/
false
false
default
185
null
Possibility of deploying local LLM on 7 yr old laptop
9
Hi, I have a 7-year-old ThinkPad P50 laptop with the following specs: • Processor: Intel Core i7-6820HQ (8MB cache, up to 3.6 GHz) • Operating System: Windows 10 Pro 64-bit / Ubuntu 18 • Display: 15.6" FHD (1920x1080) IPS non-touch • Memory: 64GB (16x4) DDR4 2133MHz SODIMM • Graphics Card: NVIDIA Quadro M1000M 2GB • Base: P50, NVIDIA Quadro M1000M 2GB, Intel Core i7-6820HQ (8MB cache, up to 3.6 GHz) Would I be able to use this laptop to deploy a local LLM? If so, what would the recommendations be? My use case is to use an LLM for document querying (PDFs), similar to ChatGPT or filechat.io, which uses the OpenAI API. Thanks!
2023-05-18T21:02:24
https://www.reddit.com/r/LocalLLaMA/comments/13lasa6/possibility_of_deploying_local_llm_on_7_yr_old/
mindseye73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lasa6
false
null
t3_13lasa6
/r/LocalLLaMA/comments/13lasa6/possibility_of_deploying_local_llm_on_7_yr_old/
false
false
self
9
null
Manticore 13B - updated model from OpenAccess AI Collective
120
# OpenAccess AI Collective have just released Manticore 13B An updated version of Wizard Mega 13B Available for instant demo on a GGML-powered space at: [https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml](https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml) Available for local use as GGML and GPTQ quantisations at: [https://huggingface.co/TheBloke/Manticore-13B-GPTQ](https://huggingface.co/TheBloke/Manticore-13B-GPTQ) [https://huggingface.co/TheBloke/Manticore-13B-GGML](https://huggingface.co/TheBloke/Manticore-13B-GGML) Full details in their repo at: [https://huggingface.co/openaccess-ai-collective/manticore-13b](https://huggingface.co/openaccess-ai-collective/manticore-13b) # Manticore 13B - Preview Release (previously Wizard Mega) Manticore 13B is a Llama 13B model fine-tuned on the following datasets: * [**ShareGPT**](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) \- based on a cleaned and de-suped subset * [**WizardLM**](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) * [**Wizard-Vicuna**](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) * [**subset of QingyiSi/Alpaca-CoT for roleplay and CoT**](https://huggingface.co/QingyiSi/Alpaca-CoT) * [**GPT4-LLM-Cleaned**](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned) * [**GPTeacher-General-Instruct**](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) * ARC-Easy & ARC-Challenge - instruct augmented for detailed responses * mmlu: instruct augmented for detailed responses subset including * abstract\_algebra * conceptual\_physics * formal\_logic * high\_school\_physics * logical\_fallacies * [**hellaswag**](https://huggingface.co/datasets/hellaswag) \- 5K row subset of instruct augmented for concise responses * [**metaeval/ScienceQA\_text\_only**](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) \- instruct for concise responses * [**openai/summarize\_from\_feedback**](https://huggingface.co/datasets/openai/summarize_from_feedback) \- instruct augmented tl;dr summarization
2023-05-18T21:49:01
https://www.reddit.com/r/LocalLLaMA/comments/13lbyiw/manticore_13b_updated_model_from_openaccess_ai/
The-Bloke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lbyiw
false
null
t3_13lbyiw
/r/LocalLLaMA/comments/13lbyiw/manticore_13b_updated_model_from_openaccess_ai/
false
false
self
120
{'enabled': False, 'images': [{'id': '_CArfIRMSglzqoNebT4bvXRqZjSX6dMbq8siIyBeSlQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=108&crop=smart&auto=webp&s=14116147869d4de3e20df891cd959520caa7e65c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=216&crop=smart&auto=webp&s=84215b2cc08a9f7187f84e2393953e3e9f825b3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=320&crop=smart&auto=webp&s=5accfc8157b78c4e1273b037363e99eadabfb77e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=640&crop=smart&auto=webp&s=76fdba486515f64c325cde58ed60905285606d76', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=960&crop=smart&auto=webp&s=bd88bad1f4d279a4df2ae01d712c822853eba68a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=1080&crop=smart&auto=webp&s=6a6c97c58469a03cda215ca4639932a8f70ba147', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?auto=webp&s=7eb416108823b2805e4912b9bda04b94475ee677', 'width': 1200}, 'variants': {}}]}
Equipment needed to run Dolly 2.0 3b model?
1
I'm trying to build a very simple chat bot for a business that needs to be able to understand and talk about a few hundred lines of data. It doesn't need to know about any external data, and would even be preferable not to, but it would be great if I can fine tune the model to learn information fed to it from the user. Dolly 2.0 seems interesting as it's open source and available for commercial use, and I imagine for my use case the 3b model may be acceptable? If so, what sort of hardware would I need if I want to be able to process at least 10 or so tokens per second? I've heard of people running the 7b model with a 3060, but wasn't able to find out how well it ran at those specs, or how different the 3b model is in terms of both specs and ability. Is there a way to find this out before I potentially spend a bunch of money on new hardware? If I could get it running with regular consumer hardware, is it going to be extremely slow? Or would running a 3b model, and/or limiting the data it needs to know allow me to run it locally at a somewhat conversational speed (Think the speed of GPT4)?
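A back-of-the-envelope way to size this, assuming memory ≈ parameter count × bytes per weight plus some overhead for activations and context (real usage varies by runtime): dolly-v2-3b has roughly 2.8B parameters and dolly-v2-7b roughly 6.9B.

```python
def rough_vram_gb(n_params_billion, bits_per_weight, overhead_gb=1.5):
    """Very rough inference footprint; actual usage depends on runtime and context length."""
    return n_params_billion * bits_per_weight / 8 + overhead_gb

for name, params in (("dolly-v2-3b", 2.8), ("dolly-v2-7b", 6.9)):
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{rough_vram_gb(params, bits):.1f} GB")
```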
2023-05-18T22:04:40
https://www.reddit.com/r/LocalLLaMA/comments/13lccm4/equipment_needed_to_run_dolly_20_3b_model/
TheNomadicAspie
self.LocalLLaMA
2023-05-18T22:09:02
0
{}
13lccm4
false
null
t3_13lccm4
/r/LocalLLaMA/comments/13lccm4/equipment_needed_to_run_dolly_20_3b_model/
false
false
self
1
null
What is the smallest ggml model available?
52
I'm a bit obsessed with the idea that we can have an LLM “demoscene” built around small models. I have already tried a few fresh 1B models, but I want to go even smaller. Has anyone seen ggml models of less than 1B parameters? I want to evaluate their performance.
2023-05-18T22:57:47
https://www.reddit.com/r/LocalLLaMA/comments/13ldnlw/what_is_the_smallest_ggml_model_available/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ldnlw
false
null
t3_13ldnlw
/r/LocalLLaMA/comments/13ldnlw/what_is_the_smallest_ggml_model_available/
false
false
self
52
null
Is there a place for stupid questions?
3
[deleted]
2023-05-19T01:17:57
[deleted]
1970-01-01T00:00:00
0
{}
13lgwog
false
null
t3_13lgwog
/r/LocalLLaMA/comments/13lgwog/is_there_a_place_for_stupid_questions/
false
false
default
3
null
Local model response time?
0
How long is a response from a locally downloaded model expected to take. I'm running wizard-mega-13B.ggml.q5\_1.bin from disk and it takes as long as 2 minutes to get a response.
2023-05-19T02:31:46
https://www.reddit.com/r/LocalLLaMA/comments/13liju9/local_model_response_time/
Jl_btdipsbro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13liju9
false
null
t3_13liju9
/r/LocalLLaMA/comments/13liju9/local_model_response_time/
false
false
self
0
null
4060 Ti 16GB in July or 3060 12GB now?
12
Is the extra 4GB of VRAM worth the wait and the price difference (~100$)? As far as I know both can only run 13B 4bit models.
2023-05-19T05:56:01
https://www.reddit.com/r/LocalLLaMA/comments/13lmjjc/4060_ti_16gb_in_july_or_3060_12gb_now/
regunakyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lmjjc
false
null
t3_13lmjjc
/r/LocalLLaMA/comments/13lmjjc/4060_ti_16gb_in_july_or_3060_12gb_now/
false
false
self
12
null
Looking to selfhost Llama on remote server, could use some help
3
My goal: run the 30B GPTQ OpenAssistant model on a remote server with API access. Link to model: [https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ) My progress: a Docker container running text-generation-webui with the --public-api flag on, to use it as an API, with cloudflared to create a quick tunnel. Everything is working on the remote server; the only thing I am having trouble with is getting the quick tunnel to work. My current development setup is throwing a CUDA-image-based Docker container on [vast.ai](https://vast.ai) and working with a quick tunnel from cloudflared (if I get it to work). My question: what is everyone using to run these models on remote servers and access them via API? My home desktop setup is too weak to run these kinds of models, so I am interested in both production and development setups. There have to be better solutions out there to self-host.
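For reference, once the tunnel is up, hitting the webui's blocking API is a single POST. The endpoint and field names below follow the api-example script shipped with text-generation-webui around this time; verify against your version, since that API has changed repeatedly.

```python
import requests

# Blocking API of text-generation-webui's api extension (port 5000 by default);
# with --public-api the same endpoint is reachable through the printed trycloudflare URL.
HOST = "https://your-tunnel.trycloudflare.com"   # or http://localhost:5000

resp = requests.post(f"{HOST}/api/v1/generate",
                     json={"prompt": "### Human: Hello!\n### Assistant:",
                           "max_new_tokens": 200, "temperature": 0.7})
print(resp.json()["results"][0]["text"])
```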
2023-05-19T07:00:55
https://www.reddit.com/r/LocalLLaMA/comments/13lnpgt/looking_to_selfhost_llama_on_remote_server_could/
jules241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lnpgt
false
null
t3_13lnpgt
/r/LocalLLaMA/comments/13lnpgt/looking_to_selfhost_llama_on_remote_server_could/
false
false
self
3
{'enabled': False, 'images': [{'id': '0_MVpXePzAudCyIn3uCjoPJqV69xuDEEw1P6fohf8tE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=108&crop=smart&auto=webp&s=3ed6d1099b03f9a3b3beddf35162b07d2c0ae313', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=216&crop=smart&auto=webp&s=d44e68c4ae0e61fd2ec54c83b18e3c4f5dedee96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=320&crop=smart&auto=webp&s=21a3ef65b2fae55091cb55d4f17575f02272205d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=640&crop=smart&auto=webp&s=f96a677430862c92840df78ff670c8adac963f38', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=960&crop=smart&auto=webp&s=f99338ee52828dde3c507c1bc7b2db4788654e36', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=1080&crop=smart&auto=webp&s=11974c7b9704d28101e1c835d6a1b75e33fc11ef', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?auto=webp&s=5d6edde5db3af471073c75c924e4f7add8ff5389', 'width': 1200}, 'variants': {}}]}
Where's my new model?
7
2023-05-19T07:27:18
https://i.redd.it/yq5feiiwpq0b1.jpg
ArmoredBattalion
i.redd.it
1970-01-01T00:00:00
0
{}
13lo7gx
false
null
t3_13lo7gx
/r/LocalLLaMA/comments/13lo7gx/wheres_my_new_model/
true
false
nsfw
7
{'enabled': True, 'images': [{'id': 'AAZlZI3So2meQRXQDIIjG0StU43Z70qqGwPFKZd0wvI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&auto=webp&s=45722aac1aa128a8da490930ecdc8525d54fa950', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&auto=webp&s=7de7986b6f0aae722e79b3a5121716fa0bd2072c', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&auto=webp&s=0dd9ffae8fbfd41fb8ad4079cff529d7def5e86a', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?auto=webp&s=eb1b5a8e5f4d235894eadccd94a7644a5a92d414', 'width': 600}, 'variants': {'nsfw': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=0cce187aa108df76e7e308ac317a3273cdc76da1', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3170c19f1423f59fe06fdd01be388d7654db4b08', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=8ad44f506a52f5eb879f74c6d784bb60be7ed1c5', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?blur=40&format=pjpg&auto=webp&s=2dae17d43b667357cc29c56e1367936c7a77729f', 'width': 600}}, 'obfuscated': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=0cce187aa108df76e7e308ac317a3273cdc76da1', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3170c19f1423f59fe06fdd01be388d7654db4b08', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=8ad44f506a52f5eb879f74c6d784bb60be7ed1c5', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?blur=40&format=pjpg&auto=webp&s=2dae17d43b667357cc29c56e1367936c7a77729f', 'width': 600}}}}]}
If you previously had bad results with Stable Vicuna outside of ooba, fix the special_tokens_map...
19
https://huggingface.co/CarperAI/stable-vicuna-13b-delta/blob/main/special_tokens_map.json I'm not sure how this could've happened or if it could indicate some other issue, but bos_token is wrong at least ``` { "bos_token": "</s>", "eos_token": "</s>", "pad_token": "[PAD]", "unk_token": "</s>" } ``` should be ``` { "bos_token": "<s>", "eos_token": "</s>", "pad_token": "[PAD]", "unk_token": "</s>" } ``` I wonder if this had anything to do with people sleeping on Stable Vicuna
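After patching the file (or passing overrides in code), a quick sanity check of what the tokenizer actually picked up, assuming a local copy of the model directory:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/stable-vicuna-13b")  # local copy with the fixed file
print(tok.bos_token, tok.bos_token_id)   # expect <s> and id 1 for LLaMA-family tokenizers
print(tok.eos_token, tok.eos_token_id)   # expect </s> and id 2
```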
2023-05-19T08:06:17
https://www.reddit.com/r/LocalLLaMA/comments/13lowzm/if_you_previously_had_bad_results_with_stable/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lowzm
false
null
t3_13lowzm
/r/LocalLLaMA/comments/13lowzm/if_you_previously_had_bad_results_with_stable/
false
false
self
19
{'enabled': False, 'images': [{'id': '0CCfFmTZ60XSoL0dJ_ynCVdsBH-fmk8Xc8-W9nJPLRo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=108&crop=smart&auto=webp&s=b3dcbf82fa79313c69c0fd1509879605b7de7e6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=216&crop=smart&auto=webp&s=5ad08519600fc54c1096d3ed7fc28b08a226c3b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=320&crop=smart&auto=webp&s=abe6f4dba84e9b69df8ec9ec0bd895d9b2bc36cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=640&crop=smart&auto=webp&s=6c449dee1f29a55c79119d8e92b7b420edf45cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=960&crop=smart&auto=webp&s=0c39a7620494ab79dfde8c3c2a031eb283d26d83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=1080&crop=smart&auto=webp&s=f737fd6eb21bb1d89e0e7d7849d880c3d2d63561', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?auto=webp&s=4c46ff0e82b80fa085b40ab60da914bc22e4f5af', 'width': 1200}, 'variants': {}}]}
Using iGPU for llama models?
0
[deleted]
2023-05-19T08:35:33
[deleted]
2023-05-19T08:52:26
0
{}
13lpgdf
false
null
t3_13lpgdf
/r/LocalLLaMA/comments/13lpgdf/using_igpu_for_llama_models/
false
false
default
0
null
q4_0, q5_1?
3
New learner here! What do these mean in the context of models? I see them all over the place, but I've never seen any explanation. EDIT: Thanks a lot everybody, now I understand!
2023-05-19T09:51:33
https://www.reddit.com/r/LocalLLaMA/comments/13lqua2/q4_0_q5_1/
qwerty44279
self.LocalLLaMA
2023-05-19T13:24:31
0
{}
13lqua2
false
null
t3_13lqua2
/r/LocalLLaMA/comments/13lqua2/q4_0_q5_1/
false
false
self
3
null
Can you use LoRA unsupervised?
1
I’d now like to add a ‘concept’ or a set of information from a Knowledge Base to an instruct-tuned model. I could set up a series of prompts, but ultimately I don’t want it to reproduce the information, I want the model to understand it and use it creatively. Is there a way to use LoRA to teach it a concept in an unsupervised way? Like how LLaMA was trained in the first place? All the tutorials I see are prompt-based.
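For what it's worth, the "unsupervised" setup is just causal language modelling on raw text - the same next-token objective used in pretraining - and a LoRA adapter attaches exactly as in the prompt-based tutorials; only the dataset changes. A minimal data-prep sketch, with the model name and file path as placeholders:

```python
# Raw-text ("unsupervised") preparation: no prompts, no separate labels -- the text is
# both input and target, packed into fixed-length blocks for causal LM training.
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
raw = load_dataset("text", data_files={"train": "knowledge_base.txt"})["train"]

block = 512
def pack(batch):
    ids = tok("\n\n".join(batch["text"]))["input_ids"]
    ids = ids[: (len(ids) // block) * block]                 # drop the ragged tail
    chunks = [ids[i:i + block] for i in range(0, len(ids), block)]
    return {"input_ids": chunks, "labels": [c[:] for c in chunks]}

lm_data = raw.map(pack, batched=True, remove_columns=["text"])
# From here, wrap the base model with a LoRA config (peft) and train with the usual
# Trainer + DataCollatorForLanguageModeling(mlm=False) -- exactly as for instruction
# data; only the dataset differs.
```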
2023-05-19T11:08:02
https://www.reddit.com/r/LocalLLaMA/comments/13lsbzj/can_you_use_lora_unsupervised/
amemingfullife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lsbzj
false
null
t3_13lsbzj
/r/LocalLLaMA/comments/13lsbzj/can_you_use_lora_unsupervised/
false
false
self
1
null
Get my GPU involved?
0
I'm using koboldcpp-1.23.1 to run GGML models, and my GPU is never involved. Ryzen 7 5700U. Is this something I can fix? Am I doing something obviously wrong?
2023-05-19T11:25:28
https://www.reddit.com/r/LocalLLaMA/comments/13lsopu/get_my_gpu_involved/
Innomen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lsopu
false
null
t3_13lsopu
/r/LocalLLaMA/comments/13lsopu/get_my_gpu_involved/
false
false
self
0
null
Seeking clarification about LLMs, Tools, etc. for developers.
20
As an Android developer (\~4 years professionally) and "AI" enthusiast, I have long wanted to study ML/big data to enter this world, but was held back by the steep learning curve. I've been lurking and using (basic stuff) Stable Diffusion (A1111) since August, and LLMs since about March; without experience in Python and the ML libraries, it is difficult to comprehend (by reading source code) all that is going on. ChatGPT opened a whole new world of possibilities for people like me, and discovering the world of open-source, local models made me quit my job (\~two weeks ago) to pursue my passion, with many ideas for this sector that I wish to execute (AGI is the goal). Note: at my workplace I was the guy (preacher) of the coming of our AI Overlords, the go-to guy for questions about these topics; I even did a meetup in the style of "We have no MOAT" with deep explanations about NNs, Transformers, LLMs, open-source tools, possibilities and the future. The purpose of this note is 1. to connect with like-minded people, 2. I did question ChatGPT about this before posting. "What is my purpose?": understand the connections between tools, frameworks, and concepts; master the use of tools without blindly following tutorials before deep-diving into LangChain and similar frameworks. I want not only to write intricate prompts and finetune through a UI, but also to understand the concepts that allow them to exist. Edit: added short answers (21/05/2023) Questions/facts (that need clarification): * GGML is a format for LLMs created to run inference (text completion) on CPUs. * It's a way of converting the float parameters to integers, resulting in less precise calculations, but precise enough for LLMs (not only transformers) — a toy sketch of the idea is included after this post. * GPTQ is a format that can be used by GPUs; how do you actually use it? * Same idea as GGML for the model weights, but instead of rounding floats to the nearest int it does the conversion in a smarter way. * What about ".safetensors" — how do you use these models? * A format to store data (tensors). * What is the difference between them? Why different formats? * GGML-formatted models are runnable by llama.cpp - **clarification needed**; safetensors - **clarification needed** * llama.cpp is a C++ implementation of LLaMA by Facebook — why is it needed? * It's not from Facebook; it's optimized to run GGML models on CPUs. * Oobabooga/KoboldAI is a UI wrapper for llama.cpp and \_\_\_\_ ? * Oobabooga is a UI for running many types of LLMs, including llama.cpp. * koboldcpp is a fork of llama.cpp bundled with a (KoboldAI-style) UI. * ~~KoboldAI vs Oobabooga: it seems they do exactly the same thing with a different UI.~~ * Where, how and why are CUDA, OpenBLAS and CLBlast used, and how are they related to each other? * They are all libraries of linear-algebra subroutines optimized for different hardware: cuBLAS - NVIDIA; clBLAS - OpenCL (many GPUs, not only NVIDIA); CLBlast - a tuned OpenCL BLAS. * How does an agent actually run code (tools)? How does "Action: use tool X" become X() — is it basic string manipulation over the output? * More research needed, will answer ASAP. * PyTorch, TensorFlow — what are they, and why are they needed for Oobabooga? * PyTorch is the Python wrapper for the ML library "Torch", a big framework for various ML tasks, not only LLMs. * TensorFlow is Google Brain's counterpart to PyTorch and is not compatible with it. Parallelism is easier to implement in PyTorch than in TensorFlow. * "Install GPTQ-for-LLaMa and the monkey patch" — what is the purpose of this?
* Since GPTQ-for-LLaMa had several breaking updates that made older models incompatible with newer versions of GPTQ, model cards sometimes refer to a specific version of GPTQ-for-LLaMa. So if a model's notes or a tutorial tell you to install GPTQ-for-LLaMa with a certain patch, it probably refers to a commit; if you know git, you can check out a specific commit hash or feature branch. Credit to /u/_underlines_ and /u/Evening_Ad6637 for the answers so far. I will edit this post as more questions come up. I hope this community can help answer my questions and correct my facts, as this post could be a starting point for many.
2023-05-19T11:30:49
https://www.reddit.com/r/LocalLLaMA/comments/13lssoi/seeking_clarification_about_llms_tools_etc_for/
Clicker7
self.LocalLLaMA
2023-05-21T11:02:07
0
{}
13lssoi
false
null
t3_13lssoi
/r/LocalLLaMA/comments/13lssoi/seeking_clarification_about_llms_tools_etc_for/
false
false
self
20
null
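To make the GGML quantization bullet above concrete, here is a toy Python/NumPy sketch of symmetric per-block quantization — roughly the idea behind q4_0. It is an illustration only: the real GGML formats pack two 4-bit values per byte in specific block layouts, and the "_1" variants (q4_1, q5_1) additionally store a per-block offset, so this is not the on-disk format.

```python
import numpy as np

BLOCK = 32  # GGML quantizes weights in small blocks, each with its own scale

def quantize_block(block: np.ndarray, bits: int = 4):
    """Symmetric quantization: one float scale + tiny signed ints per block."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    scale = float(np.max(np.abs(block))) / qmax
    if scale == 0.0:
        scale = 1.0                                 # all-zero block
    q = np.clip(np.round(block / scale), -qmax - 1, qmax).astype(np.int8)
    return scale, q

def dequantize_block(scale: float, q: np.ndarray) -> np.ndarray:
    return (scale * q).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=BLOCK).astype(np.float32)   # a fake weight block
scale, q = quantize_block(w, bits=4)
w_hat = dequantize_block(scale, q)

print("storage: 1 float scale + 32 x 4-bit ints instead of 32 x 32-bit floats")
print("max abs error:", float(np.max(np.abs(w - w_hat))))
```

The point is that each weight costs about 4–5 bits plus a shared scale, which is why a 7B model shrinks from ~13 GB in fp16 to roughly 4 GB at 4-bit, and why the higher-bit variants (q5_1, q8_0) trade file size for accuracy.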
Training Vicuna based on custom text/web pages
20
Hello, I currently use Vicuna 13B (built by applying the delta weights to LLaMA). It works well. However, I am interested in training the model on / feeding it custom texts (my own private documents). What do you think is the best way to do this? I currently have Oobabooga. I am a bit new to this, so I'm not sure how to start (a tutorial would be handy if you know a good one). Thank you. (A retrieval-based sketch, as one alternative to training, follows after this post.)
2023-05-19T12:26:47
https://www.reddit.com/r/LocalLLaMA/comments/13lu13k/training_vicuna_based_on_custom_textweb_pages/
guyromb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lu13k
false
null
t3_13lu13k
/r/LocalLLaMA/comments/13lu13k/training_vicuna_based_on_custom_textweb_pages/
false
false
self
20
null
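For private documents, a common alternative to actually fine-tuning is retrieval: embed the documents, pull the most relevant chunks for each question, and stuff them into the prompt sent to Vicuna. Below is a minimal sketch of that idea using sentence-transformers; the embedding model, chunking, and prompt wording are just illustrative choices, and the final prompt would go to whatever backend is running the model (e.g. Oobabooga's API).

```python
# pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# chunks would come from splitting your private documents into ~200-500 token pieces
chunks = ["First paragraph of an internal document ...",
          "Another section describing the refund policy ..."]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def top_k(question: str, k: int = 2):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q                   # cosine similarity (vectors are normalized)
    best = np.argsort(-scores)[:k]
    return [chunks[i] for i in best]

question = "What is the refund policy?"
context = "\n".join(top_k(question))
prompt = (f"Use the context to answer the question.\n\n"
          f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
# `prompt` is then sent to the local Vicuna instance
```

If the goal really is to change the weights themselves, LoRA fine-tuning (as in the sketch a few posts above) is the usual route; full fine-tuning of a 13B model is rarely practical on consumer hardware.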
Run MPT-7B-Instruct on Google Colab?
7
I wanted to know whether anyone has tried to run MPT-7B-Instruct on Google Colab. If so, can they please share the code? I keep running into an out-of-memory ("tried to allocate ...") error. If it works on another platform instead (e.g. Kaggle), how? (A half-precision loading sketch follows after this post.)
2023-05-19T13:48:58
https://www.reddit.com/r/LocalLLaMA/comments/13lvzkc/run_mpt7binstruct_on_google_colab/
AdRealistic03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13lvzkc
false
null
t3_13lvzkc
/r/LocalLLaMA/comments/13lvzkc/run_mpt7binstruct_on_google_colab/
false
false
self
7
null
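Not a verified Colab recipe, but the usual starting point is to load the model in half precision with MPT's custom code enabled. The sketch below assumes a GPU runtime with roughly 16 GB of VRAM (a T4 is borderline for a 7B model in fp16) and uses an approximation of the instruct prompt format; if it still runs out of memory, 8-bit loading via bitsandbytes (`load_in_8bit=True`) is the usual fallback.

```python
# pip install transformers accelerate einops   (MPT's remote code needs einops)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-7b-instruct"
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # tokenizer MPT was trained with

model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,   # ~13 GB of weights instead of ~26 GB in fp32
    trust_remote_code=True,      # MPT ships its own modelling code on the Hub
    low_cpu_mem_usage=True,      # avoid building a second full copy in system RAM
)
model.to("cuda")
model.eval()

prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\nExplain attention in one sentence.\n\n### Response:\n")
inputs = tok(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
```

The "tried to allocate" error on the free tier usually means the weights are being materialized in the ~12 GB of system RAM before reaching the GPU, which `low_cpu_mem_usage` (via accelerate) is meant to avoid — but none of this is guaranteed to fit, so treat it as a starting point rather than a known-good notebook.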